Google BigQuery
Server Details
The BigQuery remote MCP server is a fully managed service that uses the Model Context Protocol to connect AI applications and LLMs to BigQuery data sources. It provides secure, standardized tools for AI agents to list datasets and tables, retrieve schemas, generate and execute SQL queries through natural language, and analyze data—enabling direct access to enterprise analytics data without requiring manual SQL coding.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
6 tools

execute_sql (A · Destructive)
Run a SQL query in the project and return the result. Prefer the execute_sql_readonly tool if possible.
This tool can execute any query that BigQuery supports, including:
- SQL queries (SELECT, INSERT, UPDATE, DELETE, CREATE, etc.)
- AI/ML functions such as AI.FORECAST, ML.EVALUATE, and ML.PREDICT
- Any other query that BigQuery supports
Example Queries:
-- Insert data into a table.
INSERT INTO my_project.my_dataset.my_table (name, age)
VALUES ('Alice', 30);
-- Create a table.
CREATE TABLE my_project.my_dataset.my_table (
name STRING,
age INT64);
-- Delete data from a table.
DELETE FROM my_project.my_dataset.my_table WHERE name = 'Alice';
-- Create a dataset.
CREATE SCHEMA my_project.my_dataset OPTIONS (location = 'US');
-- Drop a table.
DROP TABLE my_project.my_dataset.my_table;
-- Drop a dataset.
DROP SCHEMA my_project.my_dataset;
-- Create a model.
CREATE OR REPLACE MODEL my_project.my_dataset.my_model
OPTIONS (
  model_type = 'LINEAR_REG',
  ls_init_learn_rate = 0.15,
  l1_reg = 1,
  max_iterations = 5,
  data_split_method = 'SEQ',
  data_split_eval_fraction = 0.3,
  data_split_col = 'timestamp') AS
SELECT col1, col2, timestamp, label FROM my_project.my_dataset.my_table;
Queries executed using the execute_sql tool will have the job label
goog-mcp-server: true automatically set. Queries are charged to the project specified
in the project_id field.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Required. The query to execute in the form of a GoogleSQL query. | |
| dryRun | No | Optional. If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false. | |
| projectId | Yes | Required. Project that will be used for query execution and billing. | |
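As a sketch of how an agent might use `dryRun` to gate expensive queries before running them for real, the helpers below assemble the tool-call arguments and convert a dry-run byte estimate into an approximate on-demand cost. The helper names and the $6.25-per-TiB rate are assumptions for illustration, not part of this server; check your project's actual pricing.

```python
def build_execute_sql_args(query: str, project_id: str, dry_run: bool = False) -> dict:
    """Assemble the arguments dict for an execute_sql tool call (hypothetical helper)."""
    args = {"query": query, "projectId": project_id}
    if dry_run:
        # dryRun=True: BigQuery validates the query and returns byte estimates
        # instead of running the job.
        args["dryRun"] = True
    return args


def estimate_on_demand_cost(total_bytes_processed: int, usd_per_tib: float = 6.25) -> float:
    """Convert a dry-run totalBytesProcessed estimate into approximate USD.

    The per-TiB rate is an assumption; on-demand pricing varies by region.
    """
    return total_bytes_processed / 2**40 * usd_per_tib


args = build_execute_sql_args("SELECT 1", "my_project", dry_run=True)
```

An agent could run the dry-run call first, inspect `totalBytesProcessed` in the response, and only issue the real call when the estimate is acceptable.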
Output Schema
| Name | Required | Description |
|---|---|---|
| rows | No | An object with as many results as can be contained within the maximum permitted reply size. To get any additional rows, you can call GetQueryResults and specify the jobReference returned above. |
| errors | No | Output only. The first errors or warnings encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful. For more information about error messages, see [Error messages](https://cloud.google.com/bigquery/docs/error-messages). |
| schema | No | The schema of the results. Present only when the query completes successfully. |
| queryId | No | Output only. The ID of the query. |
| jobComplete | No | Whether the query has completed or not. If rows or totalRows are present, this will always be true. If this is false, totalRows will not be available. |
| totalSlotMs | No | Output only. Number of slot ms the user is actually billed for. |
| totalBytesBilled | No | Output only. The total number of bytes billed for the query. Only applies if the project is configured to use on-demand pricing. |
| numDmlAffectedRows | No | Output only. The number of rows affected by a DML statement. |
| totalBytesProcessed | No | Output only. The total number of bytes processed for this query. |
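Per the output schema above, `totalRows` is unavailable while `jobComplete` is false, and `errors` may be populated even for jobs that did not fail outright. A minimal sketch of how a client might triage a response (the function name and return strings are illustrative assumptions):

```python
def summarize_result(result: dict) -> str:
    """Triage an execute_sql response using the fields in the output schema."""
    if result.get("errors"):
        # First error/warning encountered; does not necessarily mean failure,
        # but surfaces it for the caller to inspect.
        return f"failed: {result['errors'][0].get('message', 'unknown error')}"
    if not result.get("jobComplete", False):
        # totalRows is not available until the job completes.
        return "pending: poll again before reading rows"
    rows = result.get("rows", [])
    return f"complete: {len(rows)} row(s) returned"
```

If `rows` is truncated by the reply-size limit, the schema notes that additional rows can be fetched via GetQueryResults with the returned job reference.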
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (destructiveHint: true, readOnlyHint: false). The description adds valuable operational context: automatic job labeling ('goog-mcp-server: true'), billing attribution ('charged to the project'), and specific support for AI/ML functions and DDL operations beyond standard SQL.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is lengthy due to six SQL examples, but every section earns its place: the sibling distinction is front-loaded, capability categories are enumerated, syntax examples prevent parameter errors, and operational metadata (billing/labels) concludes appropriately. Structure uses clear visual breaks.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (mentioned in context signals) and comprehensive annotations, the description appropriately focuses on tool selection logic, capability scope, and operational side effects rather than return-value documentation. This complex tool is well covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds six concrete query examples demonstrating valid SQL syntax for the `query` parameter, but does not augment `projectId` or `dryRun` beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific action 'Run a SQL query' and resource (BigQuery project), then explicitly distinguishes from sibling `execute_sql_readonly` by stating 'Prefer the `execute_sql_readonly` tool if possible,' clearly signaling this tool handles write operations while the sibling handles reads.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit alternative naming (`execute_sql_readonly`) and preference guidance ('if possible'), and lists destructive capabilities (INSERT, DELETE, DROP) implying when to use this tool. However, it does not explicitly state the rule 'use this for writes, use readonly for SELECT-only queries,' leaving slight ambiguity in the 'if possible' phrasing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
execute_sql_readonly (A · Read-only · Idempotent)
Run a read-only SQL query in the project and return the result. Prefer this tool over
execute_sql if possible.
This tool is restricted to only SELECT statements. INSERT, UPDATE, and DELETE
statements and stored procedures aren't allowed. If the query doesn't include a SELECT
statement, an error is returned. For information on creating queries, see the GoogleSQL
documentation.
Example Queries:
-- Count the number of penguins on each island.
SELECT island, COUNT(*) AS population
FROM bigquery-public-data.ml_datasets.penguins
GROUP BY island
-- Evaluate a BigQuery ML model.
SELECT * FROM ML.EVALUATE(MODEL my_dataset.my_model)
-- Evaluate a BigQuery ML model on custom data.
SELECT * FROM ML.EVALUATE(MODEL my_dataset.my_model,
  (SELECT * FROM my_dataset.my_table))
-- Predict using a BigQuery ML model.
SELECT * FROM ML.PREDICT(MODEL my_dataset.my_model,
  (SELECT * FROM my_dataset.my_table))
-- Forecast data using AI.FORECAST
SELECT * FROM AI.FORECAST(TABLE project.dataset.my_table, data_col => 'num_trips',
timestamp_col => 'date', id_cols => ['usertype'], horizon => 30)
Queries executed using the execute_sql_readonly tool will have the job label
goog-mcp-server: true automatically set. Queries are charged to the project specified
in the project_id field.
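Since this tool errors on anything that is not a SELECT, an agent can cheaply pre-route queries between the two siblings before calling the server. The guard below is a deliberately naive sketch (not a SQL parser, and not how the server itself validates): it only checks the leading keyword, treating `WITH` as read-only since CTE queries begin with it.

```python
def looks_read_only(query: str) -> bool:
    """Naive pre-check: does the query begin with SELECT or WITH?

    The server performs the authoritative validation; this heuristic only
    helps route between execute_sql_readonly and execute_sql up front.
    """
    stripped = query.strip()
    if not stripped:
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word in {"SELECT", "WITH"}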
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Required. The query to execute in the form of a GoogleSQL query. | |
| dryRun | No | Optional. If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false. | |
| projectId | Yes | Required. Project that will be used for query execution and billing. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| rows | No | An object with as many results as can be contained within the maximum permitted reply size. To get any additional rows, you can call GetQueryResults and specify the jobReference returned above. |
| errors | No | Output only. The first errors or warnings encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful. For more information about error messages, see [Error messages](https://cloud.google.com/bigquery/docs/error-messages). |
| schema | No | The schema of the results. Present only when the query completes successfully. |
| queryId | No | Output only. The ID of the query. |
| jobComplete | No | Whether the query has completed or not. If rows or totalRows are present, this will always be true. If this is false, totalRows will not be available. |
| totalSlotMs | No | Output only. Number of slot ms the user is actually billed for. |
| totalBytesBilled | No | Output only. The total number of bytes billed for the query. Only applies if the project is configured to use on-demand pricing. |
| numDmlAffectedRows | No | Output only. The number of rows affected by a DML statement. |
| totalBytesProcessed | No | Output only. The total number of bytes processed for this query. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent status, the description adds significant operational context: automatic job labeling (`goog-mcp-server: true`), billing destination ('charged to the project specified'), and validation behavior (error on non-SELECT). Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well front-loaded with critical constraints and preferences stated immediately. Examples, while lengthy, are justified for a SQL tool. No wasted sentences. Minor imprecision: references `project_id` field while schema uses `projectId`.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage including syntax restrictions, billing implications, observability labels, and documentation references. Given the presence of output schema and strong annotations, the description provides complete operational context for a read-only query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). The description adds substantial value through five diverse example queries showing ML evaluation and AI forecasting capabilities, enriching the semantic understanding of the `query` parameter beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb+resource ('Run a read-only SQL query') and explicitly distinguishes from sibling tool `execute_sql` by stating 'Prefer this tool over `execute_sql` if possible' and noting the read-only restriction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use ('Prefer this tool over `execute_sql`', restricted to `SELECT` statements) and implicitly defines when to use the alternative (when needing INSERT/UPDATE/DELETE). Clear constraints prevent misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dataset_info (B · Read-only · Idempotent)
Get metadata information about a BigQuery dataset.
| Name | Required | Description | Default |
|---|---|---|---|
| datasetId | Yes | Required. Dataset ID of the dataset request. | |
| projectId | Yes | Required. Project ID of the dataset request. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | No | Output only. The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field. When creating a new dataset, leave this field blank, and instead specify the datasetId field. |
| etag | No | Output only. A hash of the resource. |
| kind | No | Output only. The resource type. |
| tags | No | Output only. Tags for the dataset. To provide tags as inputs, use the `resourceTags` field. |
| type | No | Output only. Same as `type` in `ListFormatDataset`. The type of the dataset, one of: * DEFAULT - only accessible by owner and authorized accounts, * PUBLIC - accessible by everyone, * LINKED - linked dataset, * EXTERNAL - dataset with definition in external metadata catalog, * BIGLAKE_ICEBERG - a Biglake dataset accessible through the Iceberg API, * BIGLAKE_HIVE - a Biglake dataset accessible through the Hive API. |
| access | No | Optional. An array of objects that define dataset access for one or more entities. You can set this property when inserting or updating a dataset in order to control who is allowed to access the data. If unspecified at dataset creation time, BigQuery adds default dataset access for the following entities: access.specialGroup: projectReaders; access.role: READER; access.specialGroup: projectWriters; access.role: WRITER; access.specialGroup: projectOwners; access.role: OWNER; access.userByEmail: [dataset creator email]; access.role: OWNER; If you patch a dataset, then this field is overwritten by the patched dataset's access field. To add entities, you must supply the entire existing access array in addition to any new entities that you want to add. |
| labels | No | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See [Creating and Updating Dataset Labels](https://cloud.google.com/bigquery/docs/creating-managing-labels#creating_and_updating_dataset_labels) for more information. |
| location | No | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
| selfLink | No | Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource. |
| description | No | Optional. A user-friendly description of the dataset. |
| creationTime | No | Output only. The time when this dataset was created, in milliseconds since the epoch. |
| friendlyName | No | Optional. A descriptive name for the dataset. |
| resourceTags | No | Optional. The [tags](https://cloud.google.com/bigquery/docs/tags) attached to this dataset. Tag keys are globally unique. Tag key is expected to be in the namespaced format, for example "123456789012/environment" where 123456789012 is the ID of the parent organization or project resource for this tag key. Tag value is expected to be the short name, for example "Production". See [Tag definitions](https://cloud.google.com/iam/docs/tags-access-control#definitions) for more details. |
| restrictions | No | Optional. Output only. Restriction config for all tables and dataset. If set, restrict certain accesses on the dataset and all its tables based on the config. See [Data egress](https://cloud.google.com/bigquery/docs/analytics-hub-introduction#data_egress) for more details. |
| satisfiesPzi | No | Output only. Reserved for future use. |
| satisfiesPzs | No | Output only. Reserved for future use. |
| catalogSource | No | Output only. The origin of the dataset, one of: * (Unset) - Native BigQuery Dataset * BIGLAKE - Dataset is backed by a namespace stored natively in Biglake |
| datasetReference | No | Required. A reference that identifies the dataset. |
| defaultCollation | No | Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified. A change to this field affects only tables created afterwards, and does not alter the existing tables. The following values are supported: * 'und:ci': undetermined locale, case insensitive. * '': empty string. Default to case-sensitive behavior. |
| lastModifiedTime | No | Output only. The date when this dataset was last modified, in milliseconds since the epoch. |
| isCaseInsensitive | No | Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. By default, this is FALSE, which means the dataset and its table names are case-sensitive. This field does not affect routine references. |
| maxTimeTravelHours | No | Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). The default value is 168 hours if this is not set. |
| defaultRoundingMode | No | Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified. |
| linkedDatasetSource | No | Optional. The source dataset reference when the dataset is of type LINKED. For all other dataset types it is not set. This field cannot be updated once it is set. Any attempt to update this field using Update and Patch API Operations will be ignored. |
| storageBillingModel | No | Optional. Updates storage_billing_model for the dataset. |
| linkedDatasetMetadata | No | Output only. Metadata about the LinkedDataset. Filled out when the dataset type is LINKED. |
| defaultTableExpirationMs | No | Optional. The default lifetime of all tables in the dataset, in milliseconds. The minimum lifetime value is 3600000 milliseconds (one hour). To clear an existing default expiration with a PATCH request, set to 0. Once this property is set, all newly-created tables in the dataset will have an expirationTime property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the expirationTime for a given table is reached, that table will be deleted automatically. If a table's expirationTime is modified or removed before the table expires, or if you provide an explicit expirationTime when creating a table, that value takes precedence over the default expiration time indicated by this property. |
| externalDatasetReference | No | Optional. Reference to a read-only external dataset defined in data catalogs outside of BigQuery. Filled out when the dataset type is EXTERNAL. |
| defaultPartitionExpirationMs | No | This default partition expiration, expressed in milliseconds. When new time-partitioned tables are created in a dataset where this property is set, the table will inherit this value, propagated as the `TimePartitioning.expirationMs` property on the new table. If you set `TimePartitioning.expirationMs` explicitly when creating a table, the `defaultPartitionExpirationMs` of the containing dataset is ignored. When creating a partitioned table, if `defaultPartitionExpirationMs` is set, the `defaultTableExpirationMs` value is ignored and the table will not inherit a table expiration deadline. |
| externalCatalogDatasetOptions | No | Optional. Options defining open source compatible datasets living in the BigQuery catalog. Contains metadata of open source database, schema or namespace represented by the current dataset. |
| defaultEncryptionConfiguration | No | The default encryption key for all tables in the dataset. After this property is set, the encryption key of all newly-created tables in the dataset is set to this value unless the table creation request or query explicitly overrides the key. |
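Several timestamp fields in this metadata (`creationTime`, `lastModifiedTime`, `defaultTableExpirationMs`) are expressed in milliseconds, and the API may return them as strings. A small conversion sketch, assuming those field semantics as documented above:

```python
from datetime import datetime, timezone


def ms_epoch_to_iso(ms) -> str:
    """Convert a milliseconds-since-epoch value (string or int), as returned
    in creationTime / lastModifiedTime, to an ISO-8601 UTC timestamp."""
    return datetime.fromtimestamp(int(ms) / 1000, tz=timezone.utc).isoformat()


def expiration_hours(default_table_expiration_ms) -> float:
    """Express defaultTableExpirationMs as hours (the documented minimum is
    3600000 ms, i.e. one hour)."""
    return int(default_table_expiration_ms) / 3_600_000
```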
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, establishing this is a safe read operation. The description adds 'metadata' context but does not disclose additional behavioral traits like caching, throttling, or what specific metadata properties are returned (though output schema exists). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no redundancy. Appropriately sized for a simple two-parameter read operation. Front-loaded with the action verb. Could benefit from slightly more detail to distinguish from siblings without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage, presence of output schema, and complete annotations covering safety properties, the description is sufficient for tool selection. The addition of 'metadata' clarifies the return type adequately given the structured data provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both projectId and datasetId documented. Description does not add parameter semantics beyond the schema, but baseline 3 is appropriate given complete schema documentation. No parameter formats, examples, or validation rules added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' and resource 'metadata information about a BigQuery dataset'. Distinguishes implicitly from sibling list_dataset_ids by focusing on specific dataset metadata rather than listing, and from get_table_info by specifying 'dataset' level. Lacks explicit contrast with alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus list_dataset_ids (which lists datasets) or get_table_info. Does not mention prerequisites like dataset existence or permission requirements despite being a targeted lookup operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_table_info (B · Read-only · Idempotent)
Get metadata information about a BigQuery table.
| Name | Required | Description | Default |
|---|---|---|---|
| tableId | Yes | Required. Table ID of the table request. | |
| datasetId | Yes | Required. Dataset ID of the table request. | |
| projectId | Yes | Required. Project ID of the table request. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | No | Output only. An opaque ID uniquely identifying the table. |
| etag | No | Output only. A hash of this resource. |
| kind | No | The type of resource ID. |
| type | No | Output only. Describes the table type. The following values are supported: * `TABLE`: A normal BigQuery table. * `VIEW`: A virtual table defined by a SQL query. * `EXTERNAL`: A table that references data stored in an external storage system, such as Google Cloud Storage. * `MATERIALIZED_VIEW`: A precomputed view defined by a SQL query. * `SNAPSHOT`: An immutable BigQuery table that preserves the contents of a base table at a particular time. See additional information on [table snapshots](https://cloud.google.com/bigquery/docs/table-snapshots-intro). The default value is `TABLE`. |
| view | No | Optional. The view definition. |
| labels | No | The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key. |
| schema | No | Optional. Describes the schema of this table. |
| numRows | No | Output only. The number of rows of data in this table, excluding any data in the streaming buffer. |
| location | No | Output only. The geographic location where the table resides. This value is inherited from the dataset. |
| numBytes | No | Output only. The size of this table in logical bytes, excluding any data in the streaming buffer. |
| replicas | No | Optional. Output only. Table references of all replicas currently active on the table. |
| selfLink | No | Output only. A URL that can be used to access this resource again. |
| clustering | No | Clustering specification for the table. Must be specified with time-based partitioning, data in the table will be first partitioned and subsequently clustered. |
| description | No | Optional. A user-friendly description of this table. |
| creationTime | No | Output only. The time when this table was created, in milliseconds since the epoch. |
| friendlyName | No | Optional. A descriptive name for this table. |
| maxStaleness | No | Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type. |
| resourceTags | No | Optional. The [tags](https://cloud.google.com/bigquery/docs/tags) attached to this table. Tag keys are globally unique. Tag key is expected to be in the namespaced format, for example "123456789012/environment" where 123456789012 is the ID of the parent organization or project resource for this tag key. Tag value is expected to be the short name, for example "Production". See [Tag definitions](https://cloud.google.com/iam/docs/tags-access-control#definitions) for more details. |
| restrictions | No | Optional. Output only. Restriction config for table. If set, restrict certain accesses on the table based on the config. See [Data egress](https://cloud.google.com/bigquery/docs/analytics-hub-introduction#data_egress) for more details. |
| numPartitions | No | Output only. The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes. |
| expirationTime | No | Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables. |
| tableReference | No | Required. Reference describing the ID of this table. |
| cloneDefinition | No | Output only. Contains information about the clone. This value is set via the clone operation. |
| streamingBuffer | No | Output only. Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer. |
| defaultCollation | No | Optional. Defines the default collation specification of new STRING fields in the table. During table creation or update, if a STRING field is added to this table without explicit collation specified, then the table inherits the table default collation. A change to this field affects only fields added afterwards, and does not alter the existing fields. The following values are supported: * 'und:ci': undetermined locale, case insensitive. * '': empty string. Default to case-sensitive behavior. |
| lastModifiedTime | No | Output only. The time when this table was last modified, in milliseconds since the epoch. |
| managedTableType | No | Optional. If set, overrides the default managed table type configured in the dataset. |
| materializedView | No | Optional. The materialized view definition. |
| numLongTermBytes | No | Output only. The number of logical bytes in the table that are considered "long-term storage". |
| numPhysicalBytes | No | Output only. The physical size of this table in bytes. This includes storage used for time travel. |
| tableConstraints | No | Optional. The table's primary key and foreign key information. |
| timePartitioning | No | If specified, configures time-based partitioning for this table. |
| rangePartitioning | No | If specified, configures range partitioning for this table. |
| snapshotDefinition | No | Output only. Contains information about the snapshot. This value is set via snapshot creation. |
| defaultRoundingMode | No | Optional. Defines the default rounding mode specification of new decimal fields (NUMERIC OR BIGNUMERIC) in the table. During table creation or update, if a decimal field is added to this table without an explicit rounding mode specified, then the field inherits the table default rounding mode. Changing this field doesn't affect existing fields. |
| partitionDefinition | No | Optional. The partition information for all table formats, including managed partitioned tables, hive partitioned tables, iceberg partitioned, and metastore partitioned tables. This field is only populated for metastore partitioned tables. For other table formats, this is an output only field. |
| biglakeConfiguration | No | Optional. Specifies the configuration of a BigQuery table for Apache Iceberg. |
| numTotalLogicalBytes | No | Output only. Total number of logical bytes in the table or materialized view. |
| tableReplicationInfo | No | Optional. Table replication info for table created `AS REPLICA` DDL like: `CREATE MATERIALIZED VIEW mv1 AS REPLICA OF src_mv` |
| numActiveLogicalBytes | No | Output only. Number of logical bytes that are less than 90 days old. |
| numTotalPhysicalBytes | No | Output only. The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes. |
| materializedViewStatus | No | Output only. The materialized view status. |
| numActivePhysicalBytes | No | Output only. Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes. |
| requirePartitionFilter | No | Optional. If set to true, queries over this table must specify a partition filter that can be used for partition elimination. |
| encryptionConfiguration | No | Custom encryption configuration (e.g., Cloud KMS keys). |
| numCurrentPhysicalBytes | No | Output only. Number of physical bytes used by current live data storage. This data is not kept in real time, and might be delayed by a few seconds to a few minutes. |
| numLongTermLogicalBytes | No | Output only. Number of logical bytes that are more than 90 days old. |
| numLongTermPhysicalBytes | No | Output only. Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes. |
| externalDataConfiguration | No | Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. |
| numTimeTravelPhysicalBytes | No | Output only. Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes. |
| externalCatalogTableOptions | No | Optional. Options defining open source compatible table. |
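The fields above mirror the BigQuery Table REST resource, where 64-bit byte counts are serialized as strings in JSON. A minimal sketch of reading a few of these fields from a get_table_info response (the values below are made up for illustration):

```python
# Hypothetical, trimmed get_table_info response; the field names follow the
# BigQuery Table resource documented above, but the values are invented.
response = {
    "tableReference": {
        "projectId": "my_project",
        "datasetId": "my_dataset",
        "tableId": "my_table",
    },
    "numTotalLogicalBytes": "1048576",   # int64 fields arrive as strings
    "requirePartitionFilter": True,
    "defaultCollation": "und:ci",
}

# Output-only fields (e.g. numTotalLogicalBytes) are informational only;
# settable fields (e.g. requirePartitionFilter) can be changed on update.
total_mib = int(response["numTotalLogicalBytes"]) / (1024 * 1024)
print(f"{response['tableReference']['tableId']}: {total_mib:.1f} MiB")
```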
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, but the description adds no behavioral context beyond this safety profile. It does not specify what metadata is returned (schema, properties, labels), error behavior if the table is missing, or any BigQuery-specific access requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero redundancy; every word earns its place. The description is front-loaded with the action and resource, appropriately terse for a straightforward metadata retrieval tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (covering return values), rich annotations (covering safety), and complete parameter documentation, the description provides minimal but adequate coverage. However, it lacks error handling context or BigQuery-specific usage notes expected for a 3-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured documentation already carries the parameter semantics. The description adds no parameter guidance (format examples, constraint relationships), warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a clear verb ('Get') and resource ('BigQuery table') with specific scope ('metadata information'). Implicitly distinguishes from siblings like execute_sql (which runs queries) and list_table_ids (which enumerates), though it does not explicitly clarify when to choose this over get_dataset_info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like get_dataset_info or list_table_ids, nor does it mention prerequisites such as requiring the table to exist or having BigQuery read permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_dataset_ids (Grade: A) Read-only, Idempotent
List BigQuery dataset IDs in a Google Cloud project.
| Name | Required | Description | Default |
|---|---|---|---|
| projectId | Yes | Required. Project ID of the dataset request. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| datasets | No | The datasets that matched the request. |
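An MCP client invokes this tool through the protocol's standard `tools/call` JSON-RPC request. A minimal sketch of building that payload (the project ID is hypothetical):

```python
import json

# Hypothetical tools/call request for list_dataset_ids; the envelope follows
# the Model Context Protocol's JSON-RPC 2.0 framing.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_dataset_ids",
        "arguments": {"projectId": "my-project"},  # the single required parameter
    },
}
payload = json.dumps(request)
```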
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds domain context (BigQuery, Google Cloud) not in annotations, but omits behavioral details like pagination, permission requirements, or error handling for invalid project IDs.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 9 words. Action verb front-loaded, zero redundancy. Efficiently conveys exact scope and resource type.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple read operation: output schema exists (so return structure needn't be described), annotations cover behavioral safety, and 100% schema coverage documents the single required parameter. Description successfully identifies the specific resource.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with projectId well-documented. Description mentions 'Google Cloud project' which aligns with the parameter, but adds no additional semantic detail about format, validation, or constraints beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' + resource 'BigQuery dataset IDs' + scope 'Google Cloud project'. Distinguishes from sibling list_table_ids (tables vs datasets) and get_dataset_info (IDs vs full metadata) by specifying it returns IDs only.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through specificity ('IDs' suggests use when only identifiers needed, not full metadata), but lacks explicit when-to-use guidance or comparison to siblings like get_dataset_info or list_table_ids.
list_table_ids (Grade: B) Read-only, Idempotent
List table ids in a BigQuery dataset.
| Name | Required | Description | Default |
|---|---|---|---|
| datasetId | Yes | Required. Dataset ID of the table request. | |
| projectId | Yes | Required. Project ID of the table request. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| tables | No | The tables that matched the request. |
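Because both parameters are required, a common pattern is to call list_dataset_ids first and feed each returned ID into list_table_ids. A sketch of building those calls, with a hypothetical helper and made-up dataset IDs:

```python
def build_call(tool_name, arguments):
    """Hypothetical helper: wrap a tool name and arguments for tools/call."""
    return {"name": tool_name, "arguments": arguments}

# Suppose a prior list_dataset_ids call returned these IDs for the project.
dataset_ids = ["sales", "marketing"]

# One list_table_ids invocation per dataset, reusing the same project ID.
calls = [
    build_call("list_table_ids", {"projectId": "my-project", "datasetId": ds})
    for ds in dataset_ids
]
```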
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds domain context (BigQuery) beyond annotations. Annotations already cover safety profile (readOnly, idempotent), so description doesn't need to. However, lacks details on pagination behavior for large datasets or specific BigQuery access requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse single sentence with zero redundancy. Appropriately sized for a simple list operation, though arguably too minimal to provide rich context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimally adequate given full schema coverage and output schema presence. Would benefit from noting that results may require pagination for large datasets or that IDs are returned (not full table metadata), but functional for basic tool selection.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear descriptions. The description adds no parameter-specific guidance beyond the schema (e.g., no format examples or BigQuery-specific ID patterns), warranting baseline score.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (List) + resource (table ids) + domain context (BigQuery dataset). However, lacks explicit differentiation from sibling `list_dataset_ids` or `get_table_info`.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like `get_table_info` or `execute_sql`, nor prerequisites like needing valid BigQuery credentials.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.