Server Details

Query Baselight's public catalog of 70,000+ datasets — finance, demographics, sports, and more.

Status: Healthy
Transport: Streamable HTTP
Tool Descriptions
Grade: A

Average 4.3/5 across all 9 tools scored.

Server Coherence
Grade: A
Disambiguation: 5/5

Each tool targets a distinct resource or action: metadata retrieval (dataset, table, user), search (catalog, tables), query execution (execute, get results), and a ping test. The descriptions clearly differentiate them, preventing confusion.

Naming Consistency: 5/5

All tools follow a consistent `baselight_<verb>_<noun>` pattern with underscores and lowercase. The verbs (get, search, sdk) are distinct and predictable, making the tool surface easy to navigate.

Tool Count: 5/5

With 9 tools, the set is well-scoped for a data query service. It covers discovery, metadata inspection, query execution, and result retrieval without being overwhelming or sparse.

Completeness: 5/5

The tool set covers the full typical workflow: searching for datasets/tables, inspecting metadata, executing queries, and fetching results. For a read-only query interface, there are no obvious gaps.

Available Tools

9 tools
baselight_get_dataset_metadata (get_dataset_metadata)
Grade: A
Read-only
Inspect

Retrieve detailed schema and metadata for a specific dataset using Baselight format @username.dataset. Use this after discovering datasets to understand their structure before querying. Tables within datasets follow the format @username.dataset.table (always double-quoted identifiers in SQL).

Parameters (JSON Schema)
- id (required): The identifier of the dataset to inspect; should be something like @user.dataset
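The `@user.dataset` format above is easy to get wrong, so a client may want to validate identifiers before calling the tool. A minimal sketch; the allowed character set is an assumption, not something Baselight documents:

```python
import re

# Hypothetical helper (not part of the Baselight API): checks that a
# dataset identifier matches the @username.dataset shape the tool expects.
# The [A-Za-z0-9_-] character set is an assumption for illustration.
DATASET_ID = re.compile(r"^@[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+$")

def is_valid_dataset_id(dataset_id: str) -> bool:
    """Return True if the identifier looks like @user.dataset."""
    return DATASET_ID.fullmatch(dataset_id) is not None

print(is_valid_dataset_id("@acme.census_2020"))   # True
print(is_valid_dataset_id("census_2020"))         # False (missing @user prefix)
print(is_valid_dataset_id("@acme.census.cities")) # False (that's a table id)
```

Table identifiers (`@user.dataset.table`) have one more dot-separated segment, so they deliberately fail this dataset-level check.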
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=true, so the description adds value by mentioning the table format and double-quoted identifiers in SQL, which helps the agent understand the data structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with no wasted words, front-loading the action and providing immediate clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains when to use the tool and what it returns but lacks detail on the structure of the returned metadata. Given no output schema, more description would be helpful for complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for the single parameter 'id', and the description reinforces its format (@user.dataset) and usage context, adding slight value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves detailed schema and metadata for a specific dataset using the Baselight format @username.dataset. It distinguishes itself from siblings like baselight_get_dataset_tables and baselight_get_table_metadata by focusing on the dataset itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises using this tool after discovering datasets to understand their structure before querying, providing clear context. It does not list alternative tools, but the usage guidance is sufficient given the tool's purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_get_dataset_tables (get_dataset_tables)
Grade: A
Read-only
Inspect

Retrieve a paginated list of tables from a specific dataset using Baselight format @username.dataset. Use this tool to browse datasets with many tables or to search for specific tables within a dataset. Each page returns up to 100 tables with metadata.

Parameters (JSON Schema)
- id (required): The identifier of the dataset to browse; should be something like @user.dataset
- page (optional): Page number to retrieve (starts at 1). Each page contains up to 100 tables.
- query (optional): Text search query to filter tables by name or title. If omitted, returns all tables.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=true, so the description adds value by disclosing pagination behavior (up to 100 tables per page) and that results are filtered by query. This goes beyond the annotation signals, providing useful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, consisting of two clear sentences. The first states the core purpose and format, the second adds usage guidance and pagination detail. No unnecessary words, every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains how to use pagination and search, but lacks details about what 'metadata' includes in the output. Since there is no output schema, the agent cannot fully anticipate the return structure. This is a gap for a tool returning paginated results, though siblings may compensate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds context by explicitly mentioning the dataset ID format '@user.dataset', the page size 'up to 100 tables', and that the query is 'optional text search', enhancing understanding beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'retrieve a paginated list of tables' and specifies the resource 'from a specific dataset using Baselight format @username.dataset'. It differentiates from sibling tools like baselight_search_tables by focusing on a single dataset, but does not explicitly contrast with it, slightly reducing clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance to 'browse datasets with many tables or to search for specific tables within a dataset', which indicates appropriate use. However, it does not mention when not to use this tool or alternatives like baselight_search_tables for cross-dataset searches, leaving room for confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_get_table_metadata (get_table_metadata)
Grade: A
Read-only
Inspect

Retrieve detailed schema and metadata for a specific table using Baselight format @username.dataset.table. Use this to understand table structure, column types, and constraints before writing SQL queries. Tables must be referenced in SQL with double quotes.

Parameters (JSON Schema)
- id (required): The full identifier of the table to inspect; should be something like @user.dataset.table
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, and the description's 'retrieve' is consistent with that. The description adds no additional behavioral context (e.g., error handling, rate limits, data freshness), but the annotations cover the key trait.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. Action is front-loaded, and all information is relevant and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool with no output schema, the description adequately covers purpose, usage hint, and parameter format. It provides enough context for an agent to decide when to invoke it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description reinforces the 'id' parameter format. However, it does not add new meaning beyond the schema's description, meeting the baseline for high coverage with slight reinforcement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'retrieve' and resource 'schema and metadata for a specific table', clearly distinguishing from sibling tools like baselight_get_dataset_metadata (dataset-level) and baselight_get_dataset_tables (list of tables). The format '@username.dataset.table' is explicitly stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States when to use: 'before writing SQL queries'. Also gives a formatting rule ('Tables must be referenced in SQL with double quotes'). However, it does not explicitly mention when not to use or list alternatives, though siblings hint at context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_get_user (get_user)
Grade: A
Read-only
Inspect

Retrieve detailed information about a Baselight user including their profile bio, website, and public datasets. Use this to understand who owns datasets you're interested in.

Parameters (JSON Schema)
- username (required): The username of the user to inspect
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open world. Description adds value by specifying the exact fields retrieved (profile bio, website, public datasets), which the annotations do not cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. Front-loaded with purpose, then usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one parameter and no output schema, the description adequately explains what information is returned. Complete for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter (username), whose schema description already provides its meaning. The tool description does not add significantly beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'retrieve' and the resource 'Baselight user', and specifies the fields returned (profile bio, website, public datasets). It distinguishes from sibling tools like baselight_get_dataset_metadata by focusing on user information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'to understand who owns datasets you're interested in.' No explicit when-not, but context is clear given sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_ping (Ping Test)
Grade: A
Read-only
Inspect

Simple ping test to verify MCP server is responding

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the description needs to add little beyond that. It confirms the non-destructive nature but does not elaborate on response or other behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no extraneous information. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple ping tool with no parameters, the description is fully adequate. No output schema needed; the tool's behavior is trivial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so the description has no additional burden. Baseline score of 4 applies as schema coverage is 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states a simple ping test to verify server responsiveness, using a specific verb and resource. It uniquely distinguishes itself from sibling tools which are all data-related operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are needed for a ping test; its purpose is self-evident and there are no alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_sdk_get_results (sdk_get_results)
Grade: A
Read-only
Inspect

Retrieve results from a previously executed SDK job using the resultId from sdk-query-execute. If the query is complete, returns results immediately. If still pending, polls for up to 1 more minute. Use this after sdk-query-execute returns PENDING status.

Parameters (JSON Schema)
- jobId (required): The result ID of the executed query (from `sdk-query-execute`).
- limit (optional): Number of rows to return per page (max 100). Default is 100.
- offset (optional): Row offset for pagination. Use to fetch subsequent pages (e.g., offset=100 for page 2).
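Since `offset` and `limit` drive pagination, page boundaries follow the usual `(page - 1) * limit` arithmetic that the schema hints at (offset=100 for page 2). A tiny illustration; the helper name is made up:

```python
# Hypothetical helper: compute the `offset` argument for a given page of
# baselight_sdk_get_results, using the documented default limit of 100.
def page_offset(page: int, limit: int = 100) -> int:
    if page < 1:
        raise ValueError("pages start at 1")
    return (page - 1) * limit

print(page_offset(1))            # 0   (first page)
print(page_offset(2))            # 100 (matches the schema's example)
print(page_offset(3, limit=50))  # 100 (smaller pages shift the boundaries)
```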
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide `readOnlyHint=true` and `openWorldHint=true`, so the description adds value by disclosing the polling mechanism (up to 1 minute) and conditional immediate return. It does not contradict annotations and enhances understanding beyond structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no unnecessary words. It front-loads the purpose and is perfectly sized for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and the presence of annotations, the description adequately covers behavior (polling, pagination) and usage flow. It does not describe the response structure, but agents can infer it from context and the `openWorldHint` annotation. Slightly incomplete for a result-retrieval tool, but still well-rounded.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all parameters with descriptions (100% coverage), so baseline is 3. The description adds semantic value by reinforcing that `jobId` comes from `sdk-query-execute` and implicitly explaining pagination via 'offset for subsequent pages'. This exceeds the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Retrieve results'), the resource ('a previously executed SDK job'), and explicitly links to the sibling tool 'sdk-query-execute' by mentioning 'resultId from `sdk-query-execute`'. This distinguishes it from other tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage context: 'Use this after `sdk-query-execute` returns PENDING status.' It also explains the tool's behavior in both pending and complete cases, giving the agent clear guidance on when and how to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_sdk_query_execute (sdk_query_execute)
Grade: A
Inspect

Execute a SQL query on Baselight and wait for results (up to 1 minute). The query executes and returns the first 100 rows upon completion, or info about a pending query that needs more time. Use DuckDB syntax only, table format "@username.dataset.table" (double-quoted), SELECT queries only (no DDL/DML), no semicolon terminators, use LIMIT not TOP. If query is still PENDING, use sdk-get-results to continue polling. If totalResults > returned rows, use sdk-get-results with offset to paginate.

Parameters (JSON Schema)
- sql (required): The SQL query to execute. Table identifiers should be wrapped in double quotes, like "@user.dataset.table". Only SELECT queries are allowed.
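The constraints the description lists (DuckDB syntax, double-quoted `@username.dataset.table` identifiers, SELECT only, no semicolon terminator, LIMIT rather than TOP) can be spot-checked client-side before submitting. A sketch with a made-up table name; `looks_compliant` is a hypothetical sanity check, not a real SQL validator:

```python
# A query string following the stated rules: double-quoted table identifier,
# SELECT only, no trailing semicolon, LIMIT instead of TOP. Table name is
# invented for illustration.
sql = 'SELECT city, population FROM "@acme.census.cities" ORDER BY population DESC LIMIT 10'

def looks_compliant(q: str) -> bool:
    """Cheap checks mirroring the description's constraints; it does not
    actually parse SQL or verify DuckDB syntax."""
    s = q.strip()
    return (
        s.upper().startswith("SELECT")   # SELECT only (no DDL/DML)
        and not s.endswith(";")          # no semicolon terminators
        and " TOP " not in s.upper()     # use LIMIT, not TOP
    )

print(looks_compliant(sql))  # True
```

A check like this catches the common mistakes up front instead of burning a query round trip on a server-side rejection.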
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states that only SELECT queries are allowed, implying read-only behavior, but the annotations have readOnlyHint: false, indicating possible writes. This is a direct contradiction. Otherwise, the description details wait time, row limits, and pending behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each essential. Front-loaded with core purpose, followed by constraints and sibling tool references. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers all key aspects: execution, waiting, row limits, pending handling, pagination, and syntax rules. Despite no output schema, description explains what is returned (rows or pending info).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic sql description, but the description adds critical syntax constraints (DuckDB, double-quotes, no semicolon, LIMIT not TOP) that are not in the schema, greatly enhancing parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (execute a SQL query), the target (Baselight), and adds context about waiting and row limits. It distinguishes from sibling tools like baselight_sdk_get_results for polling and pagination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance: use sdk-get-results for pending queries or pagination. Specifies syntax rules (DuckDB, SELECT only, no semicolons, LIMIT vs TOP) and when to opt for alternative tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_search_catalog (search_catalog)
Grade: A
Read-only
Inspect

Search the catalog for datasets using a text query and filters. Datasets in Baselight have the following format: @username.dataset. Datasets can be public or private — you can search and use all public datasets as well as the user's private datasets. This is typically the first step in the discovery workflow.

Parameters (JSON Schema)
- limit (optional): Maximum number of datasets to return (1-20, default 10)
- query (required): The search query string, used for semantic search
- category (optional): The dataset category to filter by
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations include readOnlyHint=true and openWorldHint=true. The description adds behavioral context about dataset format ('@username.dataset') and visibility (public/private), which is useful beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first states the purpose, the second provides essential context and workflow hint. It is concise, front-loaded, and every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given full schema coverage and no output schema, the description adequately covers purpose, typical usage, and important naming conventions. No additional information is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds useful context like 'semantic search' for query and the dataset format, but does not significantly enhance parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the catalog for datasets using a text query and filters.' It distinguishes from siblings like baselight_search_tables (which searches tables) and baselight_get_dataset_metadata (which retrieves metadata for a specific dataset).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates this is 'typically the first step in the discovery workflow,' providing clear context for when to use it. However, it does not explicitly mention when not to use it or alternative tools, though the siblings are distinct enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

baselight_search_tables (search_tables)
Grade: A
Read-only
Inspect

Search for tables using a text query and filters. Tables in Baselight have the following format: @username.dataset.table. Tables are grouped into datasets which can be public or private — you can search and use all public datasets as well as the user's private datasets. Search for tables directly when you are unable to find relevant datasets.

Parameters (JSON Schema)
- limit (optional): Maximum number of tables to return (1-20, default 10)
- query (required): The search query string, used for semantic search
- category (optional): The dataset category to filter by
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and openWorldHint=true, and the description adds context about the table naming convention and dataset public/private distinction, which enhances behavioral transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences: purpose, format explanation, and a usage hint. It is front-loaded and concise with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers search scope (public and private datasets), table naming, and usage hint. Given no output schema and simple parameters, it provides sufficient context for the tool's operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add significant meaning beyond the schema; it mentions 'filters' but the schema already defines the category parameter. The table format and dataset context are useful but not directly parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Search for tables') and resource ('tables'), and explains the table format and dataset grouping. It distinguishes from sibling tools by stating 'Search for tables directly when you are unable to find relevant datasets,' implying a separate dataset-focused sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool (searching for tables when datasets are not found) and explains the scope (public datasets and user's private datasets). However, it does not explicitly state when not to use or list alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
