Server Details

Generate and run high-performance queries on open and private spatial data at scale in the cloud

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Tool Descriptions: A

Average 4.4/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct, non-overlapping purpose within the spatial data query workflow. For example, list_catalogs_tool, list_databases_tool, and list_tables_tool each target a different level of the data hierarchy, while describe_table_tool, generate_spatial_query_tool, and execute_query_tool handle schema inspection, query generation, and execution respectively. The clear workflow guidance in descriptions further prevents confusion.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with snake_case, such as list_catalogs_tool, describe_table_tool, and execute_query_tool. This uniformity makes the tool set predictable and easy to navigate, with no deviations in naming conventions across the eight tools.

Tool Count: 5/5

With 8 tools, the set is well-scoped for exploring and querying spatial data in Wherobots catalogs. Each tool serves a specific role in the workflow (e.g., documentation search, hierarchy navigation, schema inspection, query generation/execution), and none appear redundant or unnecessary for the server's purpose.

Completeness: 5/5

The tool set provides complete coverage for the spatial data query domain, including documentation search, catalog/database/table listing, schema description, query generation, and execution. It supports a full workflow from discovery to analysis with no obvious gaps, as evidenced by the detailed prerequisites and next steps in each tool's description.

Available Tools

8 tools
describe_table_tool: Describe Wherobots Catalog Table (Grade: A)
Read-only

Describe a specific table.

⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries.

📋 PREREQUISITES:

  • Call search_documentation_tool first

  • Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table

📋 NEXT STEPS after this tool:

  1. Use generate_spatial_query_tool to create SQL using the schema

  2. Use execute_query_tool to test the query

This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis.

Parameters

  • catalog (str): The name of the catalog.
  • database (str): The name of the database.
  • table (str): The name of the table.
  • ctx (Context): FastMCP context (injected automatically).

Returns

TableDescriptionOutput: a structured object containing the table schema information.

  • 'schema': The schema of the table, which may include column names, types, and other metadata.
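The structured return above can be modeled as a small Python sketch; the dataclass below is an illustrative shape based on the documented 'schema' field, not the server's actual model:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class TableDescriptionOutput:
    # Illustrative shape: 'schema' holds column names, types, and
    # other metadata, per the documented output schema.
    schema: dict[str, Any] = field(default_factory=dict)


# Hypothetical payload for a 'users'-style table with a geometry column.
result = TableDescriptionOutput(
    schema={"columns": [{"name": "id", "type": "bigint"},
                        {"name": "geometry", "type": "geometry"}]}
)

# Pull out the column names before writing SQL against the table.
column_names = [c["name"] for c in result.schema["columns"]]
```

An agent would consult `column_names` (and the types) before generating any query that references the table.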

Example Usage for LLM:

  • When user asks for the schema of a specific table.

  • Example User Queries and corresponding Tool Calls:

  • User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?"

  • Tool Call: describe_table('wherobots', 'default', 'users')

  • User: "Describe the buildings table structure"

  • Tool Call: describe_table('wherobots_open_data', 'overture', 'buildings')
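The prerequisite discovery flow (catalogs → databases → tables → schema) can be sketched with stubbed tool calls; the catalog contents below are invented for illustration, not real Wherobots data:

```python
# Stubbed tool responses -- illustrative data, not real catalogs.
CATALOGS = {"wherobots_open_data": {"overture": ["buildings", "places"]}}
SCHEMAS = {("wherobots_open_data", "overture", "buildings"):
           {"columns": [{"name": "geometry", "type": "geometry"}]}}


def list_catalogs():
    return sorted(CATALOGS)


def list_databases(catalog):
    return sorted(CATALOGS[catalog])


def list_tables(catalog, database):
    return CATALOGS[catalog][database]


def describe_table(catalog, database, table):
    return SCHEMAS[(catalog, database, table)]


# Walk the hierarchy down to a schema before writing any SQL.
catalog = list_catalogs()[0]
database = list_databases(catalog)[0]
table = list_tables(catalog, database)[0]
schema = describe_table(catalog, database, table)
```

The real tools return structured objects rather than bare lists, but the ordering of calls is the same.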

Parameters (JSON Schema)

  • table (required)
  • catalog (required)
  • database (required)

Output Schema (JSON Schema)

  • schema (required): Table schema fields
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and the description aligns with this by describing a retrieval operation ('retrieves the schema'). The description adds valuable context beyond annotations: it emphasizes the workflow importance ('ALWAYS call this before writing queries'), specifies prerequisites and next steps, and provides example usage scenarios. However, it doesn't mention rate limits, authentication needs, or error behaviors, which keeps it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (⚠️ WORKFLOW, 📋 PREREQUISITES, 📋 NEXT STEPS, Parameters, Returns, Example Usage) and uses bullet points efficiently. It's appropriately sized but includes some redundancy (e.g., repeating 'describe a specific table' and schema retrieval). Every sentence adds value, though it could be slightly more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 required parameters, 0% schema coverage, read-only operation) and the presence of an output schema (TableDescriptionOutput), the description is complete. It covers purpose, usage guidelines, parameters with examples, workflow integration, and return value explanation. The output schema handles return details, so the description doesn't need to elaborate further on the 'schema' field.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that parameters are 'catalog', 'database', and 'table' names, and provides concrete examples (e.g., 'wherobots', 'default', 'users'). This adds clear meaning beyond the bare schema. However, it doesn't detail constraints like valid catalog/database names or length limits, leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'retrieves the schema of a specified table, including column names and types' and 'describe a specific table.' It distinguishes from siblings by focusing on schema retrieval rather than listing (list_tables_tool), querying (execute_query_tool), or searching documentation (search_documentation_tool). The verb 'describe' and resource 'table' are specific and clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'ALWAYS call this before writing queries that reference a table' and 'Understanding the schema is essential for writing correct SQL queries.' It also specifies prerequisites (call search_documentation_tool first, use list_* tools to find the table) and next steps (use generate_spatial_query_tool and execute_query_tool), clearly differentiating it from alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

execute_query_tool: Execute SQL Query (Grade: A)

Run the SQL query on a Wherobots catalog.

⚠️ WORKFLOW: ALWAYS test queries with this tool before generating code. Use limit=10 or limit=100 for initial testing to validate query logic.

📋 PREREQUISITES (required for reliable results):

  • search_documentation_tool: Understand correct SQL syntax

  • describe_table_tool: Verify table schema and column names

  • generate_spatial_query_tool: Generate the SQL (recommended)

📋 NEXT STEPS after this tool:

  • If query SUCCEEDS: Increase limit or remove it for full results

  • If query FAILS: Use sql_debug_workflow prompt, check schema, retry

  • Only generate application code AFTER SQL is validated and working

This tool allows users to execute SQL queries against the Wherobots catalog. It is typically used for data retrieval and analysis. Supports pagination for large result sets.


Remember:

  • Ensure the output can be serialized to JSON.

  • If query execution fails, analyze the error and provide a meaningful message to the user.

  • Use limit and offset parameters for large result sets to avoid timeouts.

  • Queries have a maximum execution time (configurable via settings or x-query-timeout header).

  • The runtime ID can be customized via the x-runtime-id header (default: tiny).

  • The runtime region can be customized via the x-runtime-region header (default: aws-us-west-2).

  • The server sends periodic heartbeat messages during long-running queries to keep the connection alive.

  • If the client disconnects, the query will be cancelled to avoid wasting resources.
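A client might normalize limit and offset against the documented defaults (limit 1000, max 10,000; offset 0); the helper below is a hypothetical client-side sketch, not part of the server API:

```python
DEFAULT_LIMIT = 1000    # documented default
MAX_LIMIT = 10000       # documented maximum
DEFAULT_OFFSET = 0      # documented default


def normalize_pagination(limit=None, offset=None):
    """Apply the documented defaults and cap limit at the maximum."""
    limit = DEFAULT_LIMIT if limit is None else min(limit, MAX_LIMIT)
    offset = DEFAULT_OFFSET if offset is None else max(offset, 0)
    return limit, offset
```

Clamping before the call avoids sending values the server would reject or silently cap.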


Parameters

  • query (str): The SQL query to be executed.
  • ctx (Context): FastMCP context (injected automatically).
  • limit (int | None): Optional maximum number of rows to return (default: 1000, max: 10000).
  • offset (int | None): Optional number of rows to skip for pagination (default: 0).

Returns

QueryExecutionOutput: a structured object containing the results of the query execution.

  • 'success': Whether the query executed successfully
  • 'row_count': Number of rows returned
  • 'data': Query results as a list of dictionaries
  • 'query': The executed SQL query
  • 'error': Whether an error occurred
  • 'error_type': Type of error (if any)
  • 'error_message': Error message (if any)
  • 'execution_status': Status of execution

Example Usage for LLM:

  • When user asks to run a specific SQL query.

  • Example User Queries and corresponding Tool Calls:

  • User: "Run the following SQL query: SELECT * FROM buildings WHERE state = 'California';"

  • Tool Call: execute_query("SELECT * FROM buildings WHERE state = 'California';")

  • User: "Execute this query with only the first 100 results"

  • Tool Call: execute_query('SELECT * FROM large_table;', limit=100)

  • User: "Get the next page of results"

  • Tool Call: execute_query('SELECT * FROM large_table;', limit=100, offset=100)
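The pagination pattern in these examples (advance offset by limit until a short page returns) can be sketched against a stub that simulates a 250-row result set; the stub stands in for the real execute_query tool:

```python
def execute_query(query, limit=1000, offset=0):
    """Stub for the real tool: serves 250 fake rows with ids 0..249."""
    total = 250
    rows = [{"id": i} for i in range(offset, min(offset + limit, total))]
    return {"success": True, "row_count": len(rows), "data": rows}


def fetch_all(query, page_size=100):
    """Page through results until a page comes back shorter than page_size."""
    rows, offset = [], 0
    while True:
        page = execute_query(query, limit=page_size, offset=offset)
        if not page["success"]:
            break
        rows.extend(page["data"])
        if page["row_count"] < page_size:
            break  # short page means we reached the end
        offset += page_size
    return rows


all_rows = fetch_all("SELECT * FROM large_table;")
```

Stopping on a short page keeps the loop bounded without needing a total row count up front.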

Parameters (JSON Schema)

  • query (required)
  • limit (optional)
  • offset (optional)

Output Schema (JSON Schema)

  • data (required): Query result data
  • query (required): The executed SQL query
  • row_count (required): Number of rows returned
  • success (optional): Whether the query execution was successful
  • error (optional): Whether an error occurred
  • error_type (optional): Type of error if any
  • error_message (optional): Error message if any
  • execution_status (optional): Execution status
  • next_step (optional): Recommended next action based on results or errors.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide a title, so the description carries full burden. It extensively documents behavioral traits: workflow recommendations (test with limits), prerequisites, next steps, pagination support, JSON serialization requirement, error handling, timeout avoidance, configurable execution time/runtime/region via headers, heartbeat messages, and query cancellation on disconnect. This goes far beyond basic execution description.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is comprehensive but lengthy with multiple sections (workflow, prerequisites, next steps, general description, reminders). While well-structured with icons and headings, it could be more front-loaded; the core purpose appears after several workflow instructions. Some redundancy exists (e.g., pagination mentioned multiple times).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (SQL execution with multiple behavioral considerations), no annotations, and 0% schema coverage, the description provides exceptional completeness. It covers workflow, prerequisites, next steps, parameters, behavioral details, and includes an output schema description. The extensive documentation fully compensates for missing structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate. It explains the purpose of limit ('for initial testing,' 'avoid timeouts') and offset ('for pagination'), and provides example usage showing how parameters work. However, it doesn't explicitly mention the 'ctx' parameter or provide format details for the 'query' parameter beyond SQL.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'execute SQL queries against the Wherobots catalog' and mentions 'data retrieval and analysis,' providing a specific verb (execute/run) and resource (SQL queries on Wherobots catalog). However, it doesn't explicitly differentiate from sibling tools like generate_spatial_query_tool or search_documentation_tool beyond mentioning them as prerequisites.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('ALWAYS test queries with this tool before generating code'), prerequisites (listing three sibling tools), and next steps based on query success/failure. It also distinguishes this as the execution step versus generation or documentation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_spatial_query_tool: Generate Spatial SQL Query (Grade: A)
Read-only

Generate a spatial query based on the provided content.

⚠️ WORKFLOW: Call this ONLY after exploring docs and catalog. For best results, ensure you have already:

  1. Called search_documentation_tool for relevant spatial functions

  2. Used catalog tools to identify available tables

  3. Called describe_table_tool for tables you want to query

📋 PREREQUISITES (strongly recommended):

  • search_documentation_tool: Understand available spatial functions

  • list_catalogs_tool / list_tables_tool: Find relevant tables

  • describe_table_tool: Know the schema of tables you'll query

📋 NEXT STEPS after this tool:

  1. Use execute_query_tool with limit=10 to TEST the query first

  2. Iterate on the query if results are incorrect

  3. Only generate application code AFTER SQL is validated

This tool allows users to translate their request into a spatial query.

Parameters

  • user_prompt (str): The user's request or description of the spatial query they want to generate.
  • ctx (Context): FastMCP context (injected automatically).

Returns

QueryGenerationSummaryOutput: a structured object containing the generated spatial query.

Example Usage for LLM:

  • When user asks to generate a spatial query based on their request.

  • When a user asks for information, statistics or analysis on data that is in tables within one or more of the catalogs they have access to in Wherobots.

  • Example User Queries and corresponding Tool Calls:

  • User: "Generate a SQL query to count buildings in California using Overture data."

  • Tool Call: generate_spatial_query("Generate a SQL query to count buildings in California using Overture data.")

  • User: "How can I find all parks within 5km of downtown Seattle?"

  • Tool Call: generate_spatial_query("How can I find all parks within 5km of downtown Seattle?")
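The recommended generate-then-test flow (generate SQL, validate with limit=10, then run fully) can be sketched with stubs; the generated SQL and results below are placeholders, not real tool output:

```python
def generate_spatial_query(user_prompt):
    # Stub: the real tool translates the prompt into spatial SQL.
    return {"query": "SELECT COUNT(*) FROM buildings WHERE state = 'CA'",
            "next_step": "execute_query_tool with limit=10"}


def execute_query(query, limit=None):
    # Stub: always succeeds; real execution happens server-side.
    return {"success": True, "row_count": 1, "data": [{"count": 0}]}


generated = generate_spatial_query(
    "Generate a SQL query to count buildings in California using Overture data.")
test_run = execute_query(generated["query"], limit=10)   # validate first
full_run = execute_query(generated["query"]) if test_run["success"] else None
```

Only after the small test run succeeds does the client promote the query to a full (or application-embedded) run, matching the NEXT STEPS above.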

Parameters (JSON Schema)

  • user_prompt (required)

Output Schema (JSON Schema)

  • query (required): The generated spatial query.
  • next_step (optional): Recommended next action in the workflow.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating it's a safe read operation. The description adds valuable behavioral context beyond annotations: it emphasizes a workflow dependency (must call other tools first), recommends testing with execute_query_tool, and warns to iterate if results are incorrect. This enhances transparency about usage constraints and best practices without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose but includes extensive sections (workflow, prerequisites, next steps, examples) that, while helpful, make it verbose. Some redundancy exists (e.g., repeating example usage). It could be more streamlined while retaining key guidance, as not every sentence adds unique value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (generating SQL queries), the description is complete: it covers purpose, workflow, prerequisites, next steps, and examples. With annotations indicating read-only safety and an output schema (QueryGenerationSummaryOutput) handling return values, no additional explanation of behavior or outputs is needed. It adequately addresses the context provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter documentation. The description adds some semantics by explaining user_prompt as 'The user's request or description of the spatial query they want to generate' and providing example usage, but it doesn't detail format expectations (e.g., natural language vs. structured input) or constraints. This partially compensates but leaves gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'allows user to translate their request into a spatial query' and 'Generate a spatial query based on the provided content,' which specifies the verb (generate/translate) and resource (spatial query). It distinguishes from siblings like execute_query_tool (which runs queries) and search_documentation_tool (which searches docs), but could be more precise about the output format beyond 'structured object.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance: 'Call this ONLY after exploring docs and catalog' with a detailed workflow listing prerequisites (search_documentation_tool, catalog tools, describe_table_tool) and next steps (execute_query_tool for testing). It clearly distinguishes this tool from alternatives by positioning it as a query generation step in a larger process.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_catalogs_tool: List Wherobots Catalogs (Grade: A)
Read-only

List all catalogs available for users.

⚠️ WORKFLOW: Call this after search_documentation_tool. Start your catalog exploration here to discover available data sources.

📋 PREREQUISITES:

  • Call search_documentation_tool first to understand what you're looking for

📋 NEXT STEPS after this tool:

  1. Use list_databases_tool to explore databases in a catalog

  2. Use list_tables_tool to find tables in a database

  3. Use describe_table_tool to get table schemas

This tool retrieves all catalogs accessible with the provided API key and is typically used as the first step in exploring the data hierarchy. It fetches both managed catalogs and external catalogs (e.g., on Databricks).

Parameters

  • ctx (Context): FastMCP context (injected automatically).

Returns

CatalogListOutput: a structured object containing catalog information.

  • 'catalogs': List of catalog names.
  • 'count': Number of catalogs found.

Example Usage for LLM:

  • When user asks for available catalogs.

  • Example User Queries and corresponding Tool Calls:

  • User: "What catalogs are available?"

  • Tool Call: list_catalogs()

  • User: "Show me all the data sources"

  • Tool Call: list_catalogs()

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

  • catalogs (required): List of catalog names.
  • count (required): Total number of catalogs.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, and the description doesn't contradict this. The description adds valuable behavioral context beyond annotations: it specifies that the tool fetches 'both managed catalogs, as well as external catalogs (e.g., on Databricks)' and mentions it's 'typically used as the first step in exploring the data hierarchy.' This provides useful operational context that annotations alone don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (⚠️ WORKFLOW, 📋 PREREQUISITES, etc.), but it includes redundant information like 'Example Usage for LLM' and example queries that could be condensed. The core information is front-loaded, but some sentences don't earn their place, making it slightly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only operation), the description is complete. It explains purpose, workflow integration, prerequisites, next steps, and behavioral details. With an output schema present, it doesn't need to detail return values, and it adequately covers all necessary context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%. The description doesn't need to explain parameters, but it does mention 'ctx : Context' is 'injected automatically,' which adds clarity about the single parameter's role. For a zero-parameter tool, this exceeds the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all catalogs available for users' and 'retrieves all available catalogs accessible with the provided API key.' It specifies the verb ('list', 'retrieves') and resource ('catalogs'), but doesn't explicitly differentiate from sibling tools like list_databases_tool or list_tables_tool beyond mentioning them in workflow context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Call this after search_documentation_tool' and 'Start your catalog exploration here to discover available data sources.' It also specifies prerequisites ('Call search_documentation_tool first') and next steps (listing databases, tables, etc.), giving clear context for usage versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_databases_tool: List Wherobots Catalog Databases (Grade: A)
Read-only

List all databases in a given catalog.

⚠️ WORKFLOW: Call this after list_catalogs_tool to explore a specific catalog.

📋 PREREQUISITES:

  • Call search_documentation_tool first to understand what you're looking for

  • Call list_catalogs_tool to discover available catalogs

📋 NEXT STEPS after this tool:

  1. Use list_tables_tool to find tables in a database

  2. Use describe_table_tool to get table schemas before writing queries

This tool retrieves all databases within a specified catalog.

Parameters

  • catalog (str): The name of the catalog.
  • ctx (Context): FastMCP context (injected automatically).

Returns

DatabaseListOutput: a structured object containing database information.

  • 'catalog': The catalog name.
  • 'databases': List of database names.
  • 'count': Number of databases found.

Example Usage for LLM:

  • When user asks for a specific catalog's databases.

  • Example User Queries and corresponding Tool Calls:

  • User: "List all databases in the 'wherobots' catalog."

  • Tool Call: list_databases('wherobots')

  • User: "What databases are in the foursquare catalog?"

  • Tool Call: list_databases('foursquare')

Parameters (JSON Schema)

  • catalog (required)

Output Schema (JSON Schema)

  • catalog (required): Name of the catalog.
  • databases (required): List of database names.
  • count (required): Total number of databases.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, and the description doesn't contradict this. The description adds valuable context about workflow positioning (when to call relative to other tools) and clarifies that it 'retrieves all databases' (implying completeness). However, it doesn't mention potential limitations like pagination or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (workflow, prerequisites, next steps, parameters, returns, examples). However, it's somewhat verbose - the core purpose is stated multiple times, and the example section could be more concise while maintaining clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, read-only, with output schema), the description is complete. It covers purpose, workflow context, prerequisites, next steps, parameter semantics, and provides concrete examples. The output schema exists, so the description doesn't need to explain return values in detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description compensates well. It explains 'catalog : str - The name of the catalog' and provides two concrete usage examples with specific catalog names ('wherobots', 'foursquare'), giving semantic meaning beyond the bare schema. The only parameter is fully documented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List all databases in a given catalog' - a specific verb ('List') and resource ('databases') with scope ('in a given catalog'). It distinguishes from siblings like list_catalogs_tool (which lists catalogs) and list_tables_tool (which lists tables).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided with '⚠️ WORKFLOW: Call this after list_catalogs_tool to explore a specific catalog' and '📋 PREREQUISITES' section naming specific tools to call first. It also includes '📋 NEXT STEPS after this tool' with explicit alternative tools for subsequent actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_hierarchy_tool: List Wherobots Managed Catalog Hierarchy (Grade: A)
Read-only

Get complete hierarchy of catalogs, databases, and tables.

⚠️ WORKFLOW: Use this for a quick overview of all managed catalogs. For external catalogs, use list_catalogs_tool instead.

📋 PREREQUISITES:

  • Call search_documentation_tool first to understand what data you need

📋 NEXT STEPS after this tool:

  1. Use describe_table_tool to get schemas of tables you want to query

  2. Use list_catalogs_tool to discover external catalogs not shown here

This tool provides a comprehensive view of all available assets in the Wherobots system, including their hierarchical relationships. It can be used to retrieve information about all catalogs, list all databases within those catalogs, and enumerate all tables within each database.


IMPORTANT LIMITATION:

  • This tool is being DEPRECATED, but is the only way to get a full hierarchy in one call.

  • This tool ONLY shows catalogs managed within Wherobots.

  • External catalogs (e.g., on Databricks, other cloud platforms) are NOT visible in this hierarchy.

  • If the user mentions specific catalog names that don't appear in the results, they may be external catalogs that need to be accessed differently.

  • ALWAYS call list_catalogs_tool before or after calling this tool.


Parameters

ctx : Context
    FastMCP context (injected automatically)

Returns

HierarchyListOutput
    A structured object containing the hierarchy of catalogs, databases, and tables.
    - 'hierarchy': A dictionary representing the hierarchical structure, where keys are catalog names. Each catalog entry contains a dictionary of its databases, and each database entry includes a list of its tables, where each table entry contains its name.
    - 'summary': A dictionary providing counts of total catalogs, databases, and tables.

Example Usage for LLM:

  • When user asks for a general overview of data, or specific items across multiple catalogs/databases.

  • Example User Queries and corresponding Tool Calls:

  • User: "List all tables in the 'default' database of the 'wherobots' catalog AND in the 'overture_maps_foundation' database of 'wherobots_open_data'."

  • Tool Call: list_hierarchy()

  • User: "Show me all databases in 'wherobots' and 'wherobots_open_data' catalogs."

  • Tool Call: list_hierarchy()

  • User: "What data is available?"

  • Tool Call: list_hierarchy()
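The HierarchyListOutput shape described in the Returns section can be sketched in Python. The field names follow the description above; the sample catalogs, databases, and tables are hypothetical placeholders:

```python
# Hypothetical HierarchyListOutput payload, shaped per the Returns description:
# 'hierarchy' maps catalog name -> {database name -> [table names]},
# 'summary' holds totals for catalogs, databases, and tables.
output = {
    "hierarchy": {
        "wherobots": {"default": ["my_table"]},
        "wherobots_open_data": {"overture": ["buildings", "places"]},
    },
    "summary": {"catalogs": 2, "databases": 2, "tables": 3},
}

# Walk the hierarchy and recompute the counts to cross-check the summary.
catalogs = len(output["hierarchy"])
databases = sum(len(dbs) for dbs in output["hierarchy"].values())
tables = sum(
    len(ts) for dbs in output["hierarchy"].values() for ts in dbs.values()
)
assert (catalogs, databases, tables) == (
    output["summary"]["catalogs"],
    output["summary"]["databases"],
    output["summary"]["tables"],
)
```

Because this tool only covers managed catalogs, an agent would still call list_catalogs_tool to find external catalogs missing from this walk.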

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

Name | Required | Description
summary | Yes | Summary of the hierarchy.
hierarchy | Yes | Catalog hierarchy.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, but the description adds valuable behavioral context: it discloses the tool is being deprecated, specifies it only shows managed catalogs (not external ones), explains what happens if catalog names don't appear, and recommends always calling list_catalogs_tool before or after. This goes beyond the safety information in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (workflow, prerequisites, next steps, limitations) but contains some redundancy. Sentences like 'It can be used to retrieve information about all catalogs, list all databases within those catalogs, and enumerate all tables within each database' repeat the initial purpose. The example usage section is lengthy but provides practical guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has annotations (readOnlyHint), an output schema (explaining return structure), and 0 parameters, the description provides excellent context. It covers purpose, usage guidelines, limitations, workflow integration, and sibling tool relationships. The output schema handles return values, so the description appropriately focuses on operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately doesn't waste space explaining parameters, though it could mention the automatic context injection more explicitly. The focus remains on the tool's behavior rather than parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get complete hierarchy of catalogs, databases, and tables' and 'provides a comprehensive view of all available assets in the Wherobots system, including their hierarchical relationships.' It distinguishes from siblings by specifying this is for 'managed catalogs' only, unlike list_catalogs_tool for external catalogs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Use this for a quick overview of all managed catalogs' and 'For external catalogs, use list_catalogs_tool instead.' It includes prerequisites (call search_documentation_tool first) and next steps (use describe_table_tool, list_catalogs_tool). The 'IMPORTANT LIMITATION' section further clarifies when to use alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tables_tool: List Wherobots Catalog Tables (A)
Read-only
Inspect

List all tables in a given database.

⚠️ WORKFLOW: Call this after list_databases_tool to find tables in a database.

📋 PREREQUISITES:

  • Call search_documentation_tool first

  • Call list_catalogs_tool and list_databases_tool to navigate to the database

📋 NEXT STEPS after this tool:

  1. Use describe_table_tool to get the schema of tables you want to query

  2. Use generate_spatial_query_tool to create SQL using the schema

  3. Use execute_query_tool to test the query

This tool retrieves all tables within a specified database in a catalog. It is used to explore the final level of the data hierarchy before accessing table schemas.

Parameters

catalog : str
    The name of the catalog.
database : str
    The name of the database.
ctx : Context
    FastMCP context (injected automatically)

Returns

TableListOutput
    A structured object containing table information.
    - 'catalog': The catalog name.
    - 'database': The database name.
    - 'tables': List of table names.
    - 'count': Number of tables found.

Example Usage for LLM:

  • When user asks for a specific database's tables.

  • Example User Queries and corresponding Tool Calls:

  • User: "List all tables in the 'default' database of the 'wherobots' catalog."

  • Tool Call: list_tables('wherobots', 'default')

  • User: "What tables are in the overture database?"

  • Tool Call: list_tables('wherobots_open_data', 'overture')
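A TableListOutput result as described above can be handled like this. The field names come from the Returns description; the table names are hypothetical:

```python
# Hypothetical TableListOutput payload, shaped per the Returns description.
result = {
    "catalog": "wherobots_open_data",
    "database": "overture",
    "tables": ["buildings", "places", "transportation"],
    "count": 3,
}

# An agent would typically sanity-check the count, then build
# fully qualified names to pass to describe_table_tool next.
assert result["count"] == len(result["tables"])
fully_qualified = [
    f"{result['catalog']}.{result['database']}.{table}"
    for table in result["tables"]
]
```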

Parameters (JSON Schema)

Name | Required
catalog | Yes
database | Yes

Output Schema (JSON Schema)

Name | Required | Description
count | Yes | Total number of tables.
tables | Yes | List of table names.
catalog | Yes | Name of the catalog.
database | Yes | Name of the database.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, which the description doesn't contradict. The description adds valuable context about the tool's role in the data hierarchy and workflow dependencies, though it doesn't mention rate limits, authentication needs, or error behaviors beyond what annotations cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core purpose but includes extensive sections (workflow, prerequisites, next steps, parameters, returns, examples) that add value but reduce conciseness. Some information (like the Returns section) is redundant given the output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, workflow dependencies), the description is highly complete. It covers purpose, usage sequence, parameters, returns, and examples. With annotations and an output schema, no critical gaps remain for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description explicitly defines both parameters (catalog and database) in the Parameters section and provides example usage. However, it doesn't add semantic details like format constraints or examples beyond basic naming, so it partially compensates for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all tables') and resource ('in a given database'), and distinguishes it from siblings by positioning it as the final level in the data hierarchy before accessing table schemas. The title reinforces this purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided with workflow instructions ('Call this after list_databases_tool'), prerequisites (calling search_documentation_tool, list_catalogs_tool, and list_databases_tool first), and next steps (describe_table_tool, generate_spatial_query_tool, execute_query_tool). It clearly establishes when to use this tool in sequence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_documentation_tool: Search Wherobots Documentation (A)
Read-only
Inspect

Search the Wherobots documentation.

⚠️ WORKFLOW: This should be your FIRST tool call for any query task. Before writing queries, ALWAYS search documentation to understand:

  • Available spatial functions and their syntax

  • Best practices and common patterns

  • Example queries for similar use cases

📋 NEXT STEPS after this tool:

  1. Use catalog tools (list_catalogs, list_databases, list_tables) to find data

  2. Use describe_table_tool to understand table schemas

  3. Use generate_spatial_query_tool to create SQL queries

This tool searches the official Wherobots documentation to find relevant information about spatial functions, data formats, best practices, and more. It's useful when users need help understanding Wherobots features or syntax.

Parameters

query : str
    The search query string.
ctx : Context
    FastMCP context (injected automatically)
page_size : int | None
    Optional number of results to return (default: 10).

Returns

DocumentationSearchOutput
    A structured object containing documentation search results.
    - 'results': List of DocumentResult objects, each containing:
        - 'content': The documentation content snippet
        - 'path': The URL path to the documentation page
        - 'metadata': Additional metadata about the result

Example Usage for LLM:

  • When user asks about Wherobots features, functions, or syntax

  • When generating queries and need context about spatial functions

  • Example User Queries and corresponding Tool Calls:

  • User: "How do I use ST_INTERSECTS in Wherobots?"

  • Tool Call: search_documentation("ST_INTERSECTS spatial function")

  • User: "What spatial functions are available for distance calculations?"

  • Tool Call: search_documentation("spatial distance functions")

  • User: "How do I connect to Wherobots from Python?"

  • Tool Call: search_documentation("Python connection API")
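The DocumentationSearchOutput structure described in the Returns section can be filtered like this. The field names follow the description above; the result content, path, and metadata are hypothetical:

```python
# Hypothetical DocumentationSearchOutput payload per the Returns description:
# each result carries 'content', 'path', and 'metadata'.
search_output = {
    "results": [
        {
            "content": "ST_Intersects(geomA, geomB) returns true when ...",
            "path": "/reference/vector-functions/st_intersects",
            "metadata": {"section": "Spatial Predicates"},
        },
    ],
}

# Keep only the doc pages whose snippet mentions the function of interest,
# so the agent can ground its generated SQL in the matching reference page.
hits = [
    result["path"]
    for result in search_output["results"]
    if "ST_Intersects" in result["content"]
]
```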

For the best way to understand how to use spatial SQL in Wherobots, refer to the following Wherobots Documentation:

  • Writing Effective Spatial Queries in Wherobots: https://docs.wherobots.com/develop/write-spatial-queries

  • Writing Effective Spatial Queries in Wherobots complete example sub-section: https://docs.wherobots.com/develop/write-spatial-queries#complete-example:-counting-baseball-stadiums-in-the-us

The complete example demonstrates:

  • Starting the SedonaContext

  • Using wkls where applicable

  • Loading spatial data from Wherobots tables

  • Using the correct schema from wherobots_open_data spatial catalogs

  • Writing spatial SQL queries effectively

Parameters (JSON Schema)

Name | Required
query | Yes
page_size | No

Output Schema (JSON Schema)

Name | Required | Description
results | Yes | List of search results
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it specifies this is for searching 'official Wherobots documentation,' provides workflow sequencing guidance, and references external documentation links. However, it doesn't mention rate limits, authentication needs, or pagination behavior beyond the page_size parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (workflow, next steps, parameters, returns, examples) but is overly verbose. It includes redundant information (repeating the tool's purpose multiple times) and extensive example sections that could be condensed. The external documentation links at the end add value but could be integrated more efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, annotations covering safety, and an output schema defining DocumentationSearchOutput, the description is mostly complete. It explains the tool's role in the workflow, provides usage examples, and references external resources. However, it could better explain how search results are ranked or filtered, and the parameter semantics section has some gaps despite the output schema existing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the schema provides no parameter documentation. The description adds some semantic context: it explains 'query' is a 'search query string' and 'page_size' is 'Optional number of results to return (default: 10).' However, it doesn't provide guidance on query formulation, result ranking, or what 'ctx' parameter does beyond 'injected automatically,' leaving gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Wherobots documentation' and specifies it's for finding 'information about spatial functions, data formats, best practices, and more.' It distinguishes from siblings by focusing on documentation search rather than data catalog exploration or query execution, with explicit differentiation in the workflow section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'This should be your FIRST tool call for any query task' and 'Before writing queries, ALWAYS search documentation.' It offers clear alternatives by listing next steps with specific sibling tools (catalog tools, describe_table_tool, generate_spatial_query_tool) and provides example user queries with corresponding tool calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
