Server Details

Query BigQuery, Snowflake, Redshift & Azure Synapse with natural language

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Available Tools

20 tools
collect_feedback

Collects user feedback on the provided response.

When to use this tool:

  • After providing an analysis, a SQL query, or an important response

  • When you want to know if the response was helpful

  • Naturally suggest: "Was this response helpful? 👍 👎"

Ratings:

  • 'positive': The response was helpful and accurate

  • 'negative': The response was not satisfactory

  • 'neutral': Neither satisfied nor dissatisfied

Categories (optional):

  • 'accuracy': Was the response accurate?

  • 'relevance': Did the response address the question?

  • 'completeness': Was the response complete?

  • 'speed': Was the response time acceptable?

  • 'other': Other feedback

Feedback usage: Feedback is used to improve future responses (RAG, analytics).

Parameters (JSON Schema):

  • rating (required): 'positive', 'negative', or 'neutral'
  • comment (optional): free-form user comment
  • category (optional): feedback category: 'accuracy', 'relevance', 'completeness', 'speed', 'other'
  • tool_name (optional): the name of the tool whose response is being rated
  • original_question (optional): the user's original question for which feedback is given (to improve RAG)
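As a minimal sketch, a client might assemble the arguments for a collect_feedback call like this. The field names and enum values come from the schema above; the helper name and the client-side validation are illustrative assumptions, not part of the server.

```python
# Hypothetical helper for building collect_feedback arguments.
# Only the field names and enum values are taken from the published schema.
VALID_RATINGS = {"positive", "negative", "neutral"}
VALID_CATEGORIES = {"accuracy", "relevance", "completeness", "speed", "other"}

def build_feedback(rating, comment=None, category=None,
                   tool_name=None, original_question=None):
    """Assemble collect_feedback arguments, enforcing the documented enums."""
    if rating not in VALID_RATINGS:
        raise ValueError(f"rating must be one of {sorted(VALID_RATINGS)}")
    if category is not None and category not in VALID_CATEGORIES:
        raise ValueError(f"category must be one of {sorted(VALID_CATEGORIES)}")
    args = {"rating": rating}
    # Optional fields are omitted entirely rather than sent as null.
    for key, value in [("comment", comment), ("category", category),
                       ("tool_name", tool_name),
                       ("original_question", original_question)]:
        if value is not None:
            args[key] = value
    return args

payload = build_feedback("positive", category="accuracy",
                         original_question="Weekly spend by channel?")
```

Omitting unset optional fields keeps the payload aligned with the schema's required/optional split.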
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide only a title with no safety hints. The description compensates by disclosing data usage ('Feedback is used to improve future responses (RAG, analytics)'), which explains the side effects and persistence model. Does not mention idempotency or failure modes, but covers the critical behavioral aspect of data lifecycle.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with markdown headers organizing content into logical sections (When to use, Ratings, Categories, Feedback usage). Front-loaded with the core purpose. Length is appropriate given the tool's social/UX complexity, though the Ratings and Categories sections partially duplicate schema enums.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a feedback collection tool with 100% schema coverage and no output schema, the description is complete. It covers invocation timing, parameter semantics, UX patterns, and data disposition (RAG/analytics). No gaps remain that would prevent correct agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). The description adds valuable semantic context beyond the schema: it frames ratings as satisfaction levels ('helpful and accurate', 'not satisfactory'), poses categories as evaluative questions ('Was the response accurate?'), and explains the RAG purpose of 'original_question'. This narrative framing aids agent reasoning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence 'Collects user feedback on the provided response' provides a specific verb (collects) and resource (user feedback). It clearly distinguishes this tool from its analytics-focused siblings (execute_query, create_aggregation_view, etc.) by establishing a meta-level feedback function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains an explicit 'When to use this tool:' section with specific triggers ('After providing an analysis, a SQL query...'), includes UX guidance ('Naturally suggest: Was this response helpful? 👍 👎'), and defines the emotional context for soliciting feedback. While no alternatives are listed, none exist in the sibling set.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_aggregation_view

Creates a materialized view or stored procedure in the project's BigQuery data warehouse for data pre-aggregation.

When to use this tool:

  • When the user needs to pre-aggregate data from multiple connectors (e.g., cross-channel marketing report)

  • When a query is too slow to run on-demand and benefits from materialization

  • When the user asks to "create a view", "save this as a table", "materialize this query"

Naming rules (enforced):

  • Target dataset MUST be 'quanti_agg' (created automatically if it doesn't exist)

  • Object name MUST start with 'llm_' prefix (e.g., llm_weekly_spend)

  • Format: CREATE MATERIALIZED VIEW quanti_agg.llm_name AS SELECT ...

SQL format:

  • CREATE MATERIALIZED VIEW: for pre-computed aggregation tables

  • CREATE OR REPLACE MATERIALIZED VIEW: to update an existing view

  • CREATE PROCEDURE: for complex multi-step transformations

Example: CREATE MATERIALIZED VIEW quanti_agg.llm_weekly_channel_spend AS SELECT DATE_TRUNC(date, WEEK) as week, channel, SUM(spend) as total_spend FROM prod_google_ads_v2.campaign_stats GROUP BY 1, 2

Limits: Maximum 20 active aggregation views per project.

Parameters (JSON Schema):

  • sql (required): the CREATE MATERIALIZED VIEW or CREATE PROCEDURE SQL statement. Must follow the naming rules (llm_ prefix, quanti_agg dataset).
  • project_id (required): the project folderId (e.g., p57d4af1b)
  • description (optional): business description of the aggregation (what it computes, when to use it). Recommended.
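The enforced naming rules above can be checked client-side before submitting. This is a sketch covering only the materialized-view form (CREATE PROCEDURE would need a separate pattern); the function name and the regex are illustrative, and the server remains the authority on what it accepts.

```python
import re

# Sketch of the documented naming rules: quanti_agg dataset, llm_ prefix.
VIEW_PATTERN = re.compile(
    r"^CREATE\s+(?:OR\s+REPLACE\s+)?MATERIALIZED\s+VIEW\s+"
    r"quanti_agg\.llm_\w+\s+AS\s+SELECT\b",
    re.IGNORECASE,
)

def check_view_sql(sql):
    """Return True if the statement matches the documented view format."""
    return bool(VIEW_PATTERN.search(sql.strip()))

ok = check_view_sql(
    "CREATE MATERIALIZED VIEW quanti_agg.llm_weekly_channel_spend AS "
    "SELECT DATE_TRUNC(date, WEEK) as week, channel, SUM(spend) as total_spend "
    "FROM prod_google_ads_v2.campaign_stats GROUP BY 1, 2"
)
# Wrong dataset and missing llm_ prefix: should fail the check.
bad = check_view_sql("CREATE MATERIALIZED VIEW my_dataset.weekly AS SELECT 1")
```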
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide only a title with no safety hints. The description adds critical behavioral constraints: enforced naming rules (quanti_agg dataset, llm_ prefix), automatic dataset creation if missing, the distinction between CREATE and CREATE OR REPLACE operations, and the hard limit of 20 active views per project. No contradictions with annotations exist.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The markdown structure with clear headers (When to use, Naming rules, SQL format, Example, Limits) front-loads critical information. Each section serves a distinct purpose (constraints, usage triggers, and examples); though the overall length is substantial, it avoids redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a complex write operation with strict schema constraints and no output schema, the description comprehensively covers naming enforcement, usage limits, and behavioral side effects (auto-created datasets). Minor gap: it does not describe the return value or error behavior when constraints are violated.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description significantly enhances the 'sql' parameter by detailing the exact naming constraints (quanti_agg dataset, llm_ prefix), providing SQL format options (MATERIALIZED VIEW vs. PROCEDURE), and including a concrete code example that demonstrates valid syntax.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence explicitly states the tool creates 'a materialized view or stored procedure in the project's BigQuery data warehouse for data pre-aggregation.' It clearly distinguishes from sibling execute_query (on-demand vs. materialized) and list_aggregation_views (read vs. write operations) by emphasizing persistent storage creation.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The dedicated 'When to use this tool' section provides explicit trigger phrases ('create a view', 'save this as a table', 'materialize this query') and specific scenarios (cross-channel reports, slow queries benefiting from materialization). This clearly delineates when to use this tool versus execute_query or other siblings.

create_use_case

Creates and saves a new use case (reusable analysis).

When to use this tool:

  • When the user asks to "save this analysis", "create a use case", "remember this query"

  • After building a SQL query the user wants to reuse

  • To capitalize on a recurring business analysis

Available scopes:

  • 'member' (default): Personal use case, visible only to you

  • 'project': Shared with the entire project team (requires project_id)

Best practices:

  • Slug: technical identifier in snake_case (e.g., weekly_campaign_performance)

  • Name: human-readable name (e.g., "Weekly Campaign Performance")

  • Description: explain the business context and when to use this analysis

  • SQL template: include the SQL query if it's generic and reusable

Parameters (JSON Schema):

  • name (required): display name, human-readable (e.g., 'Weekly Campaign Performance')
  • slug (required): unique identifier in snake_case (e.g., 'weekly_campaign_perf'). No spaces or special characters.
  • scope (optional): visibility: 'member' (personal, default) or 'project' (shared with the team)
  • prompt (optional): structured prompt for use cases that don't rely on SQL (e.g., analysis instructions, business steps, reporting guidelines)
  • category (optional): category to organize use cases (e.g., 'performance', 'attribution', 'budget', 'audience')
  • project_id (optional): the project folderId. Required only if scope='project'.
  • description (optional): business description: what problem does this use case solve? When to use it? What metrics are analyzed?
  • sql_template (optional): BigQuery SQL template for the analysis. Use placeholders if needed (e.g., {{start_date}}, {{campaign_id}})
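The scope/project_id interaction and the slug format lend themselves to a pre-flight check. A sketch, assuming a hypothetical build_use_case helper; the slug regex is an interpretation of "snake_case, no spaces or special characters", and the server enforces its own rules.

```python
import re

def build_use_case(name, slug, scope="member", project_id=None, **optional):
    """Validate slug format and the scope/project_id interaction
    before calling create_use_case. Illustrative client-side check."""
    if not re.fullmatch(r"[a-z][a-z0-9_]*", slug):
        raise ValueError("slug must be snake_case, no spaces or special characters")
    if scope not in ("member", "project"):
        raise ValueError("scope must be 'member' or 'project'")
    if scope == "project" and project_id is None:
        raise ValueError("project_id is required when scope='project'")
    args = {"name": name, "slug": slug, "scope": scope}
    if project_id is not None:
        args["project_id"] = project_id
    # Pass through any other optional fields (description, sql_template, ...).
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

args = build_use_case("Weekly Campaign Performance", "weekly_campaign_perf",
                      scope="project", project_id="p57d4af1b")
```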
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no behavioral annotations provided (only title), the description carries the full burden. It adequately discloses persistence ('saves'), visibility rules ('member' vs 'project' scope with team sharing implications), and content flexibility (SQL vs prompt-based). However, it lacks details on error handling (e.g., duplicate slug behavior) or idempotency, preventing a top score.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with markdown headers that front-load intent and organize supporting details (When to use, Available scopes, Best practices). Slightly verbose but every section provides actionable guidance for tool selection and invocation; no filler content.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an 8-parameter creation tool with no output schema, the description comprehensively covers the primary logic branches (scope selection, conditional project_id requirement, SQL vs prompt content modes). Minor gap: does not describe the return value or success confirmation on creation.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, setting a baseline of 3. The description adds valuable best-practice guidance beyond the schema: concrete examples for slug format (weekly_campaign_performance), usage guidance for SQL templates ('if it's generic and reusable'), and clarifies the relationship between scope='project' and the project_id requirement.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action verb ('Creates and saves') and clear resource ('use case'), immediately clarified as a 'reusable analysis'. This effectively distinguishes it from siblings like execute_query (which runs queries) and update_use_case/delete_use_case (which manage existing ones).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains an explicit 'When to use this tool:' section with specific trigger phrases ('save this analysis', 'remember this query') and contextual cues ('After building a SQL query the user wants to reuse'). This clearly signals when to select this tool over alternatives like execute_query or get_use_cases.

delete_aggregation_view (Destructive)

Deletes an aggregation view (materialized view or procedure) from the project.

When to use this tool:

  • When the user explicitly asks to delete/drop a view

  • To clean up unused or obsolete aggregations

  • When the project has reached the maximum number of views (20)

Warning: This marks the view as dropped in Quanti's tracking. The actual BigQuery object may need manual cleanup.

Tip: Use list_aggregation_views first to get the view ID.

Parameters (JSON Schema):

  • view_id (required): the UUID of the aggregation view to delete. Get it from list_aggregation_views.
  • project_id (required): the project folderId
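Since view_id must be a UUID obtained from list_aggregation_views, a client can sanity-check it before making the destructive call. The helper name is hypothetical; only the argument names come from the schema above.

```python
import uuid

def delete_view_args(view_id, project_id):
    """Sanity-check arguments for a destructive delete_aggregation_view call.
    view_id should be a UUID returned by list_aggregation_views."""
    uuid.UUID(view_id)  # raises ValueError if not a well-formed UUID
    return {"view_id": view_id, "project_id": project_id}

args = delete_view_args("123e4567-e89b-12d3-a456-426614174000", "p57d4af1b")
```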
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the destructiveHint=true annotation, the description adds crucial behavioral context: the operation marks the view as 'dropped in Quanti's tracking' while noting that 'the actual BigQuery object may need manual cleanup.' This clarifies the exact scope of destruction. It also documents the 20-view limit constraint.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with clear visual hierarchy (main action, When to use, Warning, Tip). Every sentence serves a distinct purpose. The warning about BigQuery cleanup is appropriately prominent for a destructive operation. No wasted words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive 2-parameter tool with no output schema, the description provides adequate context including prerequisites, usage triggers (the 20-view limit), and cleanup warnings. Could be improved by mentioning error handling (e.g., what happens if view_id doesn't exist) or idempotency, but covers the essential behavioral traits.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters including the instruction to get view_id from list_aggregation_views. The description's 'Tip' section reinforces this mapping but does not add significant new semantic meaning beyond what the structured schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Deletes') and clearly identifies the resource ('aggregation view/materialized view or procedure') and scope ('from the project'). It effectively distinguishes from sibling tools like create_aggregation_view (inverse operation) and list_aggregation_views (prerequisite for obtaining IDs).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides an explicit 'When to use this tool' section with three specific scenarios, including the concrete constraint of 'maximum number of views (20)'. Includes a 'Tip' explicitly naming list_aggregation_views as a prerequisite, establishing clear workflow ordering.

delete_use_case (Destructive)

Permanently deletes a use case you created.

When to use this tool:

  • When the user explicitly asks to delete a use case

  • To clean up obsolete or duplicate use cases

Warning: This action is irreversible. The use case will be permanently deleted.

Permissions: You can only delete use cases you created.

Tip: Ask for user confirmation before deleting.

Parameters (JSON Schema):

  • use_case_id (required): the UUID of the use case to delete. Get it from list_my_use_cases or list_project_use_cases.
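The description's tip to ask for confirmation before an irreversible delete can be encoded as a simple gate. A sketch with a hypothetical request_delete helper; returning None here is just one way to signal "ask the user first".

```python
def request_delete(use_case_id, user_confirmed):
    """Gate the irreversible delete_use_case call behind explicit user
    confirmation, as the tool description recommends (illustrative)."""
    if not user_confirmed:
        return None  # signal the agent to ask the user before proceeding
    return {"use_case_id": use_case_id}

# Without confirmation the call arguments are never produced.
blocked = request_delete("123e4567-e89b-12d3-a456-426614174000", user_confirmed=False)
```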
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (destructiveHint: true), adds crucial context: states action is 'irreversible' and 'permanently' deletes, specifies auth constraint 'You can only delete use cases you created', and recommends confirmation workflow. Does not describe cascading effects or error states, but covers primary safety concerns.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with markdown headers (When to use, Warning, Permissions, Tip). Front-loaded with the core action. Every sentence earns its place; no redundancy. Appropriate length for a destructive operation requiring safety warnings.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter deletion tool with no output schema, covers irreversibility, ownership permissions, and confirmation needs. Missing explicit mention of error cases (e.g., 'not found') or potential cascading effects, but sufficiently complete given the tool's narrow scope and existing safety documentation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the use_case_id parameter fully documented in the schema (type, format, data source). Description does not add parameter syntax details, but with complete schema coverage, baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Permanently deletes a use case you created'—a specific verb (deletes) + resource (use case) combination. It distinguishes from siblings like create_use_case and update_use_case by emphasizing 'permanently' and ownership constraint 'you created'.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'When to use this tool' section with bullet points covering explicit user requests and cleanup scenarios. Includes warning about irreversibility, permission constraints, and a tip to ask for confirmation—comprehensive guidance for a destructive operation.

execute_query (Read-only, Idempotent)

Executes a read-only SQL SELECT query on the project's BigQuery data warehouse. No data modification allowed.

Table format: Use dataset.table (e.g., prod_google_ads_v2.campaign_stats). Do NOT prefix with a project_id.

Parameters (JSON Schema):

  • sql (required): the SQL SELECT query to execute (BigQuery standard SQL). Tables in dataset.table format only.
  • project_id (required): the project folderId (Quanti identifier, not the BigQuery project)
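The two documented constraints (SELECT-only, dataset.table names with no project prefix) can be mirrored in a client-side pre-flight check. Illustrative only: the regexes are an approximation, and the server enforces the real rules.

```python
import re

def check_query(sql):
    """Pre-flight checks mirroring execute_query's documented constraints."""
    if not re.match(r"\s*SELECT\b", sql, re.IGNORECASE):
        raise ValueError("execute_query only accepts SELECT statements")
    # A three-part name like project.dataset.table indicates a forbidden
    # project_id prefix; only dataset.table is allowed.
    if re.search(r"\bFROM\s+`?\w+\.\w+\.\w+", sql, re.IGNORECASE):
        raise ValueError("use dataset.table format; do not prefix a project_id")
    return True

ok = check_query(
    "SELECT channel, SUM(spend) FROM prod_google_ads_v2.campaign_stats GROUP BY 1"
)
```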
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint and idempotentHint; description adds critical BigQuery-specific behavioral constraints not in annotations—specifically the table formatting requirements ('Use dataset.table... Do NOT prefix with a project_id') which prevent common query errors.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: three sentences with bold formatting for scannability. Front-loaded with the safety constraint ('read-only'), followed by the resource, then the specific syntactic rules. Zero redundancy.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With complete parameter coverage and clear behavioral constraints, description suffices for invocation. However, lacks any indication of return format (rows vs objects), result limits, or timeout behavior, which would be valuable for BigQuery execution given potential for large result sets.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), description adds significant value beyond schema: provides concrete table format examples (e.g., 'prod_google_ads_v2.campaign_stats') and explicit negative constraints ('Do NOT prefix') that clarify the sql parameter semantics more vividly than schema alone.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Executes' and clear resource 'read-only SQL SELECT query', explicitly scoping to BigQuery. The 'read-only' and 'No data modification allowed' clauses clearly distinguish this from sibling mutation tools like create_aggregation_view, delete_use_case, and update_use_case.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'No data modification allowed' establishing clear boundaries for when NOT to use. However, lacks explicit pointer to which sibling tool should be used for writes (e.g., create_aggregation_view) or when to prefer list_* convenience methods over raw SQL.

get_help (Read-only, Idempotent)

Searches the official Quanti documentation (docs.quanti.io) to answer questions about using the platform.

When to use this tool:

  • When the user asks "how to do X in Quanti?", "what is a connector?", "how to configure BigQuery?"

  • When the user needs help configuring or using a connector (Google Ads, Meta, Piano, etc.)

  • To explain Quanti concepts: projects, connectors, prebuilds, data warehouse, tag tracker, transformations

  • When the user asks about the Quanti MCP (setup, overview, semantic layer)

This tool does NOT replace:

  • get_schema_context: to get the actual BigQuery schema for a client project

  • list_prebuilds: to list pre-configured reports for a connector

  • get_use_cases: to find reusable analyses

  • execute_query: to execute SQL

Available topic filters: connectors, data-warehouses, data-management, tag-tracker, mcp-server, transformations

Parameters (JSON Schema):

  • query (required): the user's question about Quanti (e.g., 'how to configure Google Ads?', 'what is the lookback window?')
  • topic (optional): filter by documentation section: 'connectors', 'data-warehouses', 'data-management', 'tag-tracker', 'mcp-server', 'transformations'
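The closed list of topic filters makes a simple client-side check possible. A sketch, assuming a hypothetical help_args helper; the topic values are taken verbatim from the schema above.

```python
# Valid topic filters as listed in the get_help schema.
TOPICS = {"connectors", "data-warehouses", "data-management",
          "tag-tracker", "mcp-server", "transformations"}

def help_args(query, topic=None):
    """Build get_help arguments, validating the optional topic filter."""
    if topic is not None and topic not in TOPICS:
        raise ValueError(f"topic must be one of {sorted(TOPICS)}")
    args = {"query": query}
    if topic is not None:
        args["topic"] = topic
    return args

args = help_args("how to configure Google Ads?", topic="connectors")
```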
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds that it searches 'docs.quanti.io' (external source context) but does not elaborate on behavior like result ranking, empty result handling, or rate limits. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with bold headers front-loading critical information. Bullet points for usage scenarios and exclusions maximize information density. No redundant or wasted sentences despite the length.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a documentation search tool of low complexity, the description is complete. It covers purpose, usage boundaries, parameters, and sibling differentiation. Minor gap: does not describe the return format (excerpts vs. full pages), though this is less critical given the tool's straightforward utility.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by providing concrete query examples ('how to configure Google Ads?', 'what is the lookback window?') and explicitly listing valid topic filter values, reinforcing parameter intent beyond the schema strings.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Searches') and resource ('official Quanti documentation'), clearly stating the tool's function. It distinguishes from siblings through the explicit 'This tool does NOT replace' section naming get_schema_context, list_prebuilds, and others.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'When to use this tool' section with concrete scenarios (e.g., 'how to configure BigQuery?', 'what is a connector?') and a 'This tool does NOT replace' section listing specific alternatives. Provides perfect guidance for agent selection.

get_launch_context (Read-only, Idempotent)

Retrieves the full context of a Quanti launch session. The user has pre-configured an analysis from the Quanti interface and was redirected here with a launch_id. Call this function to get the analysis details to execute (name, prompt or SQL template, project).

Parameters (JSON Schema):

  • launch_id (required): the launch session identifier (UUID provided in the initial prompt)
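Since the launch_id arrives embedded in the initial prompt, an agent can extract it with a UUID pattern. The prompt wording in the example is an assumption; only the UUID shape is standard.

```python
import re

# Standard 8-4-4-4-12 hexadecimal UUID shape.
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")

def extract_launch_id(initial_prompt):
    """Pull the launch_id UUID out of the initial prompt, or None if absent."""
    match = UUID_RE.search(initial_prompt)
    return match.group(0) if match else None

launch_id = extract_launch_id(
    "Launch session 123e4567-e89b-12d3-a456-426614174000: run the saved analysis")
```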
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context beyond annotations: it specifies exactly what data is retrieved ('name, prompt or SQL template, project') and explains the origin context (redirected from Quanti interface). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences total, front-loaded with purpose. First sentence declares the action. Second sentence provides workflow context. Third sentence clarifies return value and execution intent. Zero redundancy—every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description comprehensively lists the returned data structure ('name, prompt or SQL template, project'). It covers the complete workflow (pre-configuration → redirect → retrieval) for this single-parameter read operation. Nothing critical is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (launch_id fully described). Baseline is 3. The description adds workflow context about the parameter's source ('redirected here with a launch_id', 'provided in the initial prompt'), integrating the UUID into the narrative of how the user arrives at this tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Retrieves') and resource ('full context of a Quanti launch session'). Clearly distinguishes from sibling tools like get_project_context or get_schema_context by specifying the unique workflow: 'pre-configured an analysis from the Quanti interface and was redirected here with a launch_id'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear when-to-use context: when 'the user has pre-configured an analysis from the Quanti interface and was redirected here with a launch_id.' Explains the prerequisite (launch_id from redirect) and implied next step (get details to execute). Lacks explicit 'when not to use' or named alternatives, but the workflow specificity provides strong implicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_project_context (Grade A)
Read-only, Idempotent

Gets the context of a project (active connectors, available datasets, branding). Use the folderId obtained from list_projects. The response includes a 'branding' object (logo_url, primary_color, secondary_color, tertiary_color, font_family) when configured. Always call this tool before generating a report to apply the project's visual identity.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| level | No | Detail level (0=minimal, 1=standard, 2=full) | 1 |
| project_id | Yes | The project folderId (e.g., p57d4af1b) | |
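Because the 'branding' object is only present when configured, report-generation code should fall back to neutral defaults. A minimal sketch, using the field names from the description above; the default values themselves are illustrative assumptions:

```python
# Illustrative defaults; real values come from the project's branding config.
FALLBACK_BRANDING = {
    "logo_url": None,
    "primary_color": "#333333",
    "secondary_color": "#666666",
    "tertiary_color": "#999999",
    "font_family": "sans-serif",
}

def resolve_branding(project_context: dict) -> dict:
    """Merge a project's branding (if configured) over neutral defaults."""
    branding = project_context.get("branding") or {}
    return {key: branding.get(key, default) for key, default in FALLBACK_BRANDING.items()}
```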
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, covering safety profile. The description adds value by disclosing what data is retrieved ('active connectors, available datasets'), but does not mention pagination, caching, error conditions, or authentication requirements beyond the structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences with zero waste. The first sentence front-loads the action and scope; the second provides prerequisite context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, flat schema, 100% coverage) and presence of readOnly/idempotent annotations, the description adequately covers purpose, content scope, and usage prerequisites. No output schema exists, but the description reasonably summarizes the return value conceptually.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds meaningful workflow context by specifying that the project_id parameter should be sourced from list_projects ('Use the folderId obtained from list_projects'), providing semantic linkage beyond the raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('Gets') and resource ('context of a project'), with parenthetical clarification of what the context includes ('active connectors, available datasets'). It references 'list_projects', hinting at workflow relationships with siblings, but does not explicitly contrast its function against all available alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite guidance ('Use the folderId obtained from list_projects'), establishing a clear workflow dependency on the sibling tool. However, it lacks explicit 'when not to use' guidance or named alternatives for different use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_schema_context (Grade A)
Read-only, Idempotent

Builds the schema context for generating BigQuery SQL queries. Returns relevant tables with their fields and semantic definitions. Call this function with the user's question before writing SQL.

IMPORTANT for SQL queries: Use ONLY the dataset.table format (e.g., prod_google_ads_v2.campaign_stats). NEVER add a project_id before table names. The full_name field of each table already contains the complete name to use in your queries.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| question | Yes | The user's question (e.g., 'What are the campaign spend this month?') | |
| max_tables | No | Maximum number of tables to return | 5 |
| project_id | Yes | The project folderId (e.g., p57d4af1b) | |
| include_fields | No | Include table fields | true |
| include_semantic_defs | No | Include semantic metric definitions | true |
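The 'dataset.table only' rule lends itself to a defensive check before running generated SQL. A hypothetical helper, assuming the three-part `project.dataset.table` form is the failure mode to guard against; the function name is ours:

```python
def normalize_table_name(full_name: str) -> str:
    """Reduce a table reference to the required dataset.table form.

    get_schema_context's full_name field already uses this form; this guard
    strips an accidentally prepended project id (e.g. from model output).
    """
    parts = full_name.strip("`").split(".")
    if len(parts) == 3:
        # project.dataset.table -> drop the project segment.
        parts = parts[1:]
    if len(parts) != 2:
        raise ValueError(f"expected dataset.table, got: {full_name!r}")
    return ".".join(parts)
```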
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint and idempotentHint, so the description doesn't need to cover safety. It adds valuable behavioral context: it specifies the return structure (tables, fields, semantic definitions) and discloses a critical formatting constraint—'Use ONLY the dataset.table format...NEVER add a project_id'—which prevents query errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with two focused paragraphs: purpose/invocation timing first, critical formatting constraints second. The bold/caps 'IMPORTANT' section is appropriately aggressive to prevent destructive formatting errors. No extraneous sentences, though the formatting warning could be considered slightly visually noisy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description compensates by explaining what gets returned (tables, fields, semantic definitions). It adequately covers the relationship to sibling tools (specifically execute_query) for a 5-parameter tool. Could strengthen by mentioning default behaviors (max_tables: 5) in description text.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description elevates this by providing an example question ('What are the campaign spend this month?') and crucial contextual constraints about the project_id parameter (don't prepend it to table names despite it being a required parameter).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'Builds the schema context' and 'Returns relevant tables with their fields,' specifying the resource (BigQuery schema metadata). It distinguishes from execute_query by instructing to call it 'before writing SQL,' though it doesn't explicitly differentiate from similar context tools like get_project_context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance: 'Call this function with the user's question before writing SQL.' This clearly establishes when to invoke the tool in the workflow. However, it lacks explicit 'when not to use' guidance or named alternatives (e.g., when to use get_project_context instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_use_cases (Grade A)
Read-only, Idempotent

Searches for relevant use cases to answer the user's question. Use cases contain SQL templates and business definitions. Use this function to discover available analyses.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Maximum number of use cases to return | 5 |
| category | No | Filter by category (e.g., 'marketing-analytics', 'platform-specific') | |
| question | Yes | The user's question or need (e.g., 'How to analyze my campaign performance?') | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover readOnly/idempotent safety; description adds valuable domain context that use cases contain 'SQL templates and business definitions' (helps agent understand return value nature). Doesn't explain relevance ranking or result format, so moderate value-add beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose, content explanation, usage guidance. Each earns its place. A slight deduction because sentence 3 somewhat overlaps with sentence 1's intent clause, though 'discover' adds the exploration angle.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Good domain context explaining what use cases are. Absent output schema means description should hint at return structure—it does by mentioning SQL templates/business definitions. Could strengthen by noting results are ranked by relevance to the question.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description maps 'user's question' to the question parameter semantically, but doesn't add syntax guidance, example values, or relationships between category and question parameters beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Searches' and resource 'use cases'. Explains what use cases contain (SQL templates, business definitions). However, it doesn't explicitly differentiate from sibling 'list_my_use_cases' or 'list_project_use_cases'—it hints at search vs list with 'discover' but lacks explicit contrast.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides 'Use this function to discover available analyses' which implies exploratory usage. However, it lacks explicit when-to-use guidance versus the various list_* siblings or prerequisites for the question parameter. Adequate but not directive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_aggregation_views (Grade A)
Read-only, Idempotent

Lists aggregation views (materialized views and procedures) created for a project.

When to use this tool:

  • When the user asks "what views exist?", "my aggregations", "my materialized views"

  • Before creating a new view to check it doesn't already exist

  • To get the view ID for deletion

Response format: Returns a JSON array with each view's ID, full_name (dataset.name), type, SQL, description, and creation date.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | The project folderId (e.g., p57d4af1b) | |
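Since the response is a plain JSON array, the two workflow uses named above (existence check before create, ID lookup before delete) reduce to a small index keyed by full_name. A sketch assuming the documented fields; the sample data is invented:

```python
import json

def index_views(response_json: str) -> dict:
    """Map each view's full_name (dataset.name) to its ID for quick lookup."""
    return {view["full_name"]: view["id"] for view in json.loads(response_json)}

# Invented sample mirroring the documented response fields.
sample = json.dumps([
    {"id": "agg_001", "full_name": "reporting.daily_spend", "type": "materialized_view",
     "sql": "SELECT ...", "description": "Daily spend rollup", "created_at": "2024-01-15"},
])
views = index_views(sample)
exists = "reporting.daily_spend" in views     # existence check before create
view_id = views.get("reporting.daily_spend")  # ID lookup before delete
```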
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. Since no output schema exists, the description adds significant value by detailing the JSON array response structure (ID, full_name, type, SQL, description, creation date). However, it omits pagination behavior, permission requirements, or performance characteristics for large projects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with bold headers ('When to use this tool:', 'Response format:') creating clear information hierarchy. Purpose statement is front-loaded. Bullet points improve scannability. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter list operation with good annotations, the description is complete. It compensates for the missing output schema by fully describing return fields, provides usage guidance, and distinguishes from sibling mutations. No critical gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with project_id fully documented ('The project folderId (e.g., p57d4af1b)'). Description does not explicitly discuss parameters, but with complete schema coverage, no additional parameter semantics are required in the description text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Lists' plus resource 'aggregation views (materialized views and procedures)' and scope 'created for a project'. The parenthetical clarification distinguishes this from generic views, and the verb clearly differentiates it from sibling tools create_aggregation_view and delete_aggregation_view.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'When to use this tool:' section with three specific scenarios including natural language triggers ('what views exist?', 'my aggregations') and workflow guidance ('Before creating a new view to check it doesn't already exist', 'To get the view ID for deletion'). Explicitly maps to sibling operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_my_use_cases (Grade A)
Read-only, Idempotent

Lists your personal use cases (scope: member).

What is a use case? A use case is a reusable analysis you created or saved. It contains a business description and optionally a SQL template.

When to use this tool:

  • When the user asks for "my analyses", "my use cases", "what I saved"

  • Before creating a new use case to check it doesn't already exist

  • To find the ID of a use case to modify or delete

Visibility: These use cases are private and only visible to you.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations cover read-only/idempotent safety, the description adds crucial behavioral context: the visibility constraints ('private and only visible to you') and the structural contents of returned use cases ('business description and optionally a SQL template').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear markdown headers (**What is**, **When to use**, **Visibility**). Information is front-loaded with the core action, and every sentence serves a specific purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately completes the picture by defining what a use case is (the items being listed) and its visibility constraints. Given the tool's simplicity (zero params) and the existence of sibling tools, this level of description is sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per evaluation rules, this establishes a baseline score of 4, which is appropriate as there are no parameter semantics to describe.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Lists' and resource 'your personal use cases', explicitly defining the scope as 'member' to distinguish from the sibling 'list_project_use_cases'. It defines the resource type clearly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'When to use this tool' section with three distinct scenarios (retrieving saved items, duplicate checking, ID retrieval for mutations). While it differentiates implicitly via 'scope: member' and 'private', it does not explicitly name the alternative tool (e.g., list_project_use_cases) for project-scoped queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_prebuilds (Grade A)
Read-only, Idempotent

Lists pre-configured reports (prebuilds) available for a connector.

What is a prebuild? A prebuild is a standardized report maintained by Quanti for a given connector (e.g., Campaign Stats for Google Ads). It defines the BigQuery table structure (columns, types, metrics) and the associated API query.

When to use this tool:

  • When the user asks "what reports are available for [connector]?"

  • When the user doesn't know which data or metrics exist for a connector

  • BEFORE get_schema_context, to explore available reports for a connector

  • To understand the data structure before writing SQL

Difference with get_schema_context:

  • list_prebuilds → discover which reports/tables EXIST for a connector (catalog)

  • get_schema_context → get the actual BigQuery schema for the client project (effective data)

Response format: Returns JSON listing, for each prebuild: its ID, name, description, BigQuery table name, and the list of fields (name, type, description, is_metric). Fields marked is_metric=true are aggregatable metrics (impressions, clicks, cost...); the others are dimensions (date, campaign_name...).

SKU examples: googleads, meta, tiktok, tiktok-organic, amazon-ads, amazon-dsp, piano, shopify-v2, microsoftads, prestashop-api, mailchimp, kwanko

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sku | Yes | The connector name (e.g., 'googleads', 'meta', 'tiktok', 'piano', 'amazon-ads') | |
| project_id | No | The project folderId (optional). If provided, verifies project access. | |
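The is_metric flag cleanly partitions a prebuild's fields into aggregatable metrics and grouping dimensions, which is exactly the split needed when drafting SQL. A sketch with invented field data mirroring the documented shape:

```python
def split_fields(fields: list) -> tuple:
    """Separate prebuild fields into metrics (aggregatable) and dimensions."""
    metrics = [f["name"] for f in fields if f.get("is_metric")]
    dimensions = [f["name"] for f in fields if not f.get("is_metric")]
    return metrics, dimensions

# Invented sample mirroring the documented field shape.
campaign_stats_fields = [
    {"name": "date", "type": "DATE", "is_metric": False},
    {"name": "campaign_name", "type": "STRING", "is_metric": False},
    {"name": "impressions", "type": "INTEGER", "is_metric": True},
    {"name": "cost", "type": "FLOAT", "is_metric": True},
]
metrics, dimensions = split_fields(campaign_stats_fields)
```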
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint and idempotentHint; the description adds significant behavioral context including the complete JSON response structure (fields, types, is_metric flag meaning) and clarifies that the optional project_id parameter 'verifies project access' rather than filtering results. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses markdown headers (**What is a prebuild?**, **When to use**) to structure dense information efficiently. The content is front-loaded with the core purpose, followed by conceptual definitions, usage scenarios, sibling differentiation, response format details, and examples—every section serving a distinct purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description comprehensively compensates by detailing the exact response format including field metadata (name, type, description, is_metric) and explaining the semantic difference between metrics and dimensions. This fully addresses the tool's complexity and return value structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage, the description adds valuable semantic context by enumerating concrete SKU examples (googleads, meta, tiktok-organic, amazon-dsp, etc.) and clarifying the access verification behavior of the optional project_id parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence 'Lists pre-configured reports (prebuilds) available for a connector' provides a specific verb and resource. It explicitly distinguishes from the sibling tool get_schema_context via a dedicated comparison section explaining that list_prebuilds discovers which reports exist while get_schema_context retrieves actual BigQuery schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains an explicit 'When to use this tool:' section with four specific scenarios, including critical sequencing guidance 'BEFORE get_schema_context'. It clearly contrasts alternatives by mapping list_prebuilds to catalog discovery and get_schema_context to effective data retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_projects (Grade A)
Read-only, Idempotent

Lists all projects accessible by the user. Call this function first to discover available projects.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety. Description adds valuable scope context ('accessible by the user') clarifying authorization-based filtering, and behavioral pattern ('discover') indicating catalog enumeration. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence defines operation, second provides workflow guidance. Information-dense and appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a discovery tool with no output schema. Explains what it returns (projects list) and workflow context. Minor gap: doesn't describe project entity structure/fields returned, but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, meeting the baseline score of 4. No parameter documentation required or present.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Lists' + resource 'projects' + scope 'accessible by the user'. The phrase 'discover available projects' clearly positions this as the enumeration/discovery entry point, distinguishing it from sibling retrieval tools like `get_project_context` which imply prior selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: 'Call this function first to discover available projects.' Establishes clear workflow order (discovery before project-specific operations) and implicitly contrasts with tools requiring a project identifier.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_project_use_cases (Grade A)
Read-only, Idempotent

Lists use cases shared with the project team (scope: project).

When to use this tool:

  • When the user asks for "team analyses", "project use cases"

  • To see what colleagues have shared

  • Before sharing a new use case to avoid duplicates

Visibility: These use cases are visible to all project members.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | The project folderId (e.g., p57d4af1b). Use list_projects to get IDs. | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint and idempotentHint; description adds valuable visibility context ('visible to all project members') explaining the collaborative/sharing model without repeating annotation hints. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent markdown structure with bold headers ('When to use', 'Visibility'), bullet points for scannable triggers, and zero wasted words. Main action is front-loaded in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a read-only list operation: covers what entities are returned, visibility scope, usage triggers, and prerequisites (need project_id from list_projects). Appropriate given annotations cover safety profile and no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single project_id parameter, which is fully documented in the schema. Description provides baseline adequate coverage by referencing 'project team' contextually but does not add parameter syntax or format details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Lists' with clear resource 'use cases shared with the project team' and distinguishes scope from siblings like list_my_use_cases via 'project team' and 'colleagues' language, reinforced by the '(scope: project)' notation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'When to use this tool:' section with three concrete scenarios including the specific trigger phrases 'team analyses' and 'project use cases', plus workflow guidance to check before sharing to avoid duplicates (implicitly referencing create_use_case).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_scheduled_queries (Grade: A)

Annotations: Read-only, Idempotent

Lists scheduled queries configured in the project's BigQuery.

What is a scheduled query? A scheduled query is a SQL query automatically executed on a defined schedule in BigQuery. It is used to aggregate data, populate reporting tables, or perform recurring transformations.

When to use this tool:

  • When the user asks "what are my scheduled queries?", "my BigQuery pipelines"

  • To diagnose a data issue: verify that a scheduled query is running correctly

  • To audit pipelines configured for a project

  • To check execution frequency or status of a scheduled query

Available filters:

  • dataset: filter by destination dataset (e.g., 'prod_reports')

  • status: filter by status 'active' (enabled) or 'disabled'

Response format: Returns JSON listing, for each scheduled query: its name, SQL query, execution schedule, destination dataset, status (active/disabled), and last/next execution dates.
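Since the tool publishes no output schema, the response shape can only be inferred from the prose above. A hypothetical example of one returned entry, with field names and values assumed for illustration:

```python
# Hypothetical shape of one list_scheduled_queries entry; field names
# are inferred from the prose description, not from an output schema.
scheduled_query = {
    "name": "daily_revenue_rollup",        # assumed example name
    "sql": "SELECT ... FROM raw.events",   # the scheduled SQL
    "schedule": "every 24 hours",          # execution schedule
    "destination_dataset": "prod_reports",
    "status": "active",                    # 'active' or 'disabled'
    "last_run": "2025-01-14T06:00:00Z",
    "next_run": "2025-01-15T06:00:00Z",
}

# The description constrains status to two values.
assert scheduled_query["status"] in {"active", "disabled"}
```

An agent handling the response would branch on `status` and compare `last_run`/`next_run` against the schedule when diagnosing a pipeline issue.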

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| status | No | Filter by status: 'active' (enabled) or 'disabled'. Optional. | |
| dataset | No | Filter by destination dataset (e.g., 'prod_reports'). Optional. | |
| project_id | Yes | The project folderId (e.g., p57d4af1b). Use list_projects to get IDs. | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint/idempotentHint; description adds critical behavioral context not in structured data: defines 'scheduled query' concept, and crucially documents the JSON response format/fields since no output schema exists. No contradictory information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with markdown headers and clear sectioning. Purpose is front-loaded in the first sentence. Length is justified by the need to explain domain concepts (what is a scheduled query) and compensate for missing output schema, though the 'What is' section adds slight verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully compensates for missing output schema by documenting the JSON response structure and fields. Parameters fully covered by schema (100% coverage). Annotations cover behavioral safety. Combined, the definition provides complete information necessary for invocation and response handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage. Description mentions available filters but largely mirrors schema text (e.g., examples 'prod_reports', 'active'/'disabled'). With complete schema coverage, baseline 3 is appropriate as description provides organizational value but limited semantic extension.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Lists' + resource 'scheduled queries' + scope 'in the project's BigQuery'. Clearly distinguishes from sibling execute_query (which runs queries) and other list tools by specifying 'scheduled queries' as the target resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit '**When to use this tool:**' section with detailed bullet points covering diagnostic, audit, and status-check scenarios. Effectively maps user intents ('what are my BigQuery pipelines?') to the tool. Lacks explicit naming of alternative tools (e.g., 'use execute_query to run ad-hoc queries'), preventing a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_mmm (Grade: A)

Re-runs a Marketing Mix Modeling study previously configured with setup_mmm.

Important: Do NOT call this right after setup_mmm. The first run is automatically triggered by setup_mmm. Use run_mmm only to re-launch an existing study later (e.g., after data refresh or parameter changes).

Prerequisite: Must have called setup_mmm first to obtain an account_id.

Duration: The Meridian fit (MCMC) takes approximately 10-30 minutes depending on data volume. The user will receive an email when results are ready.

Results: Results are written to the project's data warehouse (mmm_channel_summary and mmm_weekly_contributions tables). They can then be queried via execute_query.
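The results paragraph above implies a two-step retrieval flow: wait for the email, then read the warehouse tables through execute_query. A minimal sketch of the follow-up query an agent might build (the column names are hypothetical; only the table names come from the description):

```python
# Sketch: after run_mmm completes, results are read back via
# execute_query. Table names come from the tool description; the
# columns selected here are invented for illustration.
summary_sql = """
SELECT channel, contribution, roi
FROM mmm_channel_summary
ORDER BY contribution DESC
""".strip()

# An agent would pass this string as the SQL argument of execute_query.
assert "mmm_channel_summary" in summary_sql
```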

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| account_id | Yes | The account_id returned by setup_mmm | |
| project_id | Yes | The project folderId | |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With minimal annotations provided, the description carries full behavioral burden: it discloses the long-running nature ('10-30 minutes'), async completion signal ('email when results are ready'), and precise side effects (writes to 'mmm_channel_summary' and 'mmm_weekly_contributions' tables).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with bold headers front-loading critical information (Prerequisite, Duration, Results). Every sentence provides essential operational context without redundancy; no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description fully compensates by specifying result locations (specific table names), notification mechanism (email), and follow-up actions (query via execute_query), making it complete for this long-running operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing a baseline of 3. While the description reinforces the origin of 'account_id' from 'setup_mmm' in the Prerequisite section, this information is already present in the schema property descriptions, adding minimal new semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Re-runs') and resource ('Marketing Mix Modeling study'), and immediately distinguishes this tool from its sibling 'setup_mmm' by specifying it operates on 'previously configured' studies.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite ('Must have called setup_mmm first'), mentions the specific output destination (data warehouse tables), and references the sibling tool 'execute_query' for retrieving results, providing clear when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

setup_mmm (Grade: A)

Configures a Marketing Mix Modeling (MMM) study for a project.

What is MMM? Marketing Mix Modeling measures the real contribution of each marketing channel (Google Ads, Meta, etc.) on a KPI (leads, revenue, conversions), accounting for external factors (seasonality, holidays, promotions).

Recommended workflow:

  1. Use get_schema_context to discover the project's tables/columns

  2. Generate input SQL queries (KPI, channels, exogenous variables)

  3. Validate each query before calling setup_mmm: use execute_query to run a COUNT(*) wrapper around each input query (e.g., SELECT COUNT(*) FROM (<input query>) AS t). If any query returns 0 rows, do NOT include it in setup_mmm; warn the user that the data source is empty and ask whether to proceed without it or fix the query.

  4. Call setup_mmm with the validated SQL queries — the study is automatically launched after setup

  5. Do NOT call run_mmm after setup_mmm: the first run is triggered automatically

Important: run_mmm is only needed to RE-RUN an existing study later, not after initial setup.
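The validation step of the workflow can be sketched as a small helper that wraps each input query before sending it to execute_query. The function name and subquery alias are illustrative, not part of the server's API:

```python
def count_wrapper(input_sql: str) -> str:
    """Wrap an MMM input query in COUNT(*) so execute_query can cheaply
    check whether it returns any rows before it is passed to setup_mmm.
    The subquery alias is required by most SQL dialects."""
    inner = input_sql.strip().rstrip(";")  # a trailing ';' would break the subquery
    return f"SELECT COUNT(*) AS row_count FROM ({inner}) AS t"

wrapped = count_wrapper(
    "SELECT date AS time, SUM(cost) AS spend FROM ads GROUP BY 1;"
)
```

If the wrapped query reports 0 rows, the agent should surface the empty data source to the user rather than silently include it in the study.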

Input queries format: Each query must return a "time" column (DATE) and the requested metrics.

  • role="kpi": a "kpi" column (the target KPI)

  • role="channel": "spend" and "impressions" columns + channel_name

  • role="exogenous": columns named after the exogenous variables, as listed in the columns[] field

Granularity: "weekly" is recommended (MMM standard). SQL should aggregate by week.

Important: Adapt the SQL dialect to the project's data warehouse type (BigQuery, Snowflake, Redshift).
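Putting the format rules above together, a minimal input_queries payload might look like the following sketch. Table and column sources (crm.leads, ads.google, ref.calendar) are invented for illustration; only the role/column requirements come from the description:

```python
# Hypothetical input_queries payload for setup_mmm, following the
# documented per-role column requirements. Source tables are invented.
input_queries = [
    {   # role="kpi": must return "time" (DATE) and "kpi" columns
        "role": "kpi",
        "name": "weekly_leads",
        "sql": ("SELECT DATE_TRUNC(event_date, WEEK) AS time, "
                "COUNT(*) AS kpi FROM crm.leads GROUP BY 1"),
    },
    {   # role="channel": "time", "spend", "impressions" + channel_name
        "role": "channel",
        "channel_name": "google_ads",
        "sql": ("SELECT DATE_TRUNC(day, WEEK) AS time, SUM(cost) AS spend, "
                "SUM(impressions) AS impressions FROM ads.google GROUP BY 1"),
    },
    {   # role="exogenous": columns named after the variables in columns[]
        "role": "exogenous",
        "columns": ["holiday_flag"],
        "sql": ("SELECT date AS time, MAX(is_holiday) AS holiday_flag "
                "FROM ref.calendar GROUP BY 1"),
    },
]

# Every element must declare one of the three documented roles.
assert all(q["role"] in {"kpi", "channel", "exogenous"} for q in input_queries)
```

Note the weekly aggregation in each query, matching the recommended "weekly" granularity.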

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| end_date | Yes | Analysis period end date (YYYY-MM-DD format) | |
| kpi_type | No | KPI type: 'leads', 'revenue', 'conversions'. Determines the Meridian mode (revenue vs non_revenue). | |
| question | No | The analyst's question (for traceability). E.g., 'What is the ROI of Google Ads vs Meta?' | |
| project_id | Yes | The project folderId (e.g., p57d4af1b) | |
| start_date | Yes | Analysis period start date (YYYY-MM-DD format) | |
| study_name | No | Custom name for this MMM study (used as connector account name). E.g., 'Q1 2025 Channel ROI'. If not provided, defaults to the question or 'MMM Study'. | |
| destination | No | Data warehouse type: 'bigquery', 'snowflake', 'redshift'. | bigquery |
| granularity | No | Time granularity: 'weekly' (recommended) or 'daily'. | weekly |
| input_queries | Yes | List of input SQL queries. Each element: {role: 'kpi'\|'channel'\|'exogenous', channel_name?: string, columns?: string[], sql: string, name?: string} | |
| output_tables | No | Output tables (optional). Format: {id: string, name: string, source: 'channels'\|'weekly'} | channel_summary + weekly_contributions |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With minimal annotations (title only), description carries full burden effectively. Explains MMM concept, SQL dialect requirements (BigQuery/Snowflake/Redshift), column schema constraints (time column requirements), and granularity standards. Could improve by stating whether configuration is persistent or idempotent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear headers (What is MMM?, Recommended workflow, Input queries format). Content is front-loaded with purpose statement. Slightly verbose but every section earns its place by providing necessary domain context for a complex statistical modeling tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 9-parameter configuration tool with no output schema. Covers domain concept (MMM), workflow integration, input validation rules, and data warehouse compatibility. Missing only explicit success/failure behavior description, but validation step implies user review before execution.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds substantial value beyond schema by detailing input_queries structure: specific required columns per role ('kpi', 'channel', 'exogenous'), time column format requirements, and SQL dialect adaptation needs. This contextualizes the raw schema significantly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb ('Configures') + resource ('Marketing Mix Modeling study'). The 'Recommended workflow' section explicitly distinguishes this from siblings by stating get_schema_context must be used first and run_mmm comes after validation, creating clear sequencing boundaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides an explicit 5-step workflow including prerequisites ('Use get_schema_context to discover') and an explicit anti-pattern ('Do NOT call run_mmm after setup_mmm'). The validation step (step 3) implies when-not-to-proceed, offering clear decision points.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_use_case (Grade: A)

Updates an existing use case that you created.

When to use this tool:

  • To improve the description or SQL of an existing use case

  • To fix an error in a use case

  • To change a use case's category

Permissions: You can only modify use cases you created.

Tip: Use list_my_use_cases or list_project_use_cases first to get the use case ID.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| name | No | New use case name (optional) | |
| prompt | No | New structured prompt (optional) | |
| category | No | New category (optional) | |
| description | No | New business description (optional) | |
| use_case_id | Yes | The UUID of the use case to modify. Get it from list_my_use_cases or list_project_use_cases. | |
| sql_template | No | New SQL template (optional) | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With minimal annotations (only title), description carries the burden. It adds critical permission context ('You can only modify use cases you created') and implies mutation. However, it lacks disclosure on failure modes (invalid UUID), idempotency, or return values, which are important behavioral gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with bold headers (**When to use**, **Permissions**, **Tip**) creating scannable sections. Main purpose is front-loaded in the first sentence. Zero redundancy; every line provides actionable guidance not found in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage but no output schema and minimal annotations, the description is nearly complete. It covers prerequisites, permission constraints, and typical update scenarios. Minor gap: no mention of error handling for invalid IDs or the response format, though this is partially mitigated by the prerequisite tip.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds value by functionally grouping parameters in the 'When to use' bullets (e.g., linking 'description or SQL' to specific use cases) and explicitly referencing use_case_id acquisition via sibling tools in the tip, providing workflow context beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Updates' + resource 'use case' + scope 'that you created', clearly distinguishing it from sibling create_use_case (new vs. existing) and delete_use_case. The 'When to use' bullet points further clarify the specific editable fields (description, SQL, category).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'When to use this tool' section lists three specific scenarios (improve description/SQL, fix errors, change category). The 'Tip' explicitly names prerequisite sibling tools (list_my_use_cases, list_project_use_cases) to obtain the required ID, providing clear workflow guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
