
Server Details

Find novel, statistically validated patterns in tabular data — hypothesis-free.

Status: Healthy
Transport: Streamable HTTP
Repository: leap-laboratories/discovery-engine
GitHub Stars: 6
Server Listing
Disco



Available Tools

14 tools
discovery_account
Read-only

Check your Disco account status.

Returns current plan, available credits (subscription + purchased), and
payment method status. Use this to verify you have sufficient credits
before running a private analysis.

Args:
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| api_key | No | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
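The `api_key` fallback documented in the Args section can be sketched as follows. The `resolve_api_key` helper is hypothetical and not part of the server; only the `disco_` prefix and the `DISCOVERY_API_KEY` fallback come from the listing itself.

```python
import os

def resolve_api_key(api_key=None):
    """Resolve a Disco API key the way the tool descriptions specify:
    an explicit argument wins; otherwise fall back to the
    DISCOVERY_API_KEY environment variable."""
    key = api_key or os.environ.get("DISCOVERY_API_KEY", "")
    if not key.startswith("disco_"):
        raise ValueError("expected a Disco API key with the 'disco_' prefix")
    return key
```

An agent following this pattern would resolve the key once up front and omit `api_key` from individual tool calls whenever the environment variable is set.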
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the readOnlyHint annotation, the description adds valuable context: it discloses the return payload structure (plan, credits breakdown, payment status) and critical authentication behavior (fallback to DISCOVERY_API_KEY environment variable). Could mention caching or rate limits for a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structure is logical: purpose → return values → usage guidance → parameter details. The 'Args:' prefix is slightly technical/docstring-style but efficient. Every sentence adds information not present in the structured metadata, with no filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a simple read-only status tool with one optional parameter and an existing output schema, the description is complete. It previews the output schema content (plan, credits), covers authentication patterns, and provides workflow guidance without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting the 'api_key' parameter in the Args section: it provides the format prefix '(disco_...)' and explains the optional nature via the environment variable fallback, which is essential for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Check' and resource 'Disco account status', then clarifies the scope (plan, credits, payment method). It effectively distinguishes from sibling 'discovery_status' by clarifying this returns account financial details, not service health.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the workflow context: 'Use this to verify you have sufficient credits before running a private analysis.' This clearly positions the tool as a prerequisite check for 'discovery_analyze' and implies the alternative paths (purchase credits/subscribe) if insufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_add_payment_method
Idempotent

Attach a Stripe payment method to your Disco account.

The payment method must be tokenized via Stripe's API first — card details
never touch Disco's servers. Required before purchasing credits
or subscribing to a paid plan.

To tokenize a card, call Stripe's API directly:
POST https://api.stripe.com/v1/payment_methods
with the stripe_publishable_key from your account info.

Args:
    payment_method_id: Stripe payment method ID (pm_...) from Stripe's API.
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| api_key | No | | |
| payment_method_id | Yes | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
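The tokenize-then-attach flow described above can be sketched as two request payloads. The Stripe field names follow Stripe's form-encoded `payment_methods` API; the card number is Stripe's standard test card; the `attach_args` helper is illustrative, not part of the server.

```python
# Form-encoded payload for Stripe's tokenization endpoint
# (POST https://api.stripe.com/v1/payment_methods, authenticated
# with the stripe_publishable_key from the account info).
stripe_payload = {
    "type": "card",
    "card[number]": "4242424242424242",  # Stripe test card
    "card[exp_month]": "12",
    "card[exp_year]": "2030",
    "card[cvc]": "123",
}

def attach_args(payment_method_id, api_key=None):
    """Build arguments for discovery_add_payment_method.
    api_key may be omitted when DISCOVERY_API_KEY is set."""
    if not payment_method_id.startswith("pm_"):
        raise ValueError("expected a Stripe payment method ID (pm_...)")
    args = {"payment_method_id": payment_method_id}
    if api_key is not None:
        args["api_key"] = api_key
    return args
```

The `pm_...` ID returned by Stripe is the only card-related value that ever reaches Disco, which is what keeps raw card details off Disco's servers.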
Behavior 4/5

Beyond the idempotentHint/destructiveHint annotations, the description adds critical security context (tokenization model, server scope) and external dependency behavior (must call Stripe first). It does not describe specific error states or retry behavior, keeping it from a 5.

Conciseness 5/5

Well-structured with a clear information hierarchy: purpose, security note, prerequisites, external API instructions, and parameter details. No wasted words; every sentence conveys essential integration context.

Completeness 4/5

Considering the external API complexity and 0% schema coverage, the description effectively covers the integration workflow and parameter semantics. Since an output schema exists, omitting return value details is appropriate, though a brief mention of what constitutes success could elevate this to a 5.

Parameters 5/5

With 0% schema description coverage (titles only), the description fully compensates via the Args section. It adds format prefixes ('pm_...', 'disco_...'), source context ('from Stripe's API'), and fallback behavior ('Optional if DISCOVERY_API_KEY env var is set') that the schema lacks.

Purpose 5/5

The description opens with the specific action 'Attach a Stripe payment method to your Disco account' (verb + resource + scope). It clearly distinguishes this setup step from sibling tools like discovery_purchase_credits and discovery_subscribe by explaining it is a prerequisite for those actions.

Usage Guidelines 5/5

Excellent guidance: explicit prerequisites ('Required before purchasing credits or subscribing'), security constraints ('card details never touch Disco's servers'), and detailed external workflow instructions (how to tokenize via Stripe's API with the specific endpoint).

discovery_analyze
Destructive

Run Disco on tabular data to find novel, statistically validated patterns.

This is NOT another data analyst — it's a discovery pipeline that systematically
searches for feature interactions, subgroup effects, and conditional relationships
nobody thought to look for, then validates each on hold-out data with FDR-corrected
p-values and checks novelty against academic literature.

This is a long-running operation. Returns a run_id immediately.
Use discovery_status to poll and discovery_get_results to fetch completed results.

Use this when you need to go beyond answering questions about data and start
finding things nobody thought to ask. Do NOT use this for summary statistics,
visualization, or SQL queries.

Public runs are free but results are published. Private runs cost credits.
Call discovery_estimate first to check cost. Private report URLs require
sign-in — tell the user to sign in at the dashboard with the same email
address used to create the account (email code, no password needed).

Call discovery_upload first to upload your file, then pass the returned file_ref here.

Args:
    target_column: The column to analyze — what drives it, beyond what's obvious.
    file_ref: The file reference returned by discovery_upload.
    analysis_depth: Search depth (1=fast, higher=deeper). Default 1.
    visibility: "public" (free) or "private" (costs credits). Default "public".
    title: Optional title for the analysis.
    description: Optional description of the dataset.
    excluded_columns: Optional JSON array of column names to exclude from analysis.
    column_descriptions: Optional JSON object mapping column names to descriptions. Significantly improves pattern explanations — always provide if column names are non-obvious (e.g. {"col_7": "patient age", "feat_a": "blood pressure"}).
    author: Optional author name for the report.
    source_url: Optional source URL for the dataset.
    use_llms: Slower and more expensive, but you get smarter pre-processing, summary page, literature context and pattern novelty assessment. Only applies to private runs — public runs always use LLMs. Default false.
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| title | No | | |
| author | No | | |
| api_key | No | | |
| file_ref | No | | |
| use_llms | No | | |
| source_url | No | | |
| visibility | No | | public |
| description | No | | |
| target_column | Yes | | |
| analysis_depth | No | | |
| excluded_columns | No | | |
| column_descriptions | No | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
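A minimal sketch of building the `discovery_analyze` argument payload, following the Args section above. The `analyze_args` helper is hypothetical; since `excluded_columns` and `column_descriptions` are described as a "JSON array" and "JSON object" respectively, this sketch assumes they are passed JSON-encoded, which the listing does not state explicitly.

```python
import json

def analyze_args(target_column, file_ref, *, analysis_depth=1,
                 visibility="public", excluded_columns=None,
                 column_descriptions=None, use_llms=False):
    """Build the argument payload for discovery_analyze.
    file_ref must come from a prior discovery_upload call."""
    if visibility not in ("public", "private"):
        raise ValueError('visibility must be "public" or "private"')
    args = {
        "target_column": target_column,
        "file_ref": file_ref,
        "analysis_depth": analysis_depth,
        "visibility": visibility,
        "use_llms": use_llms,
    }
    if excluded_columns is not None:
        args["excluded_columns"] = json.dumps(excluded_columns)
    if column_descriptions is not None:
        args["column_descriptions"] = json.dumps(column_descriptions)
    return args
```

Per the workflow above, an agent would pass this payload only after `discovery_upload` (for `file_ref`) and, for private runs, after `discovery_estimate`.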
Behavior 4/5

Annotations indicate idempotent=false/destructive=true. The description adds critical behavioral context: async nature ('3-15 minutes', 'returns run_id immediately'), cost model ('Public runs are free...Private runs cost credits'), and side effects ('results are published'). One point deducted because it doesn't explicitly explain what makes the tool destructive or irreversible beyond the publishing note, and omits failure/cancellation behavior.

Conciseness 4/5

Length is substantial but justified by the complex async workflow and 11 parameters. Structure is well-organized: value proposition, differentiation, operational characteristics, usage rules, prerequisites, parameter reference. Front-loaded with key constraints. Could be tightened slightly, but every section earns its place.

Completeness 5/5

Given the tool's complexity (async, monetized, multi-step workflow with 11 parameters), the description is comprehensively complete. It covers authentication (env var fallback), cost estimation prerequisites, privacy implications (public vs private), polling patterns, and detailed parameter guidance. An output schema exists, so omitting return value details is appropriate.

Parameters 5/5

The schema has 0% description coverage (only titles). The Args section compensates by documenting all 11 parameters with rich semantics: target_column explains analytical intent ('what drives it'), analysis_depth gives heuristics ('1=fast, higher=deeper'), and column_descriptions provides JSON examples with realistic data. This is exemplary compensation for schema under-documentation.

Purpose 5/5

The description opens with a specific, actionable verb ('Run Disco') and clearly defines the resource ('tabular data') and scope ('find novel, statistically validated patterns'). It immediately distinguishes itself from siblings with explicit negation ('NOT another data analyst') and contrasts against standard SQL/visualization tools.

Usage Guidelines 5/5

Provides explicit when-to-use ('when you need to go beyond answering questions...finding things nobody thought to ask'), when-NOT-to-use ('Do NOT use this for summary statistics'), and prerequisites ('Call discovery_upload first', 'Use discovery_status to poll'). Names the specific sibling tools (discovery_estimate, discovery_status, discovery_get_results) for the full workflow.

discovery_estimate
Read-only

Estimate the credit cost of an analysis before running it.

Returns credit cost, whether you have sufficient credits, and whether a free
public alternative exists. Always call this before discovery_analyze for
private runs.

Args:
    file_size_mb: Size of the dataset in megabytes.
    num_columns: Number of columns in the dataset.
    analysis_depth: Search depth (1=fast, higher=deeper). Default 1.
    visibility: "public" (free, results published) or "private" (costs credits).
    use_llms: Slower and more expensive, but you get smarter pre-processing, summary page, literature context and pattern novelty assessment. Only applies to private runs — public runs always use LLMs. Default false.
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| api_key | No | | |
| use_llms | No | | |
| visibility | No | | public |
| num_columns | Yes | | |
| file_size_mb | Yes | | |
| analysis_depth | No | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
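The estimate-then-decide step described above can be sketched as a small decision function. The response field names (`sufficient_credits`, `public_alternative`) are illustrative guesses, since the listing only describes the estimate response informally as "credit cost, whether you have sufficient credits, and whether a free public alternative exists".

```python
def choose_run(estimate, allow_public):
    """Decide how to proceed from a discovery_estimate response.
    Field names here are assumed, not documented."""
    if estimate.get("sufficient_credits"):
        return "private"          # credits cover the private run
    if allow_public and estimate.get("public_alternative"):
        return "public"           # free, but results are published
    return "insufficient_credits" # buy credits or subscribe first
```

This mirrors the guideline above: always estimate before a private `discovery_analyze`, and fall back to a public run only when publishing the results is acceptable.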
Behavior 4/5

Annotations declare readOnlyHint=true; the description adds valuable behavioral context including the sufficiency check (whether you have sufficient credits), the public alternative flag, and the estimation scope (duration in seconds, credit cost). No contradictions with annotations.

Conciseness 4/5

Well-structured with the purpose front-loaded, followed by a return-value summary, a usage guideline, and Args documentation. Every sentence serves a distinct purpose. The Args header is slightly verbose but necessary given zero schema coverage.

Completeness 5/5

Comprehensive for a 6-parameter estimation tool. Covers the credit system mechanics, authentication alternatives (env var fallback), the visibility cost model, and defers the detailed return structure to the existing output schema. Complete despite 0% input schema coverage.

Parameters 5/5

Schema coverage is 0%, requiring the description to carry the full semantic load. The Args section documents all 6 parameters with types, formats (disco_...), optionality semantics, and value implications ('public' vs 'private' with cost consequences). Note: the description states the analysis_depth default is 1 while the schema declares 2, a minor schema-description inconsistency.

Purpose 5/5

Specific verb 'Estimate' with clear resources (cost, time, credit requirements) and scope 'before running an analysis'. Distinguishes itself from sibling discovery_analyze by positioning itself as the prerequisite estimation step.

Usage Guidelines 4/5

Explicitly states 'Always call this before discovery_analyze for private runs', providing clear temporal guidance and naming the sibling alternative. Lacks explicit 'when-not' guidance (e.g., when to skip for public-only workflows), preventing a 5.

discovery_get_results
Read-only

Fetch the full results of a completed Disco run.

Returns discovered patterns (with conditions, p-values, novelty scores,
citations), feature importance scores, a summary with key insights, column
statistics, and suggestions for what to explore next.

The response includes a `dashboard_urls` object with direct links to each
page of the interactive report — use these to direct the user to the most
relevant view:
- **summary**: AI-generated overview with key insights, novel findings, and plain-language explanation of the most important findings
- **patterns**: Full list of discovered patterns with conditions, effect sizes, p-values, novelty scores, citations, and interactive visualisations
- **features**: Feature importances, feature statistics and distribution plots, and correlation matrix
- **territory**: Interactive 3D map showing how patterns select different regions of the data

Only call this after discovery_status returns "completed".

Args:
    run_id: The run ID returned by discovery_analyze.
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| run_id | Yes | | |
| api_key | No | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
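The advice to "direct the user to the most relevant view" via `dashboard_urls` can be sketched as a small lookup. The four view keys (summary, patterns, features, territory) come from the description above; the intent-to-view mapping is purely illustrative.

```python
def pick_dashboard_url(results, intent):
    """Map a user intent to one of the documented dashboard_urls
    views: summary, patterns, features, territory."""
    view_for_intent = {
        "overview": "summary",    # plain-language key insights
        "findings": "patterns",   # full pattern list with p-values
        "statistics": "features", # importances, distributions, correlations
        "map": "territory",       # 3D map of pattern regions
    }
    view = view_for_intent.get(intent, "summary")  # default to overview
    return results["dashboard_urls"][view]
```

An agent would call this only on a response from `discovery_get_results`, i.e. after `discovery_status` has returned "completed".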
Behavior 4/5

Annotations establish readOnlyHint=true (safe read). The description adds substantial context: detailed response structure (patterns with p-values/novelty scores, feature importance, dashboard_urls object) and a specific breakdown of the 4 dashboard view types (summary/patterns/features/territory). Lacks only rate limits or error handling details to reach a 5.

Conciseness 4/5

Well-structured with a logical flow: purpose, output content, dashboard navigation details, prerequisites, parameters. Front-loaded with the core action. Bullet points for the dashboard_urls views are efficient. Slightly verbose, but every sentence provides unique information with no redundancy against the schema or annotations.

Completeness 5/5

For a results-fetching tool with an existing output schema, the description provides exemplary completeness: it explains what 'results' contains (beyond the schema), documents the prerequisite workflow (critical for an async operation), covers authentication, and explains the dashboard navigation system. No gaps for this tool's complexity.

Parameters 5/5

Schema coverage is 0% (no parameter descriptions in the JSON schema). The description fully compensates: it documents run_id as 'returned by discovery_analyze' (linking to the sibling tool), and api_key with the format hint 'disco_...' plus env var fallback behavior. Complete parameter semantics coverage.

Purpose 5/5

Opens with a specific verb + resource, 'Fetch the full results of a completed Disco run', and immediately distinguishes itself from siblings by specifying 'completed' status (vs discovery_status, which checks progress) and 'results' (vs discovery_analyze, which starts the run). Clear scope and resource identification.

Usage Guidelines 5/5

Explicit prerequisite stated: 'Only call this after discovery_status returns "completed".' Clear workflow sequencing (analyze, then status, then get_results). Also provides guidance on using the dashboard_urls output to 'direct the user to the most relevant view', indicating how to handle results.

discovery_list_plans
Read-only

List available Disco plans with pricing.

No authentication required. Returns all available subscription tiers with credit allowances and pricing. Use this to help users choose a plan.

Parameters (JSON Schema)

No parameters

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
Behavior 4/5

While annotations declare readOnlyHint=true, the description adds crucial behavioral context not present in the structured fields: it explicitly states 'No authentication required' and describes the return content ('subscription tiers with credit allowances and pricing').

Conciseness 5/5

Three tightly constructed sentences with zero redundancy. The hierarchy is optimal: purpose first, then auth requirements, then usage context. Every clause conveys necessary information.

Completeness 5/5

Given that the tool has zero parameters and an output schema exists (making a detailed return value description redundant), the description is complete. It covers the essential non-obvious behavioral constraint (no auth) and the business purpose.

Parameters 4/5

The input schema contains zero parameters, which per the scoring guidelines establishes a baseline score of 4. The description appropriately does not invent parameter documentation where none exists.

Purpose 5/5

The description opens with the specific action 'List' and identifies the exact resource ('Disco plans with pricing'), distinguishing it from sibling tools like 'discovery_subscribe' or 'discovery_purchase_credits', which perform actions rather than listing.

Usage Guidelines 4/5

Provides clear contextual guidance ('Use this to help users choose a plan') indicating when to invoke the tool in the user journey. However, it does not explicitly name alternatives (e.g., 'use discovery_subscribe to actually purchase') or exclusions.

discovery_login
Idempotent

Get a new API key for an existing Disco account.

Sends a 6-digit verification code to the email address. Call
discovery_login_verify with the code to receive a new API key.
Use this when you need an API key for an account that already exists
(e.g. the key was lost or this is a new agent session).

Returns 404 if no account exists with this email — use discovery_signup instead.

Args:
    email: Email address of the existing account.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| email | Yes | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
Behavior 4/5

Adds essential behavioral context beyond the annotations: discloses the side effect (sends a 6-digit verification code via email), explains the two-step authentication flow, and defines specific error semantics (404 = account missing). Annotations only cover the safety profile (idempotent/non-destructive).

Conciseness 5/5

Tightly structured with zero waste: front-loaded purpose, followed by mechanism, workflow instruction, usage criteria, error handling with a sibling reference, and parameter documentation. Every sentence serves a distinct guidance function.

Completeness 4/5

Thorough coverage for a multi-step authentication tool: purpose, prerequisites, side effects, error handling, and sibling alternatives. Minor gap: it lacks rate limiting or verification code expiration details that would complete the operational picture.

Parameters 4/5

Compensates effectively for 0% schema description coverage by documenting the email parameter as belonging to 'the existing account', adding a necessary semantic constraint. Could further improve with format validation hints or examples.

Purpose 5/5

States a specific verb ('Get') + resource ('API key') + scope ('existing Disco account'), clearly distinguishing this from signup. Explicitly contrasts with discovery_signup via the 404 error case description.

Usage Guidelines 5/5

Provides explicit when-to-use criteria ('account that already exists', 'key was lost', 'new agent session') and an explicit alternative ('use discovery_signup instead'). Also mandates the workflow sequence ('Call discovery_login_verify with the code').

discovery_login_verify
Idempotent

Complete login and receive a new API key.

Call this after discovery_login returns {"status": "verification_required"}.
The user receives a 6-digit code by email — pass it here along with the
same email address. Returns a new API key on success.

Args:
    email: Email address used in the discovery_login call.
    code: 6-digit verification code from the email.
Parameters (JSON Schema)

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| code | Yes | | |
| email | Yes | | |

Output Schema

| Name | Required | Description |
|------|----------|-------------|
| result | Yes | |
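The two-step login flow (discovery_login, then discovery_login_verify) can be sketched end to end. Here `call_tool` and `read_code` are caller-supplied stand-ins for an MCP client and for prompting the user; the `{"status": "verification_required"}` shape and the 6-digit code come from the descriptions above, while the `api_key` response field name is an assumption.

```python
def login_flow(call_tool, email, read_code):
    """Sketch of the documented two-step login: discovery_login
    emails a 6-digit code; discovery_login_verify exchanges it
    for a new API key."""
    first = call_tool("discovery_login", {"email": email})
    if first.get("status") != "verification_required":
        raise RuntimeError(f"unexpected login response: {first!r}")
    code = read_code()  # the 6-digit code the user received by email
    verified = call_tool("discovery_login_verify",
                         {"email": email, "code": code})
    return verified["api_key"]  # field name assumed
```

Note that both calls must use the same email address, as the Args section requires.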
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare idempotentHint=true and destructiveHint=false; the description adds valuable behavioral context not in annotations, including the email delivery mechanism, that the code is 6-digits, and that the tool returns a new API key on success. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear front-loading: purpose first, then prerequisite workflow, then mechanism, then return value, followed by parameter documentation. Slight redundancy between 'receive a new API key' (opening) and 'Returns a new API key' (closing), but appropriate length given the need to compensate for empty schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage for an authentication completion tool: explains the full workflow (login → verification → API key), documents both parameters despite schema limitations, and acknowledges the successful return value (API key) while the output schema handles structural details. No gaps for this tool's complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (parameters lack descriptions), but the description fully compensates via the Args section, explaining that 'email' must match the discovery_login call and 'code' is the 6-digit verification code from the email. This provides critical semantic meaning missing from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Complete' with resource 'login' and outcome 'receive a new API key'. It clearly distinguishes this as the second step of a two-step authentication flow, differentiating it from sibling discovery_login (the first step) and discovery_signup_verify (for signup flows).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite condition: 'Call this after discovery_login returns {"status": "verification_required"}'. It also describes the expected user state (receiving 6-digit code by email), providing clear temporal and contextual guidance for when to invoke this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_purchase_credits
Destructive

Purchase Disco credit packs using a stored payment method.

Credits cost $0.10 each, sold in packs of 100 ($10/pack). Credits are used
for private analyses (public analyses are free). Requires a payment method
on file — use discovery_add_payment_method first.

Args:
    packs: Number of 100-credit packs to purchase. Default 1.
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)
- packs (optional)
- api_key (optional)

Output Schema
- result (required)
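The pricing arithmetic in the description ($0.10 per credit, 100-credit packs at $10 each) can be checked locally before committing to a purchase. A small sketch — the helper is hypothetical; only the prices come from the tool description:

```python
def purchase_cost(packs: int = 1) -> tuple[int, float]:
    """Credits received and USD charged for a number of credit packs.

    Hypothetical helper using the pricing stated in the description:
    each pack holds 100 credits and costs $10 ($0.10/credit).
    """
    if packs < 1:
        raise ValueError("packs must be a positive integer")
    credits = packs * 100
    cost_usd = packs * 10.0  # $10 per pack
    return credits, cost_usd
```

Because the tool is marked destructive and non-idempotent, a retry after an ambiguous failure may charge twice; computing the expected total first makes such discrepancies easy to spot.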
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructive/non-idempotent operations; the description adds critical financial context including exact pricing ('$0.10 each, sold in packs of 100') and authentication requirements. Could explicitly warn that multiple calls incur multiple charges, but it covers the economic and permission prerequisites well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Logical flow from purpose → pricing → prerequisites → parameters. No sentences are wasted. The Args section is clearly demarcated. Minor improvement possible by integrating parameter descriptions into prose, but the structure is highly readable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a financial transaction tool: covers cost, prerequisites, and parameters, and an output schema exists, so return values need not be described in prose. Lacks only an explicit idempotency warning (multiple calls = multiple charges) to be fully comprehensive given the destructive annotation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the Args section fully documents both parameters: 'packs' explains the 100-credit pack unit ($10/pack) and default value, while 'api_key' specifies the format prefix ('disco_...') and environment variable fallback. Perfect compensation for schema deficiencies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb+resource ('Purchase Disco credit packs') and identifies the mechanism ('using a stored payment method'). Clearly distinguishes from sibling tools like discovery_add_payment_method (prerequisite) and discovery_analyze (consumer of credits).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite ('Requires a payment method on file — use discovery_add_payment_method first') and explains when credits are consumed vs free alternatives ('Credits are used for private analyses (public analyses are free)'). Provides clear decision criteria for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_signup
Idempotent

Create a Disco account and get an API key.

Provide an email address to start the signup flow. If email verification
is required, returns {"status": "verification_required"} — the user will
receive a 6-digit code by email, then call discovery_signup_verify to
complete signup and receive the API key. The free tier (10 credits/month,
unlimited public runs) is active immediately. No authentication required.

Returns 409 if the email is already registered.

Args:
    email: Email address for the new account.
    name: Display name (optional — defaults to email local part).
Parameters (JSON Schema)
- name (optional)
- email (required)

Output Schema
- result (required)
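The branching described above (immediate key vs. email verification) can be captured in a small dispatcher. A sketch under the assumption that the signup result is a dict mirroring the documented statuses; the helper name is illustrative:

```python
def next_signup_step(response: dict) -> str:
    """Decide the follow-up after a discovery_signup call.

    Hypothetical helper: a "verification_required" status means the
    emailed 6-digit code must be passed to discovery_signup_verify;
    a response carrying an api_key means signup completed directly.
    """
    if response.get("status") == "verification_required":
        return "call discovery_signup_verify with the emailed code"
    if "api_key" in response:
        return "store the api_key"
    raise RuntimeError(f"unexpected signup response: {response}")
```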
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish idempotent/destructive safety (idempotentHint=true, destructiveHint=false). The description adds substantial behavioral context beyond these: the multi-step verification flow with 6-digit codes, free tier limitations (10 credits/month), immediate activation status, and specific error code (409). It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear logical flow: purpose → flow mechanism → sibling reference → pricing → prerequisites → error handling → parameters. While longer than minimal, every sentence provides distinct, necessary information for a multi-step signup tool; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool initiates a multi-step authentication flow, the description comprehensively covers the complete user journey (verification_required response, email codes, sibling completion tool), pricing constraints, authentication requirements, and error cases. Since an output schema exists, the description appropriately focuses on flow logic rather than return value structures.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates via the Args section, documenting both 'email' (with context 'for the new account') and 'name' (including the default behavior 'defaults to email local part' which exceeds the schema's empty string default).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with 'Create a Disco account and get an API key'—a specific verb+resource combination. It clearly distinguishes from siblings by explicitly naming 'discovery_signup_verify' as the subsequent step to complete signup, differentiating it from other account tools like 'discovery_account' or 'discovery_analyze'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('start the signup flow') and clearly identifies the sibling alternative ('then call discovery_signup_verify to complete signup'). Also states prerequisites ('No authentication required') and failure conditions ('Returns 409 if the email is already registered'), giving clear guidance on when not to use or what to expect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_signup_verify
Idempotent

Complete Disco signup using an email verification code.

Call this after discovery_signup returns {"status": "verification_required"}.
The user receives a 6-digit code by email — pass it here along with the
same email address used in discovery_signup. Returns an API key on success.

Args:
    email: Email address used in the discovery_signup call.
    code: 6-digit verification code from the email.
Parameters (JSON Schema)
- code (required)
- email (required)

Output Schema
- result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the idempotent/destructive annotations, the description adds critical behavioral context: the 6-digit code delivery mechanism (email), the requirement to use the same email as the prior call, and the return value (API key). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly front-loaded: purpose in sentence one, invocation trigger in sentence two, parameter sourcing in sentence three, return value in sentence four. The Args section provides clean parameter documentation without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 2-parameter verification tool. Despite the existence of an output schema, it appropriately notes the API key return value and fully explains the two-step authentication workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage (properties lack descriptions), the Args section fully compensates by specifying that email must match the discovery_signup call and that code is the 6-digit value from the email.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Complete') and resource ('Disco signup') plus mechanism ('email verification code'), clearly distinguishing this as the second step of a signup flow versus the sibling discovery_signup tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite condition ('Call this after discovery_signup returns {"status": "verification_required"}') and references the sibling tool by name, providing clear sequencing guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_status
Read-only

Check the status of a Disco run.

Returns current status and progress details:
- status: "pending" | "processing" | "completed" | "failed"
- job_status: underlying job queue status
- queue_position: position in queue when pending (1 = next up)
- current_step: active pipeline step (preprocessing, training, interpreting, reporting)
- estimated_wait_seconds: estimated queue wait time in seconds (pending only)

Poll this after calling discovery_analyze.
Use discovery_get_results to fetch full results once status is "completed".

Args:
    run_id: The run ID returned by discovery_analyze.
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)
- run_id (required)
- api_key (optional)

Output Schema
- result (required)
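The documented poll-then-fetch workflow suggests a loop like the following. A sketch only: fetch_status stands in for a real discovery_status call, and the interval and timeout values are arbitrary assumptions, not server guidance:

```python
import time

def wait_for_completion(fetch_status, interval_s: float = 5.0,
                        timeout_s: float = 900.0) -> dict:
    """Poll until a run reaches a terminal state.

    fetch_status is a caller-supplied zero-argument function that
    returns a dict with a "status" key, one of "pending",
    "processing", "completed", or "failed" (per the description).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch_status()
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(interval_s)
    raise TimeoutError("run did not reach a terminal state in time")
```

Once this returns a "completed" status, discovery_get_results fetches the full report.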
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations confirm readOnlyHint=true, the description adds valuable behavioral context: polling pattern, queue position mechanics, pipeline step progression (preprocessing → training → interpreting → reporting), and time estimates. Misses only potential rate limits or error behaviors for invalid run_ids.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure front-loads purpose, followed by return value spec (bulleted list), usage workflow, then parameter details. No redundant text; every sentence provides actionable information. The 'Args:' section efficiently maps to the two parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite existing output schema, the description provides crucial context for interpreting status values and state transitions. Completes the full async job lifecycle documentation (submission → polling → results retrieval) and covers authentication fallback patterns, making it sufficient for complex workflow orchestration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (neither parameter has a description field). The description fully compensates by documenting run_id semantics ('returned by discovery_analyze') and api_key format ('disco_...') plus environment variable fallback behavior, adding all necessary meaning absent from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Check the status of a Disco run' (verb + resource) and immediately distinguishes this from sibling tools by explicitly mentioning discovery_analyze (the prerequisite) and discovery_get_results (the next step after completion).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance ('Poll this after calling discovery_analyze') and explicitly names the alternative tool for the next step: 'Use discovery_get_results to fetch full results once status is "completed"', creating a clear workflow chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_subscribe
Destructive · Idempotent

Subscribe to or change your Disco plan.

Available plans:
- "free_tier": Explorer — free, 10 credits/month
- "tier_1": Researcher — $49/month, 50 credits/month
- "tier_2": Team — $199/month, 200 credits/month

Paid plans require a payment method on file. Credits roll over on paid plans.

Args:
    plan: Plan tier ID ("free_tier", "tier_1", or "tier_2").
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)
- plan (required)
- api_key (optional)

Output Schema
- result (required)
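The plan table above can be encoded as a lookup so an agent validates the plan ID before invoking the tool. The mapping below copies the listed tiers; the helper itself is hypothetical:

```python
# Tiers as listed in the tool description.
PLANS = {
    "free_tier": {"name": "Explorer", "usd_per_month": 0, "credits_per_month": 10},
    "tier_1": {"name": "Researcher", "usd_per_month": 49, "credits_per_month": 50},
    "tier_2": {"name": "Team", "usd_per_month": 199, "credits_per_month": 200},
}

def validate_plan(plan: str) -> dict:
    """Return the tier details for a valid plan ID, per the description."""
    if plan not in PLANS:
        raise ValueError(f"plan must be one of {sorted(PLANS)}, got {plan!r}")
    return PLANS[plan]
```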
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructive/idempotent hints; description adds business logic context including credit rollover behavior, pricing tiers, and payment requirements. Does not explicitly clarify the idempotent nature (that setting the same plan twice is safe) or explain immediate vs. delayed plan changes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose statement followed by enumerated plan details and an Args section. All sentences provide necessary value; slightly verbose format (bullet lists) is appropriate for the complexity but 'Args:' syntax is slightly informal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive given the output schema exists (no return documentation needed). Covers pricing, credit mechanics, and authentication alternatives. Minor gap: does not mention billing proration or immediate vs. next-cycle effect timing for plan changes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting allowed enum values ('free_tier', 'tier_1', 'tier_2') for the plan parameter and providing format hints ('disco_...'), optional status, and environment variable fallback logic for api_key.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Subscribe' and 'change') with resource ('Disco plan'). Distinguishes from siblings like discovery_list_plans (viewing) and discovery_purchase_credits (one-time purchases) by covering ongoing subscription management, though it does not explicitly name these alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides prerequisite guidance ('Paid plans require a payment method on file'), implying discovery_add_payment_method may be needed first. However, it lacks explicit when-to-use/when-not-to-use rules or contrasts with discovery_signup (account creation) versus plan modification.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discovery_upload

Upload a dataset file and return a file reference for use with discovery_analyze.

Call this before discovery_analyze. Pass the returned result directly to
discovery_analyze as the file_ref argument.

Provide exactly one of: file_url, file_path, or file_content.

Args:
    file_url: A publicly accessible http/https URL. The server downloads it directly.
              Best option for remote datasets.
    file_path: Absolute path to a local file. Only works when running the MCP server
               locally (not the hosted version). Streams the file directly — no size limit.
    file_content: File contents, base64-encoded. For small files when a URL or path
                  isn't available. Limited by the model's context window.
    file_name: Filename with extension (e.g. "data.csv"), for format detection.
               Only used with file_content. Default: "data.csv".
    api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
Parameters (JSON Schema)
- api_key (optional)
- file_url (optional)
- file_name (optional, default: "data.csv")
- file_path (optional)
- file_content (optional)

Output Schema
- result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide safety hints (non-destructive, non-idempotent). Description adds valuable behavioral context: server-side download behavior, local-only streaming for file_path, context window limits for file_content, and env var fallback for auth. Minor gap: doesn't describe error handling if multiple inputs provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded purpose statement followed by usage sequence, constraint, and structured Args documentation. Slightly verbose due to necessity of parameter documentation, but every sentence serves a distinct purpose with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with mutual exclusion logic (exactly one file source) and existing output schema, description captures all critical logic: input constraints, auth fallback, size tradeoffs, and sibling tool integration. No need to detail return structure since output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description comprehensively compensates via Args section documenting all 5 parameters: file_url (public accessibility), file_path (local-only constraint, streaming), file_content (base64 encoding, size limits), file_name (format detection purpose), and api_key (env var fallback).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Upload' + resource 'dataset file' + outcome 'return a file reference'. Explicitly names sibling tool 'discovery_analyze' to clarify this is the prerequisite step, distinguishing it from the analysis workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit sequencing instruction ('Call this before discovery_analyze'), output handling directive ('Pass the returned result directly to discovery_analyze'), and input constraint ('Provide exactly one of: file_url, file_path, or file_content') provide complete when/how-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
