
Server Details

Pulse: 16 data sources (crypto/DeFi/security/news/finance) via one MCP. Bundles save ~92%.

Status: Healthy
Transport: Streamable HTTP
Repository: atmflow55/datafood-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 4/5

Each tool targets a distinct operation: single query, bulk query, portfolio Q&A, and watch session. Some overlap between datafood_query and datafood_bundle, but descriptions clarify the use cases.

Naming Consistency: 3/5

All tools use the 'datafood_' prefix and snake_case, but verb placement varies: 'bundle' is a bare verb, while 'portfolio_ask' pairs a noun with a verb. The pattern is somewhat mixed but still readable.

Tool Count: 4/5

With 4 tools, the server covers core data operations without being too sparse or overloaded. The count fits the apparent scope of a data query and portfolio service.

Completeness: 3/5

The tools cover querying and monitoring but omit operations like portfolio sync or resource management. Gaps exist but may be intentional for a read-centric service.

Available Tools

4 tools
datafood_bundle: A

Bundle 1-20 cross-niche queries in one call. Saves 50-92% vs. per-API. Free preview accepts up to 5; paid via Stripe session_id or x402 X-Payment header.

Parameters (JSON Schema)
- free (optional): If true, return free 1-row preview (capped at 5 queries)
- queries (required)
- session_id (optional): Stripe checkout session_id for paid full results
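Under the description above, a free-preview call might look like the following sketch. The JSON-RPC envelope is the standard MCP tools/call shape; the structure of each queries item and the type values ("btc_price", "defi_tvl") are hypothetical, since the schema exposes a type enum but does not document the item fields.

```python
# Sketch of an MCP tools/call request for datafood_bundle in free-preview
# mode. The queries item shape and type values are assumptions, not
# documented by the server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "datafood_bundle",
        "arguments": {
            "free": True,  # 1-row preview, capped at 5 queries
            "queries": [
                {"type": "btc_price"},  # hypothetical item shape
                {"type": "defi_tvl"},
            ],
            # For paid full results, include instead:
            # "session_id": "cs_...",  # Stripe checkout session (placeholder)
        },
    },
}

# The free preview accepts at most 5 queries per the description.
assert len(request["params"]["arguments"]["queries"]) <= 5
```

For paid calls, the description indicates either a Stripe session_id argument or an x402 X-Payment header replaces the free flag.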
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes cost savings, free preview limit, and payment methods (Stripe session_id or X-Payment header). No annotations provided, so description carries weight. Lacks details on response structure or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with key info, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should explain return behavior. It covers payment and limits but omits response format and failure modes. Adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers free and session_id with descriptions; the queries array's item fields lack descriptions, though the type enum is explicit. The description adds context on the free preview limit and payment, but doesn't detail parameter usage beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Bundle 1-20 cross-niche queries in one call', with a specific verb (bundle) and resource (queries). Differentiates from siblings by highlighting bundling and cost savings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Indicates when to use (bundling multiple queries for cost savings) and mentions free preview (up to 5 queries) and paid options, but doesn't explicitly state when not to use or compare to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

datafood_portfolio_ask: A

Natural-language Q&A on a Plaid-linked portfolio (read-only). Requires user_id of a previously-synced portfolio.

Parameters (JSON Schema)
- user_id (required)
- question (required): e.g. 'Am I overexposed to tech?'
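As a minimal sketch, the arguments object for this tool could look like the following. The user_id value is a placeholder (per the description, it must identify a previously-synced, Plaid-linked portfolio), and the question reuses the example from the schema.

```python
# Hypothetical arguments for a datafood_portfolio_ask call.
arguments = {
    "user_id": "user-123",  # placeholder; must reference a previously-synced portfolio
    "question": "Am I overexposed to tech?",  # example taken from the schema
}
```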
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states the tool is 'read-only', a key behavioral trait, which is not inferable from the schema or missing annotations. It also implies a natural language interface. However, it does not disclose other traits like authentication needs, rate limits, or error responses, leaving some gaps. Given no annotations, the description adds moderate value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no filler words. It front-loads the purpose and then adds a prerequisite. Every sentence is necessary and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and is a Q&A interaction. The description explains the input but does not describe the output format or behavior (e.g., how responses are structured, possible errors). For a natural-language tool, the response format is important for correctly interpreting results. The description feels incomplete in this aspect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 50% coverage (only 'question' has a description). The description adds meaning by specifying that 'user_id' must be from a 'previously-synced portfolio', which is not present in the schema. This compensates for the missing schema description and helps an agent select appropriate values. The 'question' parameter is aided by the example in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Natural-language Q&A on a Plaid-linked portfolio (read-only)', which specifies the verb (Q&A) and the resource (portfolio). It distinguishes itself from sibling tools by the 'read-only' and natural language aspect, though not explicitly naming alternatives. The purpose is clear and specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a prerequisite: 'Requires user_id of a previously-synced portfolio', which tells the agent when this tool can be used. However, it lacks guidance on when not to use it or what alternatives exist (e.g., datafood_query for non-natural-language queries). The usage context is partially defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

datafood_query: A

Fetch a single data type from DataFood. Free 1-row preview, no auth required. Use datafood_bundle for 3+ queries (cheaper).

Parameters (JSON Schema)
- q (optional): Query string. See /api/v1/catalog for per-type examples.
- type (required): One of 42 supported data types (DATAFOOD_CATALOG)
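A hedged sketch of the arguments object for a single query: "btc_price" is an invented type value, since the real 42 entries live in DATAFOOD_CATALOG and are listed at /api/v1/catalog.

```python
# Hypothetical arguments for a datafood_query call. The type value is a
# placeholder; valid values come from the DATAFOOD_CATALOG enum.
arguments = {
    "type": "btc_price",  # required; placeholder catalog entry
    "q": "BTC",           # optional query string; see /api/v1/catalog for examples
}
```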
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It discloses 'Free 1-row preview' and 'no auth required', which are useful behavioral traits. Could mention rate limits or pagination, but it's fairly transparent for a simple fetch tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. The key points (purpose, preview, auth, alternative) are front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 params, no nested objects, no output schema), the description covers purpose, usage guidance, and key behaviors. It misses output format but that's partly mitigated by the data type enumeration. Overall complete enough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already well described. The description adds minimal param-level context beyond the schema (just hints at query capability). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch a single data type from DataFood' with a specific verb and resource. It distinguishes from sibling tools by noting 'Use datafood_bundle for 3+ queries'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance: when to use this tool (single query) and when not to (for 3+ queries, use datafood_bundle which is cheaper). Also mentions free preview and no auth.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

datafood_watch_session: A

Open a watchable agent session — returns session_id and a public /watch/{id} URL for live observation. Free.

Parameters (JSON Schema)
- intent (optional): One-line intent string
- agent_id (optional): Human-readable agent identifier
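Since both parameters are optional, even an empty arguments object should be a valid call; the values below are purely illustrative placeholders.

```python
# Hypothetical arguments for datafood_watch_session. Both fields are
# optional per the schema; an empty dict {} should also be accepted.
arguments = {
    "intent": "compare L2 gas costs",  # optional one-line intent (invented)
    "agent_id": "research-agent-1",    # optional human-readable identifier (invented)
}
```

Per the description, the response returns a session_id and a public /watch/{id} URL for live observation.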
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the return values (session_id and a public URL) and that the tool is free, but does not disclose any potential side effects, authorization requirements, or session lifecycle details beyond opening.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence plus an extra note about it being free. It is front-loaded and contains no unnecessary words, though it could benefit from slightly more structure (e.g., listing return values explicitly).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two optional parameters, no output schema, and no annotations, the description adequately covers the purpose and return values. However, it does not address session duration, termination, or any rate limits, which are minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and both parameters have descriptions in the schema (intent and agent_id are optional). The tool description adds no additional meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool opens 'a watchable agent session' and specifies the exact return values (session_id and a public URL). This distinguishes it from sibling tools like datafood_query, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'for live observation' implies the context of use, but there is no explicit guidance on when not to use it or comparison with alternatives. However, the description is clear enough for an agent to infer the appropriate scenario.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
