pulse-mcp
Server Details
Pulse: 16 data sources (crypto/DeFi/security/news/finance) via one MCP. Bundles save ~92%.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: atmflow55/datafood-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging – every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control – enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics – see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 4 of 4 tools scored.
Each tool targets a distinct operation: single query, bulk query, portfolio Q&A, and watch session. Some overlap between datafood_query and datafood_bundle, but descriptions clarify the use cases.
All tools use the 'datafood_' prefix and snake_case, but verb placement varies (e.g., 'bundle' as verb vs. 'portfolio_ask' noun-verb). The pattern is somewhat mixed but still readable.
With 4 tools, the server covers core data operations without being too sparse or overloaded. The count fits the apparent scope of a data query and portfolio service.
The tools cover querying and monitoring but omit operations such as portfolio sync and resource management. The gaps may be intentional for a read-centric service.
Available Tools
4 tools
datafood_bundle
Bundle 1-20 cross-niche queries in one call. Saves 50-92% vs. per-API. Free preview accepts up to 5; paid via Stripe session_id or x402 X-Payment header.
| Name | Required | Description | Default |
|---|---|---|---|
| free | No | If true, return free 1-row preview (capped at 5 queries) | |
| queries | Yes | | |
| session_id | No | Optional Stripe checkout session_id for paid full results | |
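Neither the server URL nor the shape of a queries item is published in this listing, so the following is only a rough sketch of what a datafood_bundle call might look like over Streamable HTTP: the endpoint, the type values, and the {type, q} item shape (borrowed from datafood_query's parameters) are all assumptions, not documented behavior.

```python
import requests

# Hypothetical endpoint; the listing does not publish the server URL.
MCP_URL = "https://datafood.example/mcp"

# MCP tools/call request (JSON-RPC 2.0). A real session would perform the
# `initialize` handshake first; it is omitted here for brevity.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "datafood_bundle",
        "arguments": {
            "free": True,  # free 1-row preview, capped at 5 queries
            "queries": [
                # Item shape {"type", "q"} is an assumption mirroring
                # datafood_query; real type values come from /api/v1/catalog.
                {"type": "crypto_price", "q": "BTC"},
                {"type": "defi_tvl", "q": "uniswap"},
            ],
            # For paid full results, drop "free" and supply either a Stripe
            # checkout session_id here, or an x402 X-Payment HTTP header:
            # "session_id": "cs_test_...",
        },
    },
}

# Streamable HTTP servers accept JSON-RPC over POST and may answer as plain
# JSON or as a text/event-stream, so advertise both in Accept.
resp = requests.post(
    MCP_URL,
    json=payload,
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.status_code, resp.text[:500])
```

A production client would normally let an MCP SDK handle the initialize handshake and session management rather than posting raw JSON-RPC as above.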
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description covers cost savings, the free preview limit, and the payment methods (Stripe session_id or X-Payment header). No annotations are provided, so the description carries the full weight; it lacks details on response structure and error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with key info, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should explain return behavior. It covers payment and limits but omits response format and failure modes. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers free and session_id with descriptions; the queries array items lack descriptions, though the type enum is explicit. The description adds context on the free preview limit and payment, but does not detail parameter usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Bundle 1-20 cross-niche queries in one call', with a specific verb (bundle) and resource (queries). Differentiates from siblings by highlighting bundling and cost savings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Indicates when to use (bundling multiple queries for cost savings) and mentions free preview (up to 5 queries) and paid options, but doesn't explicitly state when not to use or compare to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
datafood_portfolio_ask
Natural-language Q&A on a Plaid-linked portfolio (read-only). Requires user_id of a previously-synced portfolio.
| Name | Required | Description | Default |
|---|---|---|---|
| user_id | Yes | | |
| question | Yes | e.g. 'Am I overexposed to tech?' | |
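As a hedged illustration, the arguments for a datafood_portfolio_ask call might look like the sketch below; the user_id value is hypothetical and must reference a portfolio already synced through Plaid, and the question echoes the schema's own example.

```python
# Sketch of datafood_portfolio_ask arguments (sent via tools/call as above).
portfolio_ask_args = {
    "user_id": "user_123",                    # hypothetical; must belong to a
                                              # previously Plaid-synced portfolio
    "question": "Am I overexposed to tech?",  # example taken from the schema
}
```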
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states the tool is 'read-only', a key behavioral trait, which is not inferable from the schema or missing annotations. It also implies a natural language interface. However, it does not disclose other traits like authentication needs, rate limits, or error responses, leaving some gaps. Given no annotations, the description adds moderate value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no filler words. It front-loads the purpose and then adds a prerequisite. Every sentence is necessary and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema and is a Q&A interaction. The description explains the input but does not describe the output format or behavior (e.g., how responses are structured, possible errors). For a natural-language tool, the response format is important for correctly interpreting results. The description feels incomplete in this aspect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 50% coverage (only 'question' has a description). The description adds meaning by specifying that 'user_id' must be from a 'previously-synced portfolio', which is not present in the schema. This compensates for the missing schema description and helps an agent select appropriate values. The 'question' parameter is aided by the example in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Natural-language Q&A on a Plaid-linked portfolio (read-only)', which specifies the verb (Q&A) and the resource (portfolio). It distinguishes itself from sibling tools through its read-only, natural-language character, though it does not explicitly name alternatives. The purpose is clear and specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a prerequisite: 'Requires user_id of a previously-synced portfolio', which tells the agent when this tool can be used. However, it lacks guidance on when not to use it or what alternatives exist (e.g., datafood_query for non-natural-language queries). The usage context is partially defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
datafood_query
Fetch a single data type from DataFood. Free 1-row preview, no auth required. Use datafood_bundle for 3+ queries (cheaper).
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Query string. See /api/v1/catalog for per-type examples. | |
| type | Yes | One of 42 supported data types (DATAFOOD_CATALOG) | |
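A minimal sketch of datafood_query arguments follows; the type value is hypothetical, since the 42-entry DATAFOOD_CATALOG enum is not reproduced in this listing, and /api/v1/catalog documents per-type q examples.

```python
# Sketch of datafood_query arguments (no auth required; free 1-row preview).
query_args = {
    "type": "crypto_price",  # hypothetical; must be one of the 42 catalog types
    "q": "ETH",              # optional query string; see /api/v1/catalog
}
```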
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses 'Free 1-row preview' and 'no auth required', which are useful behavioral traits. It could mention rate limits or pagination, but it is fairly transparent for a simple fetch tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. The key points (purpose, preview, auth, alternative) are front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 params, no nested objects, no output schema), the description covers purpose, usage guidance, and key behaviors. It misses output format but that's partly mitigated by the data type enumeration. Overall complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameters are already well described. The description adds minimal parameter-level context beyond the schema (it only hints at query capability). A baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch a single data type from DataFood' with a specific verb and resource. It distinguishes from sibling tools by noting 'Use datafood_bundle for 3+ queries'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance: when to use this tool (single query) and when not to (for 3+ queries, use datafood_bundle which is cheaper). Also mentions free preview and no auth.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
datafood_watch_session
Open a watchable agent session — returns session_id and a public /watch/{id} URL for live observation. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| intent | No | Optional one-line intent string | |
| agent_id | No | Optional human-readable agent identifier | |
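Both parameters are optional, so a call can be as simple as the sketch below; the intent and agent_id values are illustrative, and the commented result shape is inferred from the description rather than from a published output schema.

```python
# Sketch of datafood_watch_session arguments; both fields are optional.
watch_session_args = {
    "intent": "compare L2 gas fees",  # illustrative one-line intent
    "agent_id": "research-bot-01",    # illustrative human-readable identifier
}

# Per the description, the result should include a session_id and a public
# /watch/{id} URL for live observation; the exact shape is undocumented.
```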
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the return values (session_id and a public URL) and that the tool is free, but does not disclose potential side effects, authorization requirements, or session lifecycle details beyond opening.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single clear sentence plus an extra note about it being free. It is front-loaded and contains no unnecessary words, though it could benefit from slightly more structure (e.g., listing return values explicitly).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two optional parameters, no output schema, and no annotations, the description adequately covers the purpose and return values. However, it does not address session duration, termination, or any rate limits, which are minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and both parameters have descriptions in the schema (intent and agent_id are optional). The tool description adds no additional meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool opens a watchable agent session and specifies the exact return values (session_id and a public URL). This distinguishes it from sibling tools like datafood_query, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'for live observation' implies the context of use, but there is no explicit guidance on when not to use it or comparison with alternatives. However, the description is clear enough for an agent to infer the appropriate scenario.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.