GDELT
Server Details
GDELT MCP — Global Database of Events, Language, and Tone (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-gdelt
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across all 13 scored tools.
Tools have largely distinct purposes (entity resolution, comparison, profiling, news search, memory management, feedback). However, ask_pipeworx acts as a meta-tool that could overlap with specific tools, causing minor ambiguity.
Names use lowercase underscores but mix verb_noun (compare_entities, search_articles), noun_noun (entity_profile, timeline_tone), and single verbs (remember, forget). The pattern is readable but not fully consistent.
13 tools is within the ideal 3-15 range, covering entity operations, news analysis, and memory. However, some tools (discover_tools, ask_pipeworx) are meta-tools that might be redundant if the agent can call the specific tools directly.
The tool set covers the full lifecycle for the domain: entity resolution, detailed profiles, comparisons, recent changes, news search, sentiment/volume timelines, plus memory and feedback. No obvious dead ends or missing operations.
Available Tools
14 tools
ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
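For illustration, the arguments object is just the question string; this reuses one of the description's own examples and assumes a standard MCP tools/call envelope supplied by your client:
{
  "question": "What is the US trade deficit with China?"
}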
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full responsibility for behavioral disclosure. It explains that the tool 'picks the right tool, fills the arguments, and returns the result,' but omits details on limitations, error handling, authentication requirements, or potential side effects. This is adequate but not thorough for a meta-orchestration tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise and well-structured: two core sentences immediately establish the tool's function, followed by a list of diverse examples. Every sentence serves a distinct purpose without redundancy, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (acting as an intelligent router), the description is somewhat lacking. It does not explain what happens if the question cannot be answered, if the best source is unavailable, or how responses are formatted. With no output schema, more context on return behavior would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single 'question' parameter, which already includes a natural language description. The tool's description adds value by providing illustrative examples ('What is the US trade deficit with China?'), clarifying the breadth and format of acceptable questions beyond the schema's generic phrasing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answering questions in plain English by automatically selecting the right data source and filling arguments. It effectively distinguishes itself from sibling tools like search_articles or resolve_entity by positioning itself as a meta-tool that eliminates the need to browse or learn individual tool schemas.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises users to 'just describe what you need' and provides concrete examples, implying usage when uncertain about which specific tool to call. However, it lacks explicit guidance on when not to use this tool (e.g., for batch queries or when precise control over data source is needed), which would strengthen the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
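A minimal arguments sketch for a company comparison, reusing the tickers from the description; a drug comparison would swap type to "drug" and values to names like ["ozempic","mounjaro"]:
{
  "type": "company",
  "values": ["AAPL", "MSFT", "GOOGL"]
}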
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description takes full responsibility for transparency. It discloses the data sources (SEC EDGAR, FDA) and the output (paired data + URIs). It does not cover error handling or auth requirements, but for a data retrieval tool the level of detail is reasonable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (5 sentences) and front-loaded with the main purpose. Each sentence adds value: purpose, type-specific fields, output URIs, and efficiency. Slight redundancy in the first sentence could be tightened, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers expected return data (paired data, URIs) and scope (2–5 entities). It does not detail error conditions or performance characteristics, but for a comparison tool, the information provided is sufficient for an agent to understand what it does and when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining the enumeration ('company' endpoints vs 'drug' endpoints) and by detailing how to format the 'values' array for each type (tickers/CIKs vs drug names), which goes beyond the schema's generic array description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2–5 entities side by side, differentiates by entity type ('company' or 'drug'), and lists the specific data fields returned for each type. It also highlights the efficiency gain of replacing multiple sequential calls, making the purpose very specific and distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning it replaces 8–15 sequential calls, suggesting it is intended for multi-entity comparisons. However, it does not explicitly state when not to use it or name alternative tools for single-entity lookups, which would strengthen guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
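As a rough sketch, the arguments pair the schema's example query with an illustrative limit (any value from 1 to 50; 20 is the default):
{
  "query": "analyze housing market trends",
  "limit": 10
}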
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description bears full responsibility for behavioral disclosure. It only states that the tool returns 'most relevant tools' but provides no details on how relevance is determined, sorting, rate limits, or whether the search is semantic or keyword-based. This is insufficient for an agent to fully understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, with the first sentence stating the core function and the second giving usage guidance. Every sentence adds value, and no words are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool lacks an output schema, so the description should explain return values. It mentions returning 'names and descriptions' but omits whether other metadata (e.g., tool signatures, IDs) is included. Given its role as a discovery tool, this is a notable gap, but the description is still adequate for basic understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema: it rephrases the query parameter as 'search by describing what you need' but does not elaborate on the limit parameter or provide any additional context that the schema does not already convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool searches the Pipeworx tool catalog by natural language description and returns relevant tools with names and descriptions. This purpose is distinct from sibling tools like ask_pipeworx (chat) or search_articles (article search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to call this tool FIRST when 500+ tools are available, providing clear context for when it should be used. However, it does not mention when not to use it or suggest alternatives, so it stops short of a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
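One concrete example, using the ticker from the description; passing the zero-padded CIK "0000320193" instead would be equivalent per the value docs:
{
  "type": "company",
  "value": "AAPL"
}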
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it bundles multiple data sources in one call, returns citation URIs, and excludes federal contracts due to performance. Without annotations, this provides solid behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no waste: front-loaded purpose, then bullet-like data listing, then usage caveat. Perfectly sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description specifies exact return contents (data types and URI format) and addresses sibling alternative, making it fully actionable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions cover both parameters (100% coverage), and description adds crucial context: enum limitation to 'company' and value format (ticker/CIK) with name resolution fallback.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'returns full profile' and lists concrete data sources (SEC filings, XBRL financials, patents, news, LEI), clearly distinguishing from siblings like usa_recipient_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (for comprehensive entity profile) and when not to (federal contracts → usa_recipient_profile), and instructs to resolve names first if needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description alone must disclose behavioral traits. It states it deletes a memory, which clearly indicates a destructive mutation, but it lacks details about side effects, error handling (e.g., if key does not exist), or reversibility. The minimal statement is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It is front-loaded with the essential action and resource, making it easily scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter with full schema coverage, no output schema, no annotations), the description adequately conveys the core purpose. It could mention what happens on failure or if the key is invalid, but for basic understanding it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description for 'key' is 'Memory key to delete'. The tool description adds context by stating it deletes a 'stored memory by key', implying that the key references an existing stored memory. This adds slight meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a stored memory by key' uses a specific verb ('Delete') and identifies the resource ('stored memory by key'). It clearly distinguishes the tool from siblings like 'recall' (retrieve) and 'remember' (store).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, constraints, or when it should not be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
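An illustrative call — the message text here is a made-up placeholder, but it follows the description's advice to reference Pipeworx tools rather than pasting the end-user's prompt:
{
  "type": "bug",
  "message": "compare_entities returned stale revenue figures for AAPL."
}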
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses a rate limit (5 messages per identifier per day) and a content constraint (do not include end-user prompt). It does not describe response format or permissions, but the disclosed details are valuable for agent behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with purpose, followed by use cases and behavioral notes. Every sentence adds value without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, no output schema, no annotations), the description covers purpose, usage, rate limit, and content restrictions. It is sufficiently complete for an agent to invoke the tool correctly, though it could optionally mention the expected response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional parameter meaning beyond the schema; it provides usage guidance (describe what you tried) but that pertains to message content rather than parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Send feedback to the Pipeworx team' and enumerates specific use cases (bug reports, feature requests, missing data, praise). It distinguishes this tool from sibling tools, as none of the siblings serve a feedback purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use (for feedback types) and what not to include (end-user prompt verbatim). It does not explicitly mention alternatives, but the context is clear and the tool is unique for feedback.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It discloses retrieval and listing behavior but does not specify output format, error handling (e.g., missing key), or side effects (none, but not stated).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff, front-loaded with main action. Each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool (one optional param, no output schema), description covers purpose and usage. Could mention return format or behavior when key not found, but adequate for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter. Description essentially repeats schema: retrieve by key or omit to list. Adds no new meaning beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'retrieve' and resource 'memory' with explicit behavior: retrieve by key or list all. Distinguishes from siblings 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States when to use: 'retrieve context you saved earlier in the session or in previous sessions'. Does not explicitly exclude other scenarios, but context implies it's for recall, not storage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
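A sketch of a typical monitoring call, combining the ticker example with the recommended "30d" window (an ISO date such as "2026-04-01" is also accepted for since):
{
  "type": "company",
  "value": "AAPL",
  "since": "30d"
}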
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses parallel fan-out to multiple sources, accepted input formats (ISO date and relative), and return structure (structured changes, total_changes count, URIs). No annotations, so description carries full burden; it does well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded purpose, then behavior, input, output, use cases. No wasted words; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description still covers return format. Inputs fully covered by schema and description. Use cases and behavior complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds value: examples for since ('Use 30d or 1m for typical monitoring'), interpretation of value (ticker or CIK), and reiterates only company type. Baseline at 3, exceeds with extra guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'What's new about an entity since a given point in time' and explains the fan-out behavior across SEC, GDELT, and USPTO. It differentiates from siblings like entity_profile (profile) and compare_entities (comparison).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use for "brief me on what happened with X" or change-monitoring workflows,' providing clear context. It does not list alternatives but the purpose is distinct from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
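Using the schema's example key (the stored value here is a placeholder); calling recall with the same key later returns it, and forget with the same key deletes it:
{
  "key": "target_ticker",
  "value": "AAPL"
}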
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility. It discloses important behavioral details: authenticated users get persistent memory, anonymous sessions last 24 hours. This addresses durability and session behavior, which is useful for an AI agent. No mention of side effects or limits, but sufficient for a simple store operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two sentences, front-loaded with the core action, and no redundant information. Every sentence adds value, making it easy for an AI agent to quickly grasp the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description covers the essential aspects: what the tool does, when to use it, and persistence behavior. For a simple write operation, this is sufficiently complete. It doesn't specify the return value, but that is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both key (memory key pattern) and value (any text). The description adds context on what to store but does not significantly extend the semantic meaning beyond the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (store a key-value pair) and the resource (session memory). It distinguishes from siblings like 'recall' and 'forget' by specifying the storage function, and provides context on persistence differences between authenticated and anonymous users.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly recommends using this tool for saving intermediate findings, preferences, or context across calls. It does not explicitly mention when not to use it, but the sibling tools imply retrieval and deletion, providing adequate guidance for appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
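For example, resolving a drug name taken from the description (per the text above, this should return RxCUI 1991306 plus ingredient and brand details):
{
  "type": "drug",
  "value": "ozempic"
}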
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It states the tool returns IDs and resource URIs, implying a read-like operation, but does not disclose idempotency, side effects, or error handling. Acceptable but could be more explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences, each serving a purpose: purpose, details, benefit. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 2 required parameters and no output schema, the description covers the return format (IDs and URIs) and the value range. It is adequate for the tool's simplicity, though an output schema would further reduce ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, and the description adds value by providing specific examples for 'value' (e.g., 'AAPL', 'ozempic') and clarifying the enum constraints. This goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Resolve an entity to canonical IDs' across specific data sources, with explicit support for two entity types (company, drug) and the transformation steps for each. It also highlights the efficiency gain over multiple calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies the use case: resolving entities to canonical IDs. It mentions that it replaces 2–3 lookup calls, implying efficiency. However, it does not explicitly state when not to use this tool or compare it with siblings like compare_entities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_articles
Search global news articles indexed by GDELT 2.0. Returns recent matches with URL, title, domain, source country, language, tone (-100..+100), and image. Use the query language: plain words AND together, "quotes" for phrases, parens for OR groups, "-word" to exclude, "sourcecountry:US" / "sourcelang:eng" / "theme:TERROR" / "near:Paris~50" for advanced filters.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort order: "HybridRel" (default), "DateDesc", "DateAsc", "ToneDesc", "ToneAsc" | |
| query | Yes | GDELT query string | |
| timespan | No | Lookback window: e.g., "24h", "7d", "1m", "custom" (paired with startdatetime/enddatetime). Default 7d. | |
| enddatetime | No | YYYYMMDDHHMMSS (UTC) — only with timespan=custom | |
| max_records | No | Results to return (1-250, default 25) | |
| startdatetime | No | YYYYMMDDHHMMSS (UTC) — only with timespan=custom | |
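A hypothetical query composed from the operators the description documents — quoted phrase, parenthesized OR group, "-" exclusion, and a sourcelang filter; the topic terms are placeholders, and note that the inner quotes must be escaped inside the JSON string:
{
  "query": "(\"electric vehicle\" OR EV) -rumor sourcelang:eng",
  "timespan": "7d",
  "max_records": 50,
  "sort": "DateDesc"
}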
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. Describes data source and behavior (returns recent matches with specified fields). No contradictions; assumes read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph with all key points front-loaded. Could be slightly more structured with bullet points, but no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description lists return fields. Covers all parameters via schema and query language examples. For a search tool, it is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant value by explaining query language, advanced filters (sourcecountry, theme, near), and default values for sort and timespan.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches global news articles from GDELT 2.0 and lists return fields (URL, title, domain, etc.). It differentiates from sibling tools like compare_entities or timeline_tone by focusing on article search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit query language examples and advanced filters. Warns about default sort and timespan. Does not explicitly state when not to use, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
timeline_tone
Day-by-day average tone (-100 very negative .. +100 very positive) for a GDELT query over time. Returns datapoints with timestamp and tone value. Useful for tracking sentiment shifts around a topic, person, or place.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | GDELT query string | |
| timespan | No | Lookback window (default "1m" — month) | |
| enddatetime | No | YYYYMMDDHHMMSS — only with timespan=custom | |
| startdatetime | No | YYYYMMDDHHMMSS — only with timespan=custom | |
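A sketch showing the timespan="custom" pairing with explicit start/end timestamps; the query and dates are placeholders, and omitting all three falls back to the default one-month window:
{
  "query": "\"climate change\"",
  "timespan": "custom",
  "startdatetime": "20240101000000",
  "enddatetime": "20240201000000"
}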
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses return format (datapoints with timestamp and tone) and tone range, but omits details like ordering, handling of empty results, or rate limits. Basic transparency is present but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no redundancy. It is front-loaded with the core function and efficiently conveys the tool's purpose and return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and 4 parameters, the description is fairly complete, covering the return format and use case. It lacks explicit mention of ordering or limitations, but is sufficient for a simple tool. Sibling tools are not referenced.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add meaning beyond what the schema already provides, as it only mentions the query and tone range without elaborating on parameters like timespan. No enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool computes day-by-day average tone for a GDELT query, specifying the tone range (-100 to +100). It distinguishes from siblings like timeline_volume by mentioning 'sentiment shifts,' making its purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for tracking sentiment shifts but does not explicitly compare to alternatives like timeline_volume or search_articles. It provides weak guidance on when to use vs. not use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
timeline_volume
Day-by-day article volume as % of total news for a GDELT query. Returns datapoints with timestamp and intensity. Useful for spotting topic spikes and comparing news attention across periods.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | GDELT query string | |
| timespan | No | Lookback window (default "1m") | |
| enddatetime | No | YYYYMMDDHHMMSS — only with timespan=custom | |
| startdatetime | No | YYYYMMDDHHMMSS — only with timespan=custom | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the output format ('datapoints with timestamp and intensity') and metric (percentage of total news), providing adequate transparency for a read-only query tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded, and contains no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters and no output schema, the description covers purpose and output format. It is fairly complete, though could mention pagination or limits if applicable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context about the returned data (percentage, timestamp, intensity) but does not enhance parameter explanations beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'day-by-day article volume as % of total news' for a GDELT query, with specific use cases like spotting topic spikes and comparing attention, which differentiates it from sibling tools like timeline_tone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context ('useful for spotting topic spikes and comparing news attention'), but does not specify when not to use or list alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
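The arguments object is a single claim string; reusing the schema's own example, and assuming the usual MCP tools/call envelope:
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}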
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the data sources (SEC EDGAR + XBRL), return types (verdict, structured form, citation, delta), and scope limitation (v1). It does not mention auth or rate limits, but the tool is read-only and these are not critical.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the primary purpose, and wastes no words. It efficiently conveys scope, functionality, and benefits.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (fact-checking with multiple outputs) and single parameter, the description is fairly complete. It explains return values and scope. Could mention error handling for unsupported claims, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context about supported claim types and examples but does not significantly enhance parameter semantics beyond the schema's own description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-check a natural-language claim against authoritative sources, specifically company-financial claims. It lists the verdict types and structured output, and distinguishes from siblings by noting it replaces 4-6 sequential agent calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use (fact-checking claims, especially financial) and mentions its scope limitation (v1 supports only company-financial claims). It does not explicitly state when not to use, but the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is reported as unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!