Server Details
ACLED MCP — Armed Conflict Location & Event Data Project
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-acled
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 11 of 11 tools scored. Lowest: 3.2/5.
Many tools serve overlapping functions (e.g., ask_pipeworx can do what other tools do, compare_entities and entity_profile both provide entity data). The descriptions are detailed, which helps, but an agent could still be confused about which tool to use for a given task.
Naming conventions are mixed: some use verb_noun (search_events, resolve_entity), some are single verbs (forget, recall), and others use compound names (ask_pipeworx, pipeworx_feedback). There is no consistent pattern.
The server has 11 tools, which is a reasonable number, but many of them are not directly related to ACLED (the server's namesake). The actual scope is broader than the name implies, so the tool set feels mismatched with what the server name suggests.
For the broader Pipeworx platform, the tool set covers querying, comparison, entity resolution, memory, and feedback. The two ACLED tools provide basic event search and aggregation. Minor gaps exist (e.g., no direct update/delete for external data), but overall the surface is fairly complete.
Available Tools
13 tools
ask_pipeworx (Grade: A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
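For orientation, here is a minimal sketch of calling this tool over Streamable HTTP with the official MCP Python SDK. The endpoint URL is a placeholder, since the listing above does not show one, and the question is illustrative.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; the real endpoint is not shown in this listing


async def main() -> None:
    # Open a Streamable HTTP transport, then an MCP client session on top of it.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # ask_pipeworx takes a single required argument: the question itself.
            result = await session.call_tool(
                "ask_pipeworx",
                arguments={"question": "What is the current US unemployment rate?"},
            )
            print(result.content)


asyncio.run(main())
```

The shorter sketches under the remaining tools reuse this `session` and only vary the tool name and arguments.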
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: 'Pipeworx picks the right tool, fills the arguments, and returns the result.' However, it omits limitations like handling of ambiguous questions, failure modes, or data freshness. Since no annotations are provided, the description carries the full burden, and this level of disclosure is only moderate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured into three sentences and an example list, with the main action front-loaded. It is concise overall, though it includes some repetition between 'No need to browse tools' and 'just describe what you need'. Still, every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description sufficiently explains its meta-functionality and how to use it. It adequately conveys the concept of routing questions to underlying tools, though it could be more complete by mentioning potential failure scenarios or scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema describes the single parameter 'question' as 'Your question or request in natural language', achieving 100% coverage. The description adds significant context by explaining how the parameter is processed, providing examples, and stating that answers come from the best data source. This goes beyond the schema's bare definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It uses specific verbs and resources, and distinguishes itself from sibling tools like compare_entities or search_events by positioning itself as a general-purpose question-answer interface.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It gives concrete examples and contrasts with using individual tools. However, it does not explicitly state when not to use it or provide alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade: A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
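A hedged sketch of a side-by-side company comparison, reusing the `session` from the ask_pipeworx example; the tickers are taken from the examples in the description.

```python
# Illustrative values; assumes the initialized `session` from the earlier sketch.
result = await session.call_tool(
    "compare_entities",
    arguments={"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]},
)
```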
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It details the output (paired data + URIs) and constraints (2-5 entities), but lacks information on side effects, permissions, rate limits, or data freshness. Adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and every sentence adds essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's purpose, parameters, and output format despite lacking an output schema. It omits error handling and format details, but for a 2-param tool with simple types, it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema already explains parameters. The description adds value by listing example fields for each type (e.g., revenue, adverse-event count), providing context beyond the schema's enum and array descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 entities side by side, specifies two types (company/drug) with distinct data points, and highlights efficiency gains over sequential calls, distinguishing it from siblings like search_events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (comparing entities). It implies alternatives by noting it 'replaces 8–15 sequential agent calls,' but does not name specific alternative tools or explicitly state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
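A sketch of a catalog search, again reusing the earlier session; the query and limit are illustrative.

```python
# "limit" defaults to 20 and caps at 50 per the schema; the query wording is free-form.
result = await session.call_tool(
    "discover_tools",
    arguments={"query": "look up FDA drug approvals", "limit": 10},
)
```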
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. The description implies a read-only search operation but does not explicitly state safety or behavioral traits like side effects, rate limits, or permissions. It is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at two sentences, front-loading the action ('Search...') and then providing secondary guidance. There are no unnecessary words; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and usage adequately for a simple search tool. Without an output schema it could also have explained the return format, but that is not essential. Most of the information an agent needs is present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes both parameters. The description adds minimal additional meaning beyond the schema, such as clarifying that 'query' is a 'natural language description'. The description does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'Pipeworx tool catalog', and what it returns ('most relevant tools with names and descriptions'). It distinguishes itself from sibling tools by being the only search tool on this server.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing clear when-to-use guidance. It does not mention when not to use it or list alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade: A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
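An illustrative profile lookup by ticker; per the parameter notes, names are not accepted and should first be resolved with resolve_entity.

```python
# Pass a ticker or zero-padded CIK, not a company name (illustrative value).
result = await session.call_tool(
    "entity_profile",
    arguments={"type": "company", "value": "AAPL"},
)
```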
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present, but description details the exact data returned (SEC filings, revenue, patents, news, LEI) and that it replaces 10–15 calls. Could mention that it may take longer due to multiple sources.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise yet informative; uses bullet-like structure. Slightly dense but efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Even without output schema, the description enumerates all returned data types (SEC filings, XBRL financials, patents, news, LEI) and citation URIs, providing complete context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds context: value format (ticker or CIK) and that names require resolve_entity first. No additional detail on optional parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a full profile of an entity across multiple data sources, distinguishing from siblings like resolve_entity and usa_recipient_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (for comprehensive company profile) and when to avoid (for federal contracts, directs to usa_recipient_profile).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
event_counts_by_country (Grade: B)
Aggregate event and fatality counts by country over a date range. Useful for cross-country comparison and time-bounded risk snapshots.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Underlying-event cap (1-5000, default 5000) | |
| region | No | Optional region restriction | |
| event_type | No | Optional event-type restriction | |
| event_date_to | No | YYYY-MM-DD inclusive | |
| event_date_from | No | YYYY-MM-DD inclusive | |
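A sketch of a time-bounded aggregation using the earlier session; the date range and filters are illustrative.

```python
# All parameters are optional per the schema; these filters are examples only.
result = await session.call_tool(
    "event_counts_by_country",
    arguments={
        "event_date_from": "2024-01-01",
        "event_date_to": "2024-06-30",
        "event_type": "Battles",
        "region": "Western Africa",
    },
)
```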
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the burden. It indicates aggregation (read-only implied) but does not explicitly state that no data is modified or any behavioral traits like security or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The first sentence defines the action and scope, the second provides use context. Well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema and 5 optional parameters. The description does not specify what the response structure looks like (e.g., list of countries with counts), which is necessary to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% parameter description coverage. The description adds general context ('aggregate by country, over a date range') but does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it aggregates event and fatality counts by country over a date range. However, it does not differentiate from sibling tools like search_events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a use case ('cross-country comparison, time-bounded risk snapshots') but does not offer any guidance on when to avoid using this tool or mention alternatives. Sibling search_events might serve a different purpose but no comparison is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
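A minimal sketch of deleting a stored memory; the key name is illustrative.

```python
# Removes the value stored under this key (key name is an example).
result = await session.call_tool("forget", arguments={"key": "target_ticker"})
```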
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only states 'delete,' which is minimally informative. It does not mention idempotency, error handling for missing keys, or any side effects, leaving significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no unnecessary words. It is front-loaded and to the point, earning its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema), the description is almost complete but lacks behavioral details such as return values or error conditions. It meets minimum viability but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with the parameter described as 'Memory key to delete.' The description adds no additional meaning beyond what the schema already provides, so baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a stored memory by key,' providing a specific verb and resource. It distinguishes from sibling tools 'remember' (store) and 'recall' (retrieve), making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when deletion of a memorized value is needed, but it offers no explicit guidance on when to use this tool versus alternatives or any exclusion criteria. Basic implied usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade: A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
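An illustrative feedback submission, reusing the earlier session; note the documented limit of five submissions per identifier per day. The exact shape of `context` is not documented here and is assumed.

```python
# Illustrative payload; the structure of "context" is an assumption.
result = await session.call_tool(
    "pipeworx_feedback",
    arguments={
        "type": "data_gap",
        "message": "search_events has no filter for sub-national admin regions.",
        "context": {"tool": "search_events"},
    },
)
```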
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses rate limit (5 per identifier per day) and states it's free, but lacks details on submission confirmation, response, or asynchronous behavior. With no annotations, the description carries the burden but could be more transparent about what happens after sending.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise, with purpose stated first, followed by usage guidance and constraints. It is efficient and avoids redundancy, though a slightly more structured format (e.g., bullet points) could improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple feedback tool, the description covers purpose, usage, rate limits, and content restrictions. No output schema exists, but the schema fully documents parameters. Missing details like confirmation behavior, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameter descriptions in the schema are thorough (enum options, context structure). The description adds no extra parameter guidance, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool sends feedback to the Pipeworx team, listing specific use cases: bug reports, feature requests, missing data, praise. It distinguishes from sibling tools like ask_pipeworx or discover_tools, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly describes when to use (for various feedback types) and provides guidance on content (describe in terms of Pipeworx tools/data, omit end-user prompt). Also mentions rate limits. While it doesn't discuss alternatives, the tool is unique and the guidelines are highly actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
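A sketch of both modes: retrieving a single key, and listing every stored key by omitting the argument.

```python
# Retrieve one stored value (key name is illustrative)...
one = await session.call_tool("recall", arguments={"key": "target_ticker"})
# ...or omit the key entirely to list all keys for this identifier.
all_keys = await session.call_tool("recall", arguments={})
```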
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the burden. It discloses both retrieval by key and listing all memories, and mentions persistence across sessions. No side effects are disclosed, but the tool is simple.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main action, no wasted words. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one optional parameter and no output schema, the description covers both modes adequately and provides context on session persistence.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a description for the 'key' parameter. The description adds value by explaining that omitting the key lists all memories, clarifying the optional parameter behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories, using specific verb 'retrieve' and resource 'memory'. It distinguishes from siblings like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use this tool to retrieve context saved earlier, implying appropriate usage. It does not explicitly state when not to use, but the context makes it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade: A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
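An illustrative monitoring call using the relative-date shorthand documented above, reusing the earlier session.

```python
# "since" accepts an ISO date or relative shorthand such as "30d".
result = await session.call_tool(
    "recent_changes",
    arguments={"type": "company", "value": "AAPL", "since": "30d"},
)
```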
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It discloses the parallel fan-out to SEC EDGAR, GDELT, and USPTO; describes accepted input formats; and mentions the return type (structured changes, count, URIs). It does not mention destructive behavior or rate limits, but the disclosed behavior is clear and non-contradictory.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loaded with purpose, then details fan-out, parameter usage, output, and use case. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (multi-source fan-out, relative dates) and no output schema, the description covers return format adequately (structured changes, count, URIs). It could be more specific about the structure of changes, but overall it provides sufficient context for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds some context: for 'since' it provides example values like '7d' and typical use '30d' or '1m'; for 'value' it gives examples. However, most parameter meaning is already in schema descriptions, and the description's main added value is on tool behavior rather than parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'What's new about an entity since a given point in time.' It specifies the verb (get changes) and resource (entity changes), and distinguishes from siblings like entity_profile or search_events by detailing the multi-source fan-out behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly recommends use cases: 'Use for "brief me on what happened with X" or change-monitoring workflows.' It implies the tool is for aggregated recent activity, but does not explicitly state when not to use it or compare to alternatives like search_events.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
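A sketch of storing a value for later recall; the key and value are illustrative.

```python
# Stores a key-value pair scoped to the caller's identifier.
result = await session.call_tool(
    "remember",
    arguments={"key": "target_ticker", "value": "AAPL"},
)
```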
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses memory persistence for authenticated vs anonymous users, which is sufficient for a simple store operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, concise and focused, no unnecessary words. Information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but for a simple store, the description covers most needs. Could mention return value, but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with examples. Description adds context about usage and storage duration, providing extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it stores a key-value pair in session memory, listing specific use cases like saving findings and preferences, which distinguishes it from siblings like recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explains when to use (to save intermediate findings, preferences, context) and mentions persistence differences, but does not explicitly state when not to use or directly compare to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade: A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
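An illustrative lookup mirroring the Ozempic example in the description, reusing the earlier session.

```python
# Resolves a name to canonical identifiers (RxCUI for drugs; CIK/ticker/LEI for companies).
result = await session.call_tool(
    "resolve_entity",
    arguments={"type": "drug", "value": "ozempic"},
)
```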
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It explains the tool performs a single call, returns IDs and URIs, and replaces multiple lookups. It does not mention any destructive actions, auth needs, or rate limits, but for a read-like lookup tool this is reasonable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences that are well-structured: first sentence states the main function, second adds input details, third describes output and benefit. Every sentence is useful with no redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description mentions the output format (IDs and pipeworx:// URIs). It covers all necessary aspects for a simple tool with 2 parameters and no nested objects, and even notes it replaces multiple calls.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with descriptions for both parameters. The description adds significant value by providing concrete examples for 'value' (ticker, CIK, name for company; brand/generic name for drug) and reinforces the 'type' enum. This goes beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it resolves an entity to canonical IDs across Pipeworx data sources in a single call, specifying support for company and drug types. It distinguishes itself from siblings like 'compare_entities' and 'ask_pipeworx' by being a one-call resolution tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool (when needing canonical IDs for company or drug entities) and gives examples of valid inputs. It does not explicitly state when not to use or list alternatives, but the sibling tools are different enough to imply usage boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_events (Grade: A)
Search ACLED political-violence and protest events. Filter by country (use "|" to OR, e.g., "Ukraine|Russia"), region, event_type, actor, ISO country code, or date range. Returns date, lat/lon, actors, event type, fatalities, and source notes.
| Name | Required | Description | Default |
|---|---|---|---|
| iso | No | ISO 3166-1 numeric country code (alternative to country) | |
| year | No | Restrict to a calendar year | |
| actor | No | Match actor1 or actor2 (partial substring ok) | |
| limit | No | Records to return (1-5000, default 100; ACLED max-per-call is 5000) | |
| region | No | ACLED region (e.g., "Western Africa") | |
| country | No | Country name(s), pipe-separated for OR | |
| event_type | No | One of: Battles, Protests, Riots, Explosions/Remote violence, Violence against civilians, Strategic developments | |
| event_date_to | No | YYYY-MM-DD inclusive | |
| fatalities_min | No | Minimum fatalities filter | |
| sub_event_type | No | Optional ACLED sub-event type | |
| event_date_from | No | YYYY-MM-DD inclusive | |
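A sketch of a filtered event search, reusing the earlier session; the filters are illustrative and show the pipe-separated OR syntax for country.

```python
# Illustrative filters; "country" supports pipe-separated OR values.
result = await session.call_tool(
    "search_events",
    arguments={
        "country": "Ukraine|Russia",
        "event_date_from": "2024-01-01",
        "event_type": "Battles",
        "fatalities_min": 1,
        "limit": 100,
    },
)
```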
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It lists return fields but does not disclose pagination, rate limits, data freshness, or other behavioral details. Some information (the 5000 max-per-call cap) appears only in the schema, not the description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose, filters, and returns. No redundant or extraneous words. Highly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
11 optional parameters with no output schema; description lists return fields but not structure. Could mention return format (e.g., array of objects). Adequate for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all parameters described. Description repeats some filter options but adds no significant new semantics beyond the schema. Baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches ACLED political-violence and protest events, specifies filtering capabilities, and lists return fields. It distinguishes from sibling 'event_counts_by_country' which provides aggregate counts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for event retrieval with filters, but does not explicitly state when to use this tool over sibling 'event_counts_by_country' or exclusions. No when-not or alternative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade: A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
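An illustrative fact-check call using the example claim from the parameter description, reusing the earlier session.

```python
# The claim text is taken from the schema's own example.
result = await session.call_tool(
    "validate_claim",
    arguments={"claim": "Apple's FY2024 revenue was $400 billion"},
)
```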
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains that the tool performs NL parsing, entity resolution, data lookup, and comparison, and returns a structured verdict with citation. It does not disclose side effects or potential limitations like rate limits or handling ambiguous claims, but it is fairly transparent about its process and output.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with four sentences, each serving a clear purpose: stating the overall function, specifying the supported domain, listing return values, and explaining efficiency benefits. It front-loads the primary action without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (NL claim validation) and lack of output schema, the description adequately covers what the tool does, what it returns, and its limitations (v1, specific domain). It mentions possible verdicts including 'inconclusive' and provides citation format, but could be more complete by describing error scenarios or handling of ambiguous claims.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides full coverage for the 'claim' parameter with description and examples. The tool description adds domain-specific context (company-financial claims) and clarifies that it accepts natural-language factual claims, reinforcing usage beyond the schema. This adds value beyond the parameter definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool fact-checks natural-language claims against authoritative sources, specifies the domain (company-financial for public US companies), and lists the return values including verdict, structured form, actual value with citation, and delta. It also explains that it replaces multiple sequential agent calls, making its purpose very specific and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use the tool (for company-financial claims via SEC EDGAR + XBRL) and implies it is not suitable for other claims. It mentions it replaces 4-6 sequential agent calls, providing context on efficiency. However, it does not explicitly compare to sibling tools like 'ask_pipeworx' or state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.