Foursquare
Server Details
Foursquare Places MCP (v3 Places API)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-foursquare
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 14 of 14 tools scored. Lowest: 3.3/5.
Most tools have distinct purposes with clear descriptions (e.g., ask_pipeworx for general questions, validate_claim for fact-checking, entity_profile for a full company profile). Some overlap exists between ask_pipeworx and the other data tools, but the descriptions guide appropriate usage.
Naming mixes verb_noun forms (ask_pipeworx, compare_entities, discover_tools, etc.) with noun-phrase forms (entity_profile, nearby_places, recent_changes). While the names are consistently snake_case, the ordering pattern is not uniform, causing minor inconsistency.
14 tools is within the well-scoped range (3-15). Each tool serves a distinct purpose, though some (memory, feedback) could be considered auxiliary. The count is appropriate for the combined Foursquare and Pipeworx domains.
The tool set covers query, comparison, profiling, fact-checking, and identifier resolution for data, plus place search and details. Missing specific operations like direct SEC filing listing, but these are handled by ask_pipeworx. Overall coverage is strong.
Available Tools
14 tools
ask_pipeworx (A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
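As a rough sketch, a call passes one of the example questions from the description above as the single question argument; the name/arguments envelope assumes the standard MCP tools/call parameter shape rather than anything this server documents:

```json
{
  "name": "ask_pipeworx",
  "arguments": {
    "question": "What is the current unemployment rate?"
  }
}
```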
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Describes what it does (routes across sources, picks tool, fills arguments) but lacks details on mutability, rate limits, error handling, or side effects. Could be more transparent about limitations or security.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is well-structured: first sentence states purpose, then usage guideline, then source list, then examples. It is informative and concise, though could be slightly tighter without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-param tool with no output schema, the description covers purpose, usage, examples, and source scope adequately. It could mention potential response formats or timeouts, but overall complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has one param (question) with description 'Your question or request in natural language.' Tool description adds context: it routes across sources and provides examples (e.g., 'What is the US trade deficit with China?'), enriching semantics beyond the schema. Schema coverage is 100%, so baseline 3; description adds value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by automatically picking the right data source. It distinguishes from sibling tools (e.g., compare_entities, entity_profile) by indicating it's a meta-tool for when you don't want to figure out which specific tool to call. Examples reinforce the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use when a user asks...' and 'you don't want to figure out which Pipeworx pack/tool to call.' Implies not to use when you know the specific tool, but does not explicitly mention alternatives or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
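An illustrative company-type call, reusing tickers from the description's own examples (the envelope assumes the standard MCP tools/call shape):

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT", "GOOGL"]
  }
}
```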
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It discloses data sources (SEC EDGAR/XBRL, FAERS, etc.), what is returned for each type, and output format (paired data + citation URIs). It does not mention rate limits or error handling, but overall provides sufficient transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with two main sentences covering purpose, trigger phrases, behavior, and output. It is front-loaded with the core action. A slight improvement could be separating type-specific details more clearly, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (two entity types, multiple metrics) and lack of output schema, the description provides essential information but lacks detail on the exact response structure. It mentions 'paired data' without specifying fields, which may leave some gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, and the description adds meaningful context: examples for values, explanation of what each type retrieves, and constraints like min/max items. This goes beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (compare), resource (companies or drugs), and scope (2-5 entities). It contrasts with siblings by noting it replaces multiple sequential calls, making its purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit trigger phrases and use cases (e.g., 'compare X and Y', 'vs', tables/rankings). While it does not list exclusions or alternatives, the context is clear enough for an agent to know when to invoke this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
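A sketch of a discovery query, borrowing one of the schema's example phrasings; the limit value here is an arbitrary choice within the documented range:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "look up FDA drug approvals",
    "limit": 5
  }
}
```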
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that it returns the top-N most relevant tools with names and descriptions. It implies read-only behavior but doesn't detail ranking criteria or potential limits beyond the limit parameter. Still, it provides good behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is fairly concise but includes a list of domains and clear instructions. It is well-structured, front-loading the purpose and usage. Minor redundancy but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's role as a discovery tool, the description covers purpose, usage, parameter semantics, and output behavior (returns top-N tools). No output schema is needed; the description suffices. Complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for two parameters. The description adds extra meaning: 'query' is described as a natural language description with examples, and 'limit' includes default and max values. This adds value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds tools by data or task description. It specifically lists verbs like browse, search, look up, and discover, and resource is tools. It distinguishes itself from sibling tools by being a meta-tool that returns tool names and descriptions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to call this FIRST when many tools are available to see the option set. It provides examples of when to use it (e.g., SEC filings, FDA drugs) and implies it's for discovery before using specific tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
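A minimal illustrative call using the ticker format the description recommends:

```json
{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}
```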
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry full burden. It discloses return types, supported entity type, and accepted value formats. While it lacks details on rate limits or authorization, it adequately describes the tool's behavior for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient paragraph of four sentences, front-loaded with the core purpose, and every sentence adds necessary information. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregates multiple sources) and lack of output schema, the description provides a comprehensive overview of what is returned (SEC filings, fundamentals, patents, news, LEI) with citation URIs. It could detail response structure but is sufficiently complete for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions, but the description adds value by explaining 'company' type limitations and the note about using resolve_entity for names. This augments the schema without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get everything about a company in one call' and lists specific data types (SEC filings, fundamentals, patents, news, LEI). It distinguishes from siblings like get_place (for places) and resolve_entity (name resolution), making the tool's purpose unmistakable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides use cases (e.g., 'tell me about X') and alternatives: 'if you only have a name, use resolve_entity first'. This gives clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
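A sketch of a deletion call; the key name is hypothetical and would need to match one previously saved via remember:

```json
{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}
```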
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden of behavioral disclosure. It indicates the tool is destructive ('Delete') and mentions clearing sensitive data, which implies irreversible action. However, it does not explicitly state that deletion is permanent or if any authorization is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, with the action stated first. Every sentence serves a purpose: the first states the action, the second gives usage scenarios, the third links to related tools. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter, no output schema, and no annotations, the description is complete. It covers what the tool does, when to use it, and how it relates to siblings. No additional context is necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'key' described as 'Memory key to delete'. The description adds no additional meaning beyond the schema—it simply reiterates 'by key'. Given full schema coverage, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete' and the resource 'previously stored memory by key'. It distinguishes itself from siblings like 'remember' and 'recall', which are mentioned explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool: when context is stale, task is done, or to clear sensitive data. It also suggests pairing with 'remember' and 'recall', offering clear context for usage decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_place (B)
Full place details by fsq_place_id. Returns name, categories, address, lat/lon, social media, website, hours, rating, price, popularity.
| Name | Required | Description | Default |
|---|---|---|---|
| fsq_place_id | Yes | Foursquare place ID (returned by search_places) |
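An illustrative lookup; the placeholder stands in for a real fsq_place_id obtained from a prior search_places call:

```json
{
  "name": "get_place",
  "arguments": {
    "fsq_place_id": "<fsq_place_id returned by search_places>"
  }
}
```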
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as read-only nature, authentication requirements, rate limits, or error handling for invalid IDs. The description only states what data is returned, ignoring potential side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence listing returned items, which is efficient. However, it could be slightly improved by front-loading the action ('Get full place details...') for faster scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so the description bears the burden of explaining return values. It lists many fields (name, categories, address, etc.) but not in a structured format. More details on optional fields or response structure would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides a description for the single parameter, 'Foursquare place ID (returned by search_places)'. The description adds value by listing return fields but does not enhance parameter meaning beyond the schema. Schema description coverage is 100%, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full place details by fsq_place_id, listing specific fields returned (name, categories, address, etc.). It distinguishes itself from sibling tools like search_places (which searches) and nearby_places (which finds by location).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a fsq_place_id is already known, but lacks explicit guidance on when to use this versus alternatives or when not to use it. No mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nearby_places (A)
POIs near a lat/lon without a search term. Uses the new search endpoint with location bias and no query.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Results (1-50, default 20) | |
| latitude | Yes | Latitude | |
| radius_m | No | 1-100000 metres (default 500) | |
| longitude | Yes | Longitude | |
| categories | No | Comma-separated category IDs |
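A sketch using well-known coordinates (lower Manhattan) purely for illustration; the radius and limit shown are arbitrary values within the documented ranges:

```json
{
  "name": "nearby_places",
  "arguments": {
    "latitude": 40.7128,
    "longitude": -74.006,
    "radius_m": 500,
    "limit": 10
  }
}
```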
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must carry the burden of disclosure. It mentions using 'the new search endpoint with location bias' but does not describe side effects, limitations (e.g., authentication, rate limits), or whether results are based on user data. The phrase 'no query' is useful but insufficient for full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each adding distinct value: first defines the purpose, second gives implementation detail. No fluff, front-loaded, and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 parameters and no output schema, the description covers the essential idea but omits return format, pagination, or error handling. It is minimally sufficient for a simple location tool, but leaves gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context ('without a search term') but does not elaborate on any specific parameter beyond what the schema already provides. The 'location bias' mention relates to coordinate usage but does not add to parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds points of interest (POIs) near a latitude/longitude without a search term. The resource ('POIs near a lat/lon') is specific, and the tool distinguishes itself from the sibling 'search_places' by emphasizing 'without a search term'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says it's for when you have coordinates and no query, using 'location bias'. This implies when to use and contrasts with search-term-based tools like 'search_places'. While it doesn't name alternatives, the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
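A hypothetical data_gap report illustrating all three fields; the message and context contents are invented for the example:

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "No tool currently exposes municipal bond issuance data.",
    "context": "SEC EDGAR pack"
  }
}
```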
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses behavioral traits: it is rate-limited (5 per identifier per day), free and doesn't count against quota, and that the team reads digests daily with roadmap impact. This goes beyond a simple 'submit feedback' description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a coherent block of text with no wasted sentences. However, it could be slightly more structured (e.g., bullet points) for faster scanning. Nonetheless, it is front-loaded with the main action and flows logically.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (feedback submission, no output schema), the description covers all necessary aspects: what to do, when to use it, how to format feedback, limitations, and downstream effects. No gaps are evident.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although schema coverage is 100%, the description adds meaningful context: it explains each enum value for 'type', gives length guidance for 'message' (1-2 sentences, 2000 chars max), and clarifies that 'context' is optional. This enriches the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Tell the Pipeworx team something is broken, missing, or needs to exist.' It identifies the specific resource (Pipeworx team) and action (giving feedback), and distinguishes itself from siblings by listing concrete scenarios (bug, feature, data_gap, praise).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly specifies when to use the tool for each feedback type (bug, feature, data_gap, praise) and provides negative guidance ('don't paste the end-user's prompt'). It also mentions rate limits and that it's free, offering clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
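An illustrative retrieval; per the description, omitting key entirely would instead list all saved keys. The key name is hypothetical:

```json
{
  "name": "recall",
  "arguments": {
    "key": "target_ticker"
  }
}
```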
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It explains scoping and pairing with siblings, but doesn't specify behavior for missing keys or if the operation is read-only. Still, it is sufficiently transparent for a retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, with clear separation of the action, examples, scoping, and related tools. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, usage, and behavior adequately given the lack of output schema and annotations. Missing return format details, but the presence of sibling tools helps complete the picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has one optional parameter with a description, so baseline is 3. The description adds value by explaining the dual behavior when key is omitted (list all keys) and providing context on scoping.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (retrieve/list) and the resource (saved values/keys). It distinguishes from siblings remember and forget by specifying retrieval and listing behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool (to look up previously stored context) and implicitly when not to (use remember to save, forget to delete). It also provides concrete examples of use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
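A sketch of a monitoring call using the relative-window shorthand the schema recommends; the ticker is illustrative:

```json
{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "MSFT",
    "since": "30d"
  }
}
```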
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool fans out to SEC EDGAR, GDELT, and USPTO in parallel, and describes the return structure (structured changes, count, citation URIs). This provides good behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two well-structured sentences with no wasted words. It efficiently communicates purpose, usage, and behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the three parameters and no output schema, the description adequately explains return values and behavior. It covers the multi-source fan-out and the meaning of the output, making it complete for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds value by explaining the 'since' parameter's accepted formats (ISO date or relative shorthand), providing examples for 'value' (ticker or CIK), and clarifying that 'type' only supports 'company'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose using a specific verb ('What's new') and resource ('a company'). It provides concrete example queries that distinguish it from sibling tools like entity_profile or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives multiple example user queries to indicate when to use the tool. While it does not explicitly state when not to use it, the context is clear enough for an AI agent to select it appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
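An illustrative save using one of the schema's example key names; the stored value is invented:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```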
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description explains persistence behavior (authenticated persistent, anonymous 24 hours) and scoping by identifier. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, front-loaded with purpose, then usage guidelines, then technical details. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-param tool with no output schema, description covers purpose, usage, parameter format, and behavioral notes (persistence, scoping). Complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters (100% coverage). Description adds naming conventions for keys and clarifies value can be any text. Adds value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool saves data for later reuse, with specific examples (resolved ticker, target address, user preference). Distinguishes from sibling tools recall and forget by name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use: 'when you discover something worth carrying forward'. Provides alternatives: pair with recall to retrieve, forget to delete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
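A sketch of a drug-name resolution, reusing the description's own example; the envelope assumes the standard MCP tools/call shape:

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```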
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool returns IDs plus 'pipeworx:// citation URIs' and gives example outputs. However, it does not mention any error handling or data freshness, leaving minor gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the core purpose. Examples are included efficiently, and every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema), the description covers all essential aspects: purpose, input examples, output description (IDs and citation URIs), and usage context. It is fully self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The tool description mostly repeats the parameter descriptions already present in the schema, adding no significant new semantic information beyond examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: looking up canonical identifiers for companies or drugs. It specifies the ID systems (CIK, ticker, RxCUI, LEI) and provides concrete examples, distinguishing it from sibling tools like entity_profile or get_place.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool ('when a user mentions a name and you need the CIK...') and its role in the workflow ('Use this BEFORE calling other tools that need official identifiers'). This provides clear guidance on usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_places (B)
Search Foursquare places. Combine query (e.g., "coffee", "hardware store") with a location anchor — either near ("Brooklyn, NY") or latitude+longitude. Returns name, fsq_place_id, categories, address, distance, lat/lon, popularity.
| Name | Required | Description | Default |
|---|---|---|---|
| near | No | Place name anchor ("Brooklyn, NY", "Tokyo") | |
| sort | No | RELEVANCE (default), DISTANCE, RATING, or POPULARITY | |
| limit | No | Results (1-50, default 10) | |
| query | No | Free-text query | |
| latitude | No | Center latitude (pair with longitude) | |
| radius_m | No | 1-100000 metres (only with lat/lon) | |
| longitude | No | Center longitude | |
| categories | No | Comma-separated Foursquare category IDs (fsq_category_id) |
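An illustrative search combining a free-text query with a near anchor, both taken from the description's examples; the sort and limit values are arbitrary choices from the documented options:

```json
{
  "name": "search_places",
  "arguments": {
    "query": "coffee",
    "near": "Brooklyn, NY",
    "sort": "DISTANCE",
    "limit": 5
  }
}
```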
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden but only mentions the return fields. It does not disclose any behavioral traits such as side effects, rate limits, or authorization needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main purpose, and every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters and no output schema, the description covers core behavior but lacks details on return structure, error handling, or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the combination of query with location anchor and providing examples, but does not add significant semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches Foursquare places and specifies the combination of a query with a location anchor. However, it does not explicitly differentiate from sibling tools like 'nearby_places' or 'get_place'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides guidance on combining query and location, but lacks explicit instructions on when to use this tool versus alternatives, and does not state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
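A minimal sketch reusing the example claim from the parameter description:

```json
{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}
```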
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It discloses the return values (verdict, extracted form, actual value with citation, percent delta) and the underlying process (SEC EDGAR + XBRL). It also notes that the tool replaces multiple sequential calls, providing efficiency context. It does not mention potential errors, rate limits, or authentication requirements, but the information given is sufficient for understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, each serving a clear purpose: defining the tool, providing usage guidance, specifying scope and returns, and highlighting efficiency. It is front-loaded with key information, no unnecessary words. Excellent conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description covers the essential aspects: what it does, when to use it, what it returns, and its domain limitations. It does not explain the underlying mechanics in depth, but that is not necessary for a tool with a clear single purpose. Slight gap in mentioning potential failure modes or edge cases, but overall sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (single parameter 'claim' described). The description adds example claims, which provide additional context but do not significantly enhance the meaning beyond the schema's description. Since schema coverage is high, a baseline of 3 is appropriate; the description adds marginal value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fact-checks natural-language factual claims against authoritative sources, with specific verb 'Fact-check, verify, validate, or confirm/refute' and resource 'authoritative sources'. It includes examples that clarify the domain (company-financial claims). Though it does not explicitly contrast with sibling tools, its purpose is distinct and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool ('Use when an agent needs to check whether something a user said is true') and includes query examples. It also specifies the supported domain ('v1 supports company-financial claims...'), which implicitly tells the agent not to use it for claims outside that scope. However, it does not mention alternatives or explicitly state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.