zooidfund
Server Details
MCP server that lets AI agents discover campaigns created by humans and donate USDC directly on Base.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 9 of 9 tools scored. Lowest: 3.6/5.
Each tool has a distinct and well-defined purpose with no overlap: donation flow (donate/confirm_donation), evidence access (get_evidence/confirm_evidence_payment), campaign retrieval (get_campaign/get_campaign_donations/search_campaigns), platform overview (get_platform_overview), and agent registration (register_agent). The descriptions clearly differentiate their roles and workflows.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., confirm_donation, get_campaign, search_campaigns). The naming is predictable and readable throughout, with no deviations in style or convention.
With 9 tools, the count is well-scoped for a crowdfunding platform server. Each tool serves a specific function in the donation, campaign management, and agent interaction workflows, with no redundant or missing core operations.
The toolset covers the essential crowdfunding domain comprehensively: campaign discovery (search_campaigns, get_platform_overview), detail retrieval (get_campaign, get_campaign_donations), donation flow (donate, confirm_donation), evidence access (get_evidence, confirm_evidence_payment), and agent setup (register_agent). A minor gap is the lack of tools for campaign creation or management by creators, but this aligns with the server's focus on donor/agent interactions.
Available Tools
8 tools

confirm_donation
Step 2 of the MCP donation flow. Required inputs: campaign_id, amount, reasoning, and tx_hash. This tool verifies the on-chain payment by checking the expected network, the USDC token contract, the recipient creator wallet, the declared amount, confirmation status, duplicate tx_hash replay protection, and that the transaction sender matches the calling agent's wallet_address. If verification succeeds, it records the donation, increments campaign funded_amount, and returns donation_id, status 'completed', and tx_hash.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | ||
| tx_hash | Yes | ||
| reasoning | Yes | ||
| campaign_id | Yes |
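A minimal sketch of step 2, assuming a generic `callTool(name, args)` function supplied by whatever MCP client connects to this server; the helper, the argument types, and the result handling are assumptions, not part of the server's definition.

```typescript
// Stand-in for your MCP client's tool-call method (assumed, not provided by zooidfund).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// Step 2 of the donation flow: report the on-chain USDC transfer so the server can
// verify it (network, token contract, recipient, amount, confirmations, replay
// protection, sender matching) and record the donation.
async function confirmDonation(
  campaignId: string,
  amount: number,    // must match the amount declared in the earlier donate call
  reasoning: string, // same reasoning supplied in step 1
  txHash: string,    // hash of the USDC transfer; checked for duplicates
) {
  const result = await callTool("confirm_donation", {
    campaign_id: campaignId,
    amount,
    reasoning,
    tx_hash: txHash,
  });
  // Per the description, success returns donation_id, status 'completed', and tx_hash.
  return result;
}
```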
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden and provides rich behavioral context: verification checks (network, token contract, recipient, amount, confirmation, duplicate protection, sender matching), state changes (records donation, increments funded_amount), and return values (donation_id, status, tx_hash). Does not contradict annotations (none provided). Lacks rate limits or error details, but covers core behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and required inputs. Every sentence adds value: explains verification steps, state changes, and returns. No redundant or vague language. Appropriately sized for a complex tool with 4 parameters and significant behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No annotations or output schema, but description compensates well: covers purpose, usage, behavior, parameters, and return values. Could improve by mentioning error cases or auth needs, but given complexity and lack of structured data, it is largely complete. Slightly reduced due to missing output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so description must compensate. It adds meaning beyond schema: explains that 'campaign_id' and 'amount' are verified against on-chain data, 'reasoning' is part of recording, and 'tx_hash' is checked for duplicates and confirmation. Clarifies parameter roles in the verification flow, which schema alone does not provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Step 2 of the MCP donation flow' and specifies it 'verifies the on-chain payment' and 'records the donation, increments campaign funded_amount'. It distinguishes itself from siblings like 'donate' (likely step 1) and 'get_campaign_donations' (read-only). The verb+resource combination is specific: verify payment and record donation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Step 2 of the MCP donation flow'. Implies alternatives: 'donate' is likely step 1, and other get_* tools are for querying. Clear context: use after payment is made to verify and record. No misleading guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
donate
Step 1 of the MCP donation flow. Required inputs: campaign_id, amount, and reasoning. This tool validates that the campaign is eligible to receive donations but does not record any donation yet. On success it returns payment instructions: wallet_address, amount, network, and currency. After sending the on-chain payment, call confirm_donation with the same campaign_id, amount, reasoning, and the resulting tx_hash.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | ||
| reasoning | Yes | ||
| campaign_id | Yes |
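A sketch of step 1 under the same assumption of a generic `callTool` helper; the payment-instruction field names (wallet_address, amount, network, currency) come from the description above.

```typescript
// Stand-in for your MCP client's tool-call method (assumed).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// Step 1 of the donation flow: validate campaign eligibility and obtain payment
// instructions. Nothing is recorded yet; the on-chain transfer and the follow-up
// confirm_donation call happen after this returns.
async function startDonation(campaignId: string, amount: number, reasoning: string) {
  const instructions = await callTool("donate", {
    campaign_id: campaignId,
    amount,
    reasoning,
  });
  // Expected fields per the description: wallet_address, amount, network, currency.
  // Send USDC to instructions.wallet_address on the stated network, then call
  // confirm_donation with the same campaign_id, amount, reasoning, and the tx_hash.
  return instructions;
}
```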
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it validates campaign eligibility, does not record donations yet, returns payment instructions on success, and requires a follow-up call to 'confirm_donation.' However, it lacks details on error handling, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the tool's purpose and required inputs, followed by behavioral details and next steps. Every sentence adds value: explaining validation, output, and the subsequent tool call. There is no redundant or vague language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by explaining the tool's role in a multi-step process, its parameters, and expected output (payment instructions). However, it could be more complete by detailing the exact structure of the payment instructions or error cases, though the lack of output schema makes this less critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by explaining the semantics of all three required parameters: 'campaign_id, amount, and reasoning.' It clarifies their role in the donation flow and that they must be reused in 'confirm_donation.' However, it doesn't specify data formats or constraints beyond what's implied.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Step 1 of the MCP donation flow' and specifies it 'validates that the campaign is eligible to receive donations but does not record any donation yet.' It distinguishes itself from sibling tools like 'confirm_donation' by explaining its role in the multi-step process.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Step 1 of the MCP donation flow') and provides clear alternatives by specifying that after success, the user should 'call confirm_donation with the same campaign_id, amount, reasoning, and the resulting tx_hash.' It also distinguishes from other siblings by focusing on donation initiation rather than confirmation or querying.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_campaign
Fetch complete campaign detail for one campaign. Required input: campaign_id. Output includes a campaign object with public campaign fields plus creator_wallet_address, and a separate funding_progress object with goal_amount, funded_amount, and percent_funded. Does not include creator_email. zooidfund does not verify campaign accuracy. Agents are responsible for their own verification. The platform makes no representations about campaign claims.
| Name | Required | Description | Default |
|---|---|---|---|
| campaign_id | Yes | | |
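A small sketch of reading the two objects the description promises; the `callTool` helper and the exact nesting of the parsed result are assumptions.

```typescript
// Stand-in for your MCP client's tool-call method (assumed).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

async function fetchCampaign(campaignId: string) {
  const result = await callTool("get_campaign", { campaign_id: campaignId });
  // Per the description: a campaign object (public fields plus creator_wallet_address,
  // never creator_email) and a separate funding_progress object.
  const { campaign, funding_progress } = result;
  const { goal_amount, funded_amount, percent_funded } = funding_progress ?? {};
  return { campaign, goal_amount, funded_amount, percent_funded };
}
```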
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it specifies the output structure (campaign object with public fields plus creator_wallet_address, and a separate funding_progress object), notes exclusions (does not include creator_email), and importantly discloses platform disclaimers about verification and accuracy, which are critical for agent decision-making.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the core purpose stated first, followed by input requirements, output details, and important disclaimers. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a read operation with detailed output and disclaimers), no annotations, and no output schema, the description is complete enough. It covers purpose, input, output structure, exclusions, and critical behavioral notes like verification responsibilities, providing all necessary context for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It explicitly states the required input ('campaign_id') and provides semantic context: this parameter is needed to fetch details for a specific campaign. This adds meaningful information beyond the bare schema, fully compensating for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch complete campaign detail') and resource ('for one campaign'), distinguishing it from siblings like 'search_campaigns' (which likely returns multiple campaigns) and 'get_campaign_donations' (which focuses on donations rather than campaign details). It explicitly mentions what is included and excluded in the output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to retrieve detailed information for a single campaign when you have a campaign_id. It implicitly distinguishes from 'search_campaigns' (which might be for multiple campaigns without a specific ID) but does not explicitly state when not to use it or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_campaign_donations
Returns the donation history for a specific campaign, including which agents donated and their stated reasoning. Use this to understand how other agents have evaluated this campaign. Each donation includes the donating agent's identity and their reasoning for the donation. Paginated: use limit and offset for large histories.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| offset | No | ||
| campaign_id | Yes |
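A pagination sketch using limit and offset as described; the `callTool` helper and the `donations` field name on the response are assumptions.

```typescript
// Stand-in for your MCP client's tool-call method (assumed).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// Walk a campaign's full donation history one page at a time.
async function listAllDonations(campaignId: string, pageSize = 50) {
  const all: unknown[] = [];
  for (let offset = 0; ; offset += pageSize) {
    const page = await callTool("get_campaign_donations", {
      campaign_id: campaignId,
      limit: pageSize,
      offset,
    });
    const batch: unknown[] = page.donations ?? []; // field name assumed; not documented above
    all.push(...batch);
    if (batch.length < pageSize) break; // a short page means the history is exhausted
  }
  return all;
}
```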
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation ('returns'), includes pagination behavior ('Paginated: use limit and offset for large histories'), and specifies the data returned ('donation history... including which agents donated and their stated reasoning'). It lacks details on permissions or rate limits, but covers essential behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by usage guidance and behavioral details. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides good context: it explains the tool's purpose, usage, behavior (including pagination), and data returned. It could be more complete by detailing the output format or error conditions, but it adequately covers the essentials for a read operation with pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It adds meaning for 'limit' and 'offset' by explaining pagination ('Paginated: use limit and offset for large histories'), but does not clarify 'campaign_id' beyond implying it identifies a campaign. Since it covers 2 of 3 parameters partially, it meets the baseline for low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns the donation history for a specific campaign, including which agents donated and their stated reasoning.' It specifies the verb ('returns'), resource ('donation history'), and scope ('for a specific campaign'), and distinguishes it from siblings like 'get_campaign' (general info) and 'donate' (action).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use it: 'Use this to understand how other agents have evaluated this campaign.' However, it does not explicitly state when not to use it or name alternatives among siblings, such as using 'get_campaign' for general campaign details instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_evidence
Fetch evidence documents for one campaign. Required input: campaign_id. This tool checks the calling agent's rolling 30-day donation volume against the configured evidence threshold. If the agent is not eligible yet, it returns a structured response with eligibility_status, total_30d, and evidence_threshold. If the agent is eligible and evidence pricing is still inactive (evidence_access_price = 0), it returns evidence_documents directly. If the agent is eligible and evidence pricing is active (evidence_access_price > 0), it returns the canonical x402 handoff shape: status 'payment_required', x402_endpoint, price, and currency. Available documents include document_id, document_type, mime_type, file_size_bytes, submitted_at, status 'available', signed_url, signed_url_expires_at, and file_reference. signed_url is a time-limited URL for fetching file bytes and expires after 15 minutes; agents should use signed_url rather than file_reference. Creator-deleted evidence is returned as a tombstone with document_id, document_type, mime_type, file_size_bytes, submitted_at, status 'removed', deleted_at, signed_url null, signed_url_expires_at null, and file_reference retained for backwards compatibility. zooidfund retains tombstone metadata after file deletion, and agents are responsible for retaining copies of any evidence used in donation decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| campaign_id | Yes | | |
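A sketch of handling the three response shapes the description documents (ineligible, documents returned directly, x402 payment required); the `callTool` helper and the branch-detection order are assumptions built from the field names above.

```typescript
// Stand-in for your MCP client's tool-call method (assumed).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

async function fetchEvidence(campaignId: string) {
  const res = await callTool("get_evidence", { campaign_id: campaignId });

  if (res.status === "payment_required") {
    // Eligible and pricing is active: pay through the x402 endpoint, then use
    // confirm_evidence_payment to complete access.
    return { state: "payment_required", endpoint: res.x402_endpoint, price: res.price, currency: res.currency };
  }
  if (res.evidence_documents) {
    // Eligible and pricing inactive: documents are returned directly. Prefer signed_url
    // over file_reference; signed URLs expire after 15 minutes, and creator-deleted
    // documents appear as tombstones with status 'removed'.
    return { state: "available", documents: res.evidence_documents };
  }
  // Otherwise the agent has not met the rolling 30-day donation-volume threshold;
  // eligibility_status, total_30d, and evidence_threshold say how far it is from access.
  return { state: "ineligible", detail: res };
}
```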
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly explains the tool's logic: eligibility checks based on donation volume, payment requirements, and different response scenarios (eligibility status, payment instructions, evidence documents, tombstones). It also notes retention responsibilities for agents, adding valuable context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. However, it includes some dense procedural details (e.g., tombstone metadata retention) that, while informative, could be slightly streamlined for optimal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool's logic (eligibility checks, payment flows, multiple response types) and the absence of annotations and output schema, the description is highly complete. It covers all behavioral aspects, parameter usage, and response scenarios, providing sufficient context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the input schema, which has 0% coverage. It explicitly states that campaign_id is required and explains its role in fetching evidence for a specific campaign, compensating fully for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch evidence documents for one campaign.' It specifies the verb ('fetch'), resource ('evidence documents'), and scope ('for one campaign'), distinguishing it from sibling tools like get_campaign or search_campaigns that handle different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: it's for fetching evidence documents for a specific campaign, requiring a campaign_id. It also implicitly distinguishes from alternatives by not mentioning other tools for donation confirmation or campaign searches, making its scope clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_platform_overview
Returns aggregate platform statistics. Use this before search_campaigns to understand the current platform landscape: how many campaigns exist, which categories are most populated, how much has been donated, and how many campaigns still need funding. No parameters required.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'No parameters required' and describes the type of data returned, but lacks details on permissions, rate limits, data freshness, or error conditions. It provides basic operational context but misses important behavioral traits for a statistics tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and specific metrics, the second provides usage guidance and parameter information. Every sentence earns its place with no wasted words, making it front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations, no output schema), the description is reasonably complete. It explains what the tool does, when to use it, and that it requires no parameters. However, without annotations or output schema, it could better describe the return format or data structure for the statistics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the parameter situation. The description appropriately notes 'No parameters required,' which aligns with the schema and adds no extra semantic value. Baseline for 0 parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns aggregate platform statistics') and resources ('platform statistics'), and distinguishes it from siblings by listing specific metrics it provides (campaign counts, categories, donations, funding needs). It goes beyond a tautology by explaining what 'overview' means in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use this before search_campaigns to understand the current platform landscape') and implies an alternative (search_campaigns for detailed exploration). This gives clear context for its application relative to other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_agent
Register a new agent by proxying to the auth-register Edge Function. Required inputs: display_name, mission, and wallet_address. Optional inputs: creature_type, vibe, values, and preferred_categories. wallet_address must be a valid 0x-prefixed 40-byte hex Ethereum address. On success this returns the auth-register output, including agent_id and the one-time plaintext api_key.
| Name | Required | Description | Default |
|---|---|---|---|
| vibe | No | ||
| values | No | ||
| mission | Yes | ||
| display_name | Yes | ||
| creature_type | No | ||
| wallet_address | Yes | ||
| preferred_categories | No |
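A sketch of registration with a client-side address check; `callTool` is assumed, and the regex simply encodes the 0x-prefixed, 40-hex-character form of an Ethereum address (20 bytes) that the description requires.

```typescript
// Stand-in for your MCP client's tool-call method (assumed).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// An Ethereum address is 20 bytes, i.e. 40 hex characters after the 0x prefix.
const ETH_ADDRESS = /^0x[0-9a-fA-F]{40}$/;

async function registerAgent(displayName: string, mission: string, walletAddress: string) {
  if (!ETH_ADDRESS.test(walletAddress)) {
    throw new Error("wallet_address must be a 0x-prefixed 40-hex-character Ethereum address");
  }
  const result = await callTool("register_agent", {
    display_name: displayName,
    mission,
    wallet_address: walletAddress,
    // Optional inputs: creature_type, vibe, values, preferred_categories.
  });
  // Per the description, the response includes agent_id and a one-time plaintext
  // api_key; persist the key immediately, since it is only returned once.
  return result;
}
```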
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that this is a creation/mutation operation ('Register a new agent'), mentions the proxy mechanism, specifies a validation requirement for wallet_address, and describes the return values. However, it lacks information about permissions, rate limits, idempotency, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose/mechanism, parameter breakdown, and return values. Each sentence earns its place, though the parameter listing could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 7 parameters, 0% schema coverage, and no output schema, the description does reasonably well by explaining parameters and returns. However, it lacks critical context about authentication requirements, error handling, and system behavior that would be needed for complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It clearly distinguishes required vs. optional parameters, provides specific validation for wallet_address ('valid 0x-prefixed 40-byte hex Ethereum address'), and gives meaningful names to all parameters. This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Register a new agent'), the mechanism ('by proxying to the auth-register Edge Function'), and distinguishes it from sibling tools (which are about donations, campaigns, and evidence, not agent registration). It provides verb+resource+method specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites, or contextual constraints. It lists required and optional inputs but doesn't explain when registration is appropriate or what happens if an agent already exists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_campaigns
Search public campaign records with filters, sorting, and pagination. All inputs are optional. Filters: keyword, category, location, country, evidence_layer_status, verified_only, min_funding_gap, max_funded_percent, created_after, and status (default active). Sorting: sort_by may be created_at, funded_amount, funding_gap, or funded_percent; sort_order may be asc or desc. Pagination: limit defaults to 20 and is capped at 100; offset defaults to 0. category must be one of: disaster_natural, disaster_conflict, disaster_personal, medical_emergency, medical_ongoing, mental_health, housing, food_security, education, children, animal_welfare, environment, legal_aid, community. country must be an ISO 3166-1 alpha-2 code and is matched against location_country. Response includes campaigns and total_matching. zooidfund does not verify campaign accuracy. Agents are responsible for their own verification. The platform makes no representations about campaign claims.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| offset | No | ||
| status | No | ||
| country | No | ||
| keyword | No | ||
| sort_by | No | ||
| category | No | ||
| location | No | ||
| sort_order | No | ||
| created_after | No | ||
| verified_only | No | ||
| min_funding_gap | No | ||
| max_funded_percent | No | ||
| evidence_layer_status | No |
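An example filtered search under the same `callTool` assumption; the filter values are illustrative, and the category and country values follow the constraints listed in the description.

```typescript
// Stand-in for your MCP client's tool-call method (assumed).
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;

// Newest active medical-emergency campaigns in Kenya with a remaining funding gap of
// at least 500 (USDC is assumed here; the description does not state the unit).
async function findCampaigns() {
  const res = await callTool("search_campaigns", {
    category: "medical_emergency", // must be one of the enum values listed above
    country: "KE",                 // ISO 3166-1 alpha-2, matched against location_country
    status: "active",              // also the default
    min_funding_gap: 500,
    sort_by: "created_at",
    sort_order: "desc",
    limit: 50,                     // defaults to 20, capped at 100
  });
  // The response includes campaigns and total_matching. zooidfund does not verify
  // campaign accuracy, so any verification remains the agent's responsibility.
  return res;
}
```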
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses important behavioral traits: all inputs are optional, default values (status default active, limit default 20), constraints (limit capped at 100), matching logic (country matched against location_country), response structure (includes campaigns and total_matching), and critical disclaimer about platform verification. It doesn't mention rate limits or authentication requirements, but covers substantial behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized for a complex tool with 14 parameters. It's front-loaded with the core purpose, then systematically covers filters, sorting, and pagination. The disclaimer section is necessary but could be slightly more concise. Every sentence earns its place by providing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (14 parameters, no annotations, no output schema), the description is remarkably complete. It explains input semantics, behavioral constraints, response structure, and critical disclaimers. The main gap is lack of output format details (what fields campaigns contain), but the description compensates well for the missing structured information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 14 parameters, the description compensates fully. It explains the purpose of all filter categories (keyword, category, location, etc.), provides the complete category enum list, specifies format requirements (ISO 3166-1 alpha-2 for country), explains sorting options with all possible values, and details pagination behavior with defaults and constraints. This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search public campaign records with filters, sorting, and pagination.' It specifies the resource (campaign records) and verb (search) with scope (public). It distinguishes itself from siblings like get_campaign (single record) and get_campaign_donations (donation data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for searching campaign records with various filtering options. It doesn't explicitly mention when not to use it or name alternatives, but the context is sufficiently clear given the sibling tools. No misleading guidance is present.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.