zero-core-budget
Server Details
Agent spending management, budget tracking, and ROI. Zero Core Budget.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: meltingpixelsai/harvey-budget
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored.
Each tool serves a distinct purpose: pre-spend approval, post-spend recording, analytics, server health, tool discovery, and agent registration. No overlaps.
Most tools follow a consistent verb_noun snake_case pattern (check_spend, get_spending_report, report_spend, register_agent). 'health' deviates slightly but is a common single-word command. Minor inconsistency.
Six tools cover the core workflow of budget management (register, check, report, analyze) plus utilities (health, list_tools). Well-scoped for the domain.
Covers the essential lifecycle: agent setup, pre-spend validation, spend recording, and analytics. Explicit category-limit management is missing, but registration supports upsert, which partially covers limit updates.
Available Tools
6 tools

check_spend
Pre-spend budget approval check. Verifies daily/weekly/category limits, estimates ROI from historical data, and suggests cheaper alternatives. Call this BEFORE paying for any service.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Your agent identifier (must be registered first) | |
| category | Yes | Spending category | |
| amount_usd | Yes | Amount you're about to spend in USD | |
| service_id | Yes | Service you want to pay for (e.g. 'harvey-tools/scrape_url') | |
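Since the listing publishes only the input schema, here is a minimal sketch of what a check_spend call might look like from an MCP client, assuming the current MCP Python SDK. The endpoint URL, agent ID, and category name are placeholder assumptions (the URL field above is empty), and the response handling is generic because no output schema is published. The later snippets reuse the same session object.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: the listing above does not show the server URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Pre-spend approval check; the agent must already be
            # registered via register_agent (see below).
            result = await session.call_tool(
                "check_spend",
                arguments={
                    "agent_id": "my-agent",  # hypothetical identifier
                    "category": "content",   # hypothetical category name
                    "amount_usd": 0.05,
                    "service_id": "harvey-tools/scrape_url",
                },
            )
            # No output schema is published, so inspect raw content blocks.
            for block in result.content:
                print(block)

asyncio.run(main())
```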
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description clarifies the tool is a pre-spend check (non-destructive) and outlines actions (verify, estimate, suggest). It does not specify side effects or auth requirements beyond the schema, but the guidance is sufficient for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences with the core purpose front-loaded. Every sentence adds value without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's actions but does not specify the output format, such as whether it returns a boolean approval, estimated ROI, or suggested alternatives. Given no output schema, this gap reduces completeness for an agent deciding how to use the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides full descriptions for all 4 parameters (100% coverage). The description adds no new parameter-specific details beyond the schema, such as format expectations for 'agent_id' or 'service_id'. Baseline score of 3 is appropriate given schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as a pre-spend budget approval check, specifying that it verifies limits, estimates ROI, and suggests cheaper alternatives. This distinguishes it from siblings like 'get_spending_report' (which likely does not perform approval checks).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs 'Call this BEFORE paying for any service,' providing clear when-to-use guidance. It does not name alternatives but the context and sibling names imply the appropriate usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_spending_report
Detailed spending analytics report. Returns total spent, category breakdown, top services with ROI, and optimization recommendations. Use to understand spending patterns and find inefficiencies.
| Name | Required | Description | Default |
|---|---|---|---|
| period | Yes | Report period | |
| agent_id | Yes | Your agent identifier | |
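Reusing the session from the check_spend sketch above, a report request might look like the following; the 'weekly' period value is an assumption, since the enum behind the period parameter is not shown here.

```python
# Hypothetical period value; the schema's enum is not shown above.
report = await session.call_tool(
    "get_spending_report",
    arguments={"period": "weekly", "agent_id": "my-agent"},
)
```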
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are present, the description is the sole source. It describes the output but does not disclose any behavioral traits such as side effects, permissions, or rate limits. This is insufficient for a report tool that might have data freshness or caching considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two front-loaded sentences that immediately convey the tool's purpose and output. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, the description adequately covers what the report contains. The parameters are simple and well-documented. However, it could mention whether the report is real-time or cached, which is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so both parameters are documented. The description does not add additional meaning beyond the schema's enum and descriptions. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a detailed spending analytics report with specific components (total spent, category breakdown, etc.), and it distinguishes from siblings like 'check_spend' by focusing on in-depth analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states the use case ('understand spending patterns and find inefficiencies') but does not provide guidance on when alternative tools like 'check_spend' might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health
Check Harvey Budget server status, uptime, and payment network configuration.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
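With no parameters, a call reduces to the tool name alone; a sketch, again inside the session from the first example:

```python
# No arguments; returns server status, uptime, and payment network
# configuration in an unspecified format.
status = await session.call_tool("health", arguments={})
```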
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the burden. It states what is checked (status, uptime, config) but does not mention side effects (likely none), authentication requirements, rate limits, or error behavior. Adequate for a basic health check but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It is front-loaded and efficiently conveys the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain return values. It only mentions what is checked but not the format of the response (e.g., JSON with status flags). Adequate for a simple health endpoint, though it could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, making schema description coverage 100%. According to the guidelines, with 0 parameters, the baseline is 4. The description adds no parameter info because there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: checking server status, uptime, and payment network configuration. It uses a specific verb ('Check') and resource, and distinguishes itself from siblings like 'check_spend' which likely deals with spending.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by the name 'health' and the description indicating server monitoring. However, there is no explicit guidance on when to use this tool versus alternatives (e.g., check_spend), nor any exclusions (e.g., 'use this for overall health, not for specific spending queries').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tools
List all available Harvey Budget tools with pricing and input requirements. Use this for discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
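A discovery call is equally minimal. Note that MCP clients also have protocol-level discovery (session.list_tools() in the Python SDK), whereas this application-level tool additionally reports pricing; the comparison below is a sketch under that assumption.

```python
# Application-level discovery, including per-tool pricing.
catalog = await session.call_tool("list_tools", arguments={})

# Protocol-level discovery, for comparison (names and schemas only).
protocol_tools = await session.list_tools()
```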
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It mentions listing pricing and input requirements, but lacks details on auth, rate limits, or side effects. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, front-loaded with the main action, no wasted words. Excellent conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema and the tool's simple purpose, the description covers the basic what and when, but it lacks details on output format or limitations. Adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so baseline is 4. Description does not need to add param info, and it doesn't, which is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly specifies verb 'List' and resource 'all available Harvey Budget tools with pricing and input requirements', distinguishing it from sibling tools which focus on spending, health, or registration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Use this for discovery' which provides clear context for when to use this tool, though it doesn't explicitly mention alternatives or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_agent
Register an agent with daily and weekly budget limits. Free tool - call this before using check_spend or report_spend. Upserts if agent already exists.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Unique agent identifier (e.g. your wallet address or agent name) | |
| category_limits | No | Optional per-category weekly limits (e.g. {security: 1.0, content: 2.0}) | |
| daily_limit_usd | Yes | Maximum daily spending in USD | |
| weekly_limit_usd | Yes | Maximum weekly spending in USD | |
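Since registration is a prerequisite for check_spend and report_spend, it would be the first call in practice. A sketch with illustrative limit values, borrowing the category names from the schema's own example:

```python
# Upserts if the agent already exists, so re-running is safe.
await session.call_tool(
    "register_agent",
    arguments={
        "agent_id": "my-agent",  # e.g. wallet address or agent name
        "daily_limit_usd": 1.0,
        "weekly_limit_usd": 5.0,
        "category_limits": {"security": 1.0, "content": 2.0},  # optional
    },
)
```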
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description covers key behaviors: it is free (no cost), upserts (idempotent), and is a required setup step. However, it does not disclose what happens on failure or if limits are exceeded.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each adding distinct value: core function, usage hint, and behavioral note (upsert). Front-loaded with key information, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a registration tool with no output schema, the description covers the essential aspects: what it does, when to use it, and its idempotent nature. It could be more complete by specifying what 'register' means in terms of system state, but what is given is sufficient for an agent to act.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description only mentions 'daily and weekly budget limits' without additional context for parameters like category_limits. It adds no extra value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool registers an agent with daily and weekly budget limits, using a specific verb and resource. It distinguishes itself from sibling tools by indicating it is a prerequisite for check_spend and report_spend.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool ('call this before using check_spend or report_spend'), providing clear ordering context. It also labels itself as 'Free tool', implying no cost barriers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
report_spend
Record a completed spend with optional outcome tracking. Call this AFTER paying for a service to track spending and build ROI history.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Your agent identifier | |
| category | Yes | Spending category | |
| amount_usd | Yes | Amount spent in USD | |
| service_id | Yes | Service that was used | |
| value_received | No | Description of value received | |
| outcome_success | No | Whether the service call succeeded | |
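Closing the loop after a payment might look like this; the two optional fields feed the ROI history that check_spend's estimates draw on, and all values shown are illustrative.

```python
# Record the spend AFTER paying; optional fields build ROI history.
await session.call_tool(
    "report_spend",
    arguments={
        "agent_id": "my-agent",
        "category": "content",
        "amount_usd": 0.05,
        "service_id": "harvey-tools/scrape_url",
        "value_received": "scraped one product page",  # optional
        "outcome_success": True,                       # optional
    },
)
```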
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It mentions recording and optional outcome tracking, but does not specify side effects, idempotency, or what happens on duplicate calls. This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. The first sentence states the core purpose, and the second adds essential usage guidance. Well-structured and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the parameter count and schema coverage, the description plus schema provide a clear picture of what the tool does. It lacks details on return values or error handling, but for a record-keeping tool this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all parameters, so the schema already provides meaning. The description adds only the hint that outcome tracking is optional, which is already implied by the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Record a completed spend') and the resource ('spend'). It also specifies the timing ('AFTER paying') and optional outcome tracking, distinguishing it from sibling tools like 'check_spend' and 'get_spending_report'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises when to use the tool ('Call this AFTER paying for a service'), providing clear context. It does not explicitly mention when not to use it or name alternatives, but the sibling tools suggest other spending-related operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.