Server Details

Agent spending management, budget tracking, and ROI. Zero Core Budget.

Status: Healthy
Transport: Streamable HTTP
Repository: meltingpixelsai/harvey-budget
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 6 of 6 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool serves a distinct purpose: pre-spend approval, post-spend recording, analytics, server health, tool discovery, and agent registration. No overlaps.

Naming Consistency: 4/5

Most tools follow a consistent verb_noun snake_case pattern (check_spend, get_spending_report, report_spend, register_agent). 'health' deviates slightly but is a common single-word command. Minor inconsistency.

Tool Count: 5/5

Six tools cover the core workflow of budget management (register, check, report, analyze) plus utilities (health, list_tools). Well-scoped for the domain.

Completeness: 4/5

Covers the essential lifecycle: agent setup, pre-spend validation, spend recording, and analytics. Explicit category-limit management is missing, but registration supports upsert, which partially addresses updates.
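
That lifecycle maps to a natural call order. A minimal sketch, assuming nothing about the transport: the tool names come from this server, while the argument values (and the 'week' period value) are illustrative guesses, not documented defaults.

```python
# Hypothetical call order for the Harvey Budget lifecycle.
# Each entry pairs a tool name with the arguments an agent would send.
lifecycle = [
    ("register_agent", {"agent_id": "agent-1",
                        "daily_limit_usd": 5.0,
                        "weekly_limit_usd": 20.0}),              # one-time setup (upserts)
    ("check_spend", {"agent_id": "agent-1", "category": "content",
                     "amount_usd": 0.10,
                     "service_id": "harvey-tools/scrape_url"}),  # BEFORE paying
    ("report_spend", {"agent_id": "agent-1", "category": "content",
                      "amount_usd": 0.10,
                      "service_id": "harvey-tools/scrape_url",
                      "outcome_success": True}),                 # AFTER paying
    ("get_spending_report", {"agent_id": "agent-1",
                             "period": "week"}),                 # analytics
]
call_order = [name for name, _ in lifecycle]
```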

Available Tools (6 tools)
check_spend (Grade: A)

Pre-spend budget approval check. Verifies daily/weekly/category limits, estimates ROI from historical data, and suggests cheaper alternatives. Call this BEFORE paying for any service.

Parameters (JSON Schema):
- agent_id (required): Your agent identifier (must be registered first)
- category (required): Spending category
- amount_usd (required): Amount you're about to spend in USD
- service_id (required): Service you want to pay for (e.g. 'harvey-tools/scrape_url')
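
Over the server's Streamable HTTP transport, a check_spend invocation would travel as an MCP tools/call request. A sketch of that envelope, with illustrative argument values:

```python
import json

# Hypothetical MCP tools/call request for check_spend. The JSON-RPC
# envelope follows the MCP wire format; the values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_spend",
        "arguments": {
            "agent_id": "agent-1",                    # must be registered first
            "category": "content",
            "amount_usd": 0.25,
            "service_id": "harvey-tools/scrape_url",
        },
    },
}
wire = json.dumps(request)  # the serialized body sent over Streamable HTTP
```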
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description clarifies the tool is a pre-spend check (non-destructive) and outlines actions (verify, estimate, suggest). It does not specify side effects or auth requirements beyond the schema, but the guidance is sufficient for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences with the core purpose front-loaded. Every sentence adds value without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's actions but does not specify the output format, such as whether it returns a boolean approval, estimated ROI, or suggested alternatives. Given no output schema, this gap reduces completeness for an agent deciding how to use the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema provides full descriptions for all 4 parameters (100% coverage). The description adds no new parameter-specific details beyond the schema, such as format expectations for 'agent_id' or 'service_id'. Baseline score of 3 is appropriate given schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as a pre-spend budget approval check, specifying that it verifies limits, estimates ROI, and suggests cheaper alternatives. This distinguishes it from siblings like 'get_spending_report' (which likely does not perform approval checks).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly instructs 'Call this BEFORE paying for any service,' providing clear when-to-use guidance. It does not name alternatives but the context and sibling names imply the appropriate usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_spending_report (Grade: A)

Detailed spending analytics report. Returns total spent, category breakdown, top services with ROI, and optimization recommendations. Use to understand spending patterns and find inefficiencies.

Parameters (JSON Schema):
- period (required): Report period
- agent_id (required): Your agent identifier
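
Because the report's shape is not pinned down by an output schema, a defensive reader is safer than assuming fixed keys. The field names below (total_spent_usd, category_breakdown) are guesses for illustration, not the server's documented API.

```python
# Defensive summary of a spending report whose schema is undocumented.
# Missing fields degrade to placeholders instead of raising KeyError.
def summarize(report: dict) -> str:
    total = report.get("total_spent_usd", "unknown")
    by_category = report.get("category_breakdown", {})
    top = max(by_category, key=by_category.get) if by_category else "n/a"
    return f"total={total}, top_category={top}"
```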
Behavior: 2/5

Since no annotations are present, the description is the sole source. It describes the output but does not disclose any behavioral traits such as side effects, permissions, or rate limits. This is insufficient for a report tool that might have data freshness or caching considerations.

Conciseness: 5/5

The description is concise with two front-loaded sentences that immediately convey the tool's purpose and output. No unnecessary words.

Completeness: 4/5

Given the tool has no output schema, the description adequately covers what the report contains. The parameters are simple and well-documented. However, it could mention whether the report is real-time or cached, which is a minor gap.

Parameters: 3/5

Schema coverage is 100%, so both parameters are documented. The description does not add additional meaning beyond the schema's enum and descriptions. Baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool returns a detailed spending analytics report with specific components (total spent, category breakdown, etc.), and it distinguishes it from siblings like 'check_spend' by focusing on in-depth analysis.

Usage Guidelines: 4/5

It explicitly states the use case ('understand spending patterns and find inefficiencies') but does not provide guidance on when alternative tools like 'check_spend' might be more appropriate.

health (Grade: A)

Check Harvey Budget server status, uptime, and payment network configuration.

Parameters: none

Behavior: 3/5

No annotations are present, so the description carries the burden. It states what is checked (status, uptime, config) but does not mention side effects (likely none), authentication requirements, rate limits, or error behavior. Adequate for a basic health check but lacks depth.

Conciseness: 5/5

The description is a single sentence with no wasted words. It is front-loaded and efficiently conveys the tool's function.

Completeness: 3/5

Given no output schema, the description should explain return values. It only mentions what is checked but not the format of the response (e.g., JSON with status flags). Adequate for a simple health endpoint but could be more informative.

Parameters: 4/5

The input schema has zero parameters, making schema description coverage 100%. According to the guidelines, with 0 parameters, the baseline is 4. The description adds no parameter info because there are none.

Purpose: 5/5

The description clearly states the tool's purpose: checking server status, uptime, and payment network configuration. It uses a specific verb ('Check') and resource, and distinguishes itself from siblings like 'check_spend' which likely deals with spending.

Usage Guidelines: 3/5

Usage is implied by the name 'health' and the description indicating server monitoring. However, no explicit guidance on when to use this tool versus alternatives (e.g., check_spend) or any exclusions (e.g., 'use this for overall health, not for specific spending queries').

list_tools (Grade: A)

List all available Harvey Budget tools with pricing and input requirements. Use this for discovery.

Parameters: none

Behavior: 3/5

With no annotations, the description carries full burden. It mentions listing pricing and input requirements, but lacks details on auth, rate limits, or side effects. Adequate but not comprehensive.

Conciseness: 5/5

One sentence, front-loaded with the main action, no wasted words. Excellent conciseness.

Completeness: 3/5

Given no output schema and the tool's simple purpose, the description covers the basic what and when, but it lacks details on output format or limitations. Adequate but not fully complete.

Parameters: 4/5

The schema has no parameters, so the baseline is 4. The description does not need to add parameter info, and appropriately does not.

Purpose: 5/5

The description clearly specifies the verb 'List' and the resource 'all available Harvey Budget tools with pricing and input requirements', distinguishing it from sibling tools which focus on spending, health, or registration.

Usage Guidelines: 4/5

It states 'Use this for discovery', which provides clear context for when to use this tool, though it does not explicitly mention alternatives or when not to use it.

register_agent (Grade: A)

Register an agent with daily and weekly budget limits. Free tool - call this before using check_spend or report_spend. Upserts if agent already exists.

Parameters (JSON Schema):
- agent_id (required): Unique agent identifier (e.g. your wallet address or agent name)
- category_limits (optional): Per-category weekly limits (e.g. {security: 1.0, content: 2.0})
- daily_limit_usd (required): Maximum daily spending in USD
- weekly_limit_usd (required): Maximum weekly spending in USD
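
A minimal sketch of register_agent arguments, with illustrative values. The local sanity checks at the end are assumptions an agent might apply before calling, not documented server behavior:

```python
# Hypothetical register_agent arguments. category_limits is optional, and
# because the tool upserts, re-sending the call updates existing limits.
args = {
    "agent_id": "agent-1",
    "daily_limit_usd": 5.0,
    "weekly_limit_usd": 20.0,
    "category_limits": {"security": 1.0, "content": 2.0},  # per-category weekly caps
}

# Local sanity checks (assumed, not server-enforced): the weekly limit
# should be reachable under the daily cap, and per-category caps should
# fit inside the weekly limit.
assert args["daily_limit_usd"] * 7 >= args["weekly_limit_usd"]
assert sum(args["category_limits"].values()) <= args["weekly_limit_usd"]
```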
Behavior: 4/5

No annotations provided, but the description covers key behaviors: it is free (no cost), upserts (idempotent), and is a required setup step. However, it does not disclose what happens on failure or if limits are exceeded.

Conciseness: 5/5

Three concise sentences, each adding distinct value: core function, usage hint, and behavioral note (upsert). Front-loaded with key information, no wasted words.

Completeness: 4/5

For a registration tool with no output schema, the description covers the essential aspects: what it does, when to use it, and its idempotent nature. It could be more complete by specifying what 'register' means in terms of system state, but what is given is sufficient for an agent to act.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all parameters. The description only mentions 'daily and weekly budget limits' without additional context for parameters like category_limits. It adds no extra value beyond the schema.

Purpose: 5/5

The description clearly states the tool registers an agent with daily and weekly budget limits, using a specific verb and resource. It distinguishes itself from sibling tools by indicating it is a prerequisite for check_spend and report_spend.

Usage Guidelines: 5/5

Explicitly states when to use the tool ('call this before using check_spend or report_spend'), providing clear ordering context. It also labels itself as 'Free tool', implying no cost barriers.

report_spend (Grade: A)

Record a completed spend with optional outcome tracking. Call this AFTER paying for a service to track spending and build ROI history.

Parameters (JSON Schema):
- agent_id (required): Your agent identifier
- category (required): Spending category
- amount_usd (required): Amount spent in USD
- service_id (required): Service that was used
- value_received (optional): Description of value received
- outcome_success (optional): Whether the service call succeeded
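
A sketch of report_spend arguments as an agent might send them after paying; all values are illustrative. The two optional fields feed the ROI history that check_spend later draws on:

```python
# Hypothetical report_spend arguments, sent AFTER paying for a service.
args = {
    "agent_id": "agent-1",
    "category": "content",
    "amount_usd": 0.25,
    "service_id": "harvey-tools/scrape_url",
    "outcome_success": True,                              # optional outcome flag
    "value_received": "scraped 3 pages of product data",  # optional outcome note
}

# Verify locally that the four required fields are present before sending.
required = {"agent_id", "category", "amount_usd", "service_id"}
missing = required - set(args)
```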
Behavior: 3/5

With no annotations, the description must disclose behavior. It mentions recording and optional outcome tracking, but does not specify side effects, idempotency, or what happens on duplicate calls. This is adequate but not comprehensive.

Conciseness: 5/5

Two concise sentences with no wasted words. The first sentence states the core purpose, and the second adds essential usage guidance. Well-structured and easy to parse.

Completeness: 4/5

Given the parameter count and schema coverage, the description plus schema provide a clear picture of what the tool does. It lacks details on return values or error handling, but for a record-keeping tool this is acceptable.

Parameters: 3/5

The input schema has 100% coverage with descriptions for all parameters, so the schema already provides meaning. The description adds only the hint that outcome tracking is optional, which is already implied by the schema. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the action ('Record a completed spend') and the resource ('spend'). It also specifies the timing ('AFTER paying') and optional outcome tracking, distinguishing it from sibling tools like 'check_spend' and 'get_spending_report'.

Usage Guidelines: 4/5

The description explicitly advises when to use the tool ('Call this AFTER paying for a service'), providing clear context. It does not explicitly mention when not to use it or name alternatives, but the sibling tools suggest other spending-related operations.
