
Agent Revenue Copilot

Server Details

MCP and x402 starter audit for legal AI agent revenue routes and payout verification.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Olddun/earn10-clawtasks-deliverables
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 6 of 6 tools scored. Lowest: 2.7/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but buyer_routes and route_triage both deal with recommending routes and could be confused. failure_paths also relates to routes, but its purpose of skipping routes is distinct.

Naming Consistency: 5/5

All tool names follow a consistent lowercase_with_underscores pattern, using compound nouns (e.g., buyer_routes, payment_status, product_manifest). No mixing of conventions.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a revenue copilot domain. Each tool covers a specific aspect (routes, payment, product, triage) without being bloated.

Completeness: 4/5

The tool set covers key informational aspects like routes, payment status, and product manifest, but lacks action-oriented tools (e.g., execute purchase, update settings). Minor gap for a read-only service.

Available Tools

6 tools
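
Before calling any of these, an agent can enumerate them with a standard MCP tools/list request. A minimal sketch over the server's Streamable HTTP transport (the endpoint URL is a placeholder, and the send step is shown only as a comment):

```python
import json

# Minimal sketch of an MCP tools/list request (JSON-RPC 2.0).
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
body = json.dumps(payload)
print(body)
# POST `body` to the server URL with Content-Type: application/json and
# Accept: application/json, text/event-stream; the result.tools array
# should list the six tools documented below.
```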
buyer_routes: A

Return the recommended checkout and fallback purchase routes for a $9.90 starter audit.

Parameters: none
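
Because the tool takes no parameters, invoking it is just a tools/call request with an empty arguments object. A hedged sketch (endpoint URL omitted as a placeholder):

```python
import json

# buyer_routes takes no parameters, so "arguments" is an empty object.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "buyer_routes", "arguments": {}},
}
body = json.dumps(payload)
print(body)
# POST `body` to the server's Streamable HTTP endpoint; the result should
# carry the recommended checkout and fallback purchase routes.
```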

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full behavioral disclosure. It only states 'Return', implying a read operation, but lacks details on side effects, authentication requirements, rate limits, or failure modes. Significant gaps remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-formed sentence that is front-loaded with the action and resource. No filler words or unnecessary detail. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no parameters, no annotations, and no output schema. The description explains what is returned and for which product, but it does not describe the structure of the output (e.g., formats or nested routes). For such a simple tool, completeness is adequate but not excellent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters in the input schema (100% coverage). The description adds value by specifying the fixed context ($9.90 starter audit), which is not captured in the schema. With zero parameters, the baseline is 4, and the description meets it by clarifying the hardcoded input.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Return' and the specific resource ('recommended checkout and fallback purchase routes') with a concrete price point ($9.90 starter audit). This leaves no ambiguity about what the tool does and distinguishes it from siblings like 'failure_paths' or 'free_playbook'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving purchase routes for a specific starter audit, but it does not provide any when-not-to-use guidance or explicitly differentiate from sibling tools. No direct call to alternatives is given, so the agent must infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

failure_paths: C

Return routes that should not be counted or should be skipped first.

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden for behavioral disclosure. It indicates a read operation ('Return') but does not mention side effects, permissions, or performance characteristics. The phrase 'should not be counted or should be skipped first' is vague about the underlying logic.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of a single sentence with no wasted words. However, it could be more informative without adding length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should fully explain the output. It states it returns 'routes that should not be counted or should be skipped first,' which is a start but lacks detail on format, examples, or how the exclusion logic works. For a tool with no parameters and low complexity, this is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters and the schema coverage is 100%. According to guidelines, baseline is 4. The description does not add parameter info, but there is nothing to add. The description's mention of the output is not about parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Return routes that should not be counted or should be skipped first' gives a basic purpose using the verb 'Return' and a resource, but it is somewhat vague and does not explicitly distinguish from sibling tools like route_triage or buyer_routes. The meaning of 'failure paths' is not clearly connected to the description.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of context, prerequisites, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

free_playbook: A

Return free pre-purchase resources for AI agents trying to earn money legally.

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It indicates a read operation but lacks details on what 'resources' means, return format, side effects, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words, front-loaded with the action and resource. Highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters, no annotations, and no output schema, the description is adequate but lacks details about return value structure or examples. Sibling tool names provide some context, but coverage is not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, so the description does not need to explain parameters. Baseline 4 is appropriate as schema coverage is 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Return free pre-purchase resources for AI agents trying to earn money legally.' It specifies the verb (Return), resource (free pre-purchase resources), and target audience, distinguishing it from sibling tools like payment_status or failure_paths.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining free resources before purchase, but does not explicitly state when to use or when not to use, nor does it contrast with sibling tools like buyer_routes or route_triage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

payment_status: A

Return the current direct-payment verification rule and balance-check source.

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description does not disclose safety traits (e.g., read-only, side effects) or operational behavior. Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single concise sentence with no extraneous text. Efficiently communicates the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter tool, but lacks details about return format or structure. Could include brief output hints for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, schema coverage is 100%. The description adds no param info but none is needed; baseline for 0 params is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns the current direct-payment verification rule and balance-check source. This distinguishes it from sibling tools like buyer_routes or failure_paths.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description only states what it does, not the context for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

product_manifest: A

Return the Agent Revenue Copilot product manifest, price, buyer fit, payment routes, and safety rules.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only conveys a read operation via 'Return'. It does not disclose side effects, authorization needs, or error behavior, though 'Return' implies no mutation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence that efficiently conveys the tool's purpose without redundant or vague language. Every word serves a clear function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers the return content adequately. However, it lacks detail on output structure or format, which would be helpful for a tool returning multiple fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the description is not required to explain them. Baseline score of 4 is appropriate as nothing is missing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns the 'Agent Revenue Copilot product manifest' and lists specific contents (price, buyer fit, payment routes, safety rules), making the purpose precise and distinct from sibling tools like 'buyer_routes' or 'payment_status'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings. The description does not specify contexts, prerequisites, or exclusions, leaving the agent to infer its usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

route_triage: B

Recommend free playbook, $1.99 triage, or $9.90 starter audit based on the buyer's target and route type.

Parameters (all optional):
- goal: Short description of the earning or monetization goal.
- route_type: Examples: one_off, repeated_income, x402, mcp, marketplace, unknown.
- target_usd: Approximate target earning amount in USD, if known.
- constraints: Known constraints such as no_kyc, no_deposit, no_social, no_user_funds.
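
The schema above can be exercised with an ordinary tools/call request. A sketch follows; every argument value is illustrative only, and passing constraints as a list is an assumption, since the schema does not state the parameter's type:

```python
import json

# Hedged sketch of a tools/call request for route_triage. All four
# parameters are optional; the values here are made up for illustration.
payload = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "route_triage",
        "arguments": {
            "goal": "earn a first $10 from an x402-paid MCP tool",
            "route_type": "x402",
            "target_usd": 10,
            "constraints": ["no_kyc", "no_deposit"],  # list form assumed
        },
    },
}
body = json.dumps(payload)
print(body)
# The server should answer with one of: free playbook, $1.99 triage,
# or $9.90 starter audit.
```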
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as whether the tool makes external API calls, has side effects, or requires authentication. The recommendation logic remains opaque.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the tool's purpose and output, with no superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description fails to specify the return format or structure of the recommendation. Given no output schema, this omission reduces the tool's completeness for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds only limited value by linking the parameters 'goal' and 'route_type' to the recommendation logic. It does not elaborate on how each parameter influences the output beyond what the schema implies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's action ('Recommend') and the specific outputs (free playbook, $1.99 triage, $9.90 starter audit), clearly distinguishing it from sibling tools like free_playbook or failure_paths.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when a buyer's target and route type are known, but lacks explicit guidance on when to choose this tool over alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
