Agent Revenue Copilot
Server Details
MCP and x402 starter audit for legal AI agent revenue routes and payout verification.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Olddun/earn10-clawtasks-deliverables
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 6 of 6 tools scored. Lowest: 2.7/5.
Most tools have distinct purposes, but buyer_routes and route_triage both deal with recommending routes and could be confused. failure_paths also relates to routes, but its purpose, identifying routes to skip, is distinct.
All tool names follow a consistent lowercase_with_underscores pattern, using compound nouns (e.g., buyer_routes, payment_status, product_manifest). No mixing of conventions.
With 6 tools, the set is well-scoped for a revenue copilot domain. Each tool covers a specific aspect (routes, payment, product, triage) without being bloated.
The tool set covers key informational aspects like routes, payment status, and product manifest, but lacks action-oriented tools (e.g., execute purchase, update settings). Minor gap for a read-only service.
Available Tools
6 tools

buyer_routes (Grade A)
Return the recommended checkout and fallback purchase routes for a $9.90 starter audit.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry full behavioral disclosure. It only states 'Return', implying a read operation, but lacks details on side effects, authentication requirements, rate limits, or failure modes. Significant gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, well-formed sentence that is front-loaded with the action and resource. No filler words or unnecessary detail. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no parameters, no annotations, and no output schema. The description explains what is returned and for which product, but it does not describe the structure of the output (e.g., formats or nested routes). For such a simple tool, completeness is adequate but not excellent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters in the input schema (100% coverage). The description adds value by specifying the fixed context ($9.90 starter audit), which is not captured in the schema. With zero parameters, the baseline is 4, and the description meets it by clarifying the hardcoded input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Return' and the specific resource ('recommended checkout and fallback purchase routes') with a concrete price point ($9.90 starter audit). This leaves no ambiguity about what the tool does and distinguishes it from siblings like 'failure_paths' or 'free_playbook'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving purchase routes for a specific starter audit, but it does not provide any when-not-to-use guidance or explicitly differentiate from sibling tools. No direct call to alternatives is given, so the agent must infer context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
failure_paths (Grade C)
Return routes that should not be counted or should be skipped first.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. It indicates a read operation ('Return') but does not mention side effects, permissions, or performance characteristics. The phrase 'should not be counted or should be skipped first' is vague about the underlying logic.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, consisting of a single sentence with no wasted words. However, it could be more informative without adding length.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should fully explain the output. It states it returns 'routes that should not be counted or should be skipped first,' which is a start but lacks detail on format, examples, or how the exclusion logic works. For a tool with no parameters and low complexity, this is minimally adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and schema coverage is 100%. According to the guidelines, the baseline is 4. The description adds no parameter information, but there is nothing to add; what it does describe is the output, not the parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Return routes that should not be counted or should be skipped first' gives a basic purpose using the verb 'Return' and a resource, but it is somewhat vague and does not explicitly distinguish from sibling tools like route_triage or buyer_routes. The meaning of 'failure paths' is not clearly connected to the description.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of context, prerequisites, or when not to use it.
free_playbook (Grade A)
Return free pre-purchase resources for AI agents trying to earn money legally.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It indicates a read operation but lacks details on what 'resources' means, return format, side effects, or authentication requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no unnecessary words, front-loaded with the action and resource. Highly concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no annotations, and no output schema, the description is adequate but lacks details about the return value's structure or examples. Sibling tool names provide some context, but coverage is incomplete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, so the description does not need to explain parameters. Baseline 4 is appropriate as schema coverage is 100%.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Return free pre-purchase resources for AI agents trying to earn money legally.' It specifies the verb (Return), resource (free pre-purchase resources), and target audience, distinguishing it from sibling tools like payment_status or failure_paths.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining free resources before purchase, but does not explicitly state when to use or when not to use, nor does it contrast with sibling tools like buyer_routes or route_triage.
payment_status (Grade A)
Return the current direct-payment verification rule and balance-check source.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description does not disclose safety traits (e.g., read-only, side effects) or operational behavior. Minimal transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence with no extraneous text. Efficiently communicates the tool's function.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter tool, but lacks details about return format or structure. Could include brief output hints for completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, schema coverage is 100%. The description adds no param info but none is needed; baseline for 0 params is 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the current direct-payment verification rule and balance-check source. This distinguishes it from sibling tools like buyer_routes or failure_paths.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description only states what it does, not the context for invocation.
product_manifest (Grade A)
Return the Agent Revenue Copilot product manifest, price, buyer fit, payment routes, and safety rules.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only conveys a read operation via 'Return'. It does not disclose side effects, authorization needs, or error behavior, though 'Return' implies no mutation.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that efficiently conveys the tool's purpose without redundant or vague language. Every word serves a clear function.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers the return content adequately. However, it lacks detail on output structure or format, which would be helpful for a tool returning multiple fields.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description is not required to explain them. Baseline score of 4 is appropriate as nothing is missing.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the 'Agent Revenue Copilot product manifest' and lists specific contents (price, buyer fit, payment routes, safety rules), making the purpose precise and distinct from sibling tools like 'buyer_routes' or 'payment_status'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings. The description does not specify contexts, prerequisites, or exclusions, leaving the agent to infer its usage.
route_triage (Grade B)
Recommend free playbook, $1.99 triage, or $9.90 starter audit based on the buyer's target and route type.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | Short description of the earning or monetization goal. | |
| route_type | No | Examples: one_off, repeated_income, x402, mcp, marketplace, unknown. | |
| target_usd | No | Approximate target earning amount in USD, if known. | |
| constraints | No | Known constraints such as no_kyc, no_deposit, no_social, no_user_funds. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits such as whether the tool makes external API calls, has side effects, or requires authentication. The recommendation logic remains opaque.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently communicates the tool's purpose and output, with no superfluous information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description fails to specify the return format or structure of the recommendation. Given no output schema, this omission reduces the tool's completeness for an agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds only limited value by linking the parameters 'goal' and 'route_type' to the recommendation logic. It does not elaborate on how each parameter influences the output beyond what the schema implies.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's action ('Recommend') and the specific outputs (free playbook, $1.99 triage, $9.90 starter audit), clearly distinguishing it from sibling tools like free_playbook or failure_paths.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a buyer's target and route type are known, but lacks explicit guidance on when to choose this tool over alternatives or when not to use it.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
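Before waiting on Glama's crawler, you can sanity-check the file locally. A small sketch that validates the structure shown above; the validation rules are inferred from the example, not from an official schema:

```python
import json

def validate_glama_manifest(raw: str, expected_email: str) -> bool:
    """Check that a /.well-known/glama.json body matches the claimed shape
    and that a maintainer email matches the Glama account email.
    Rules are inferred from the example above, not an official spec."""
    data = json.loads(raw)
    maintainers = data.get("maintainers", [])
    return (
        data.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
        and any(m.get("email") == expected_email for m in maintainers)
    )
```

Fetch the deployed file (for example with curl) and run it through this check with your account email before expecting verification to succeed.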
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.