agentictrade
Server Details
AI service marketplace — agents discover, call, and pay for API services automatically.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: JudyaiLab/agent-commerce-framework
- GitHub Stars: 2
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging – every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control – enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics – see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 3.7/5 across all 5 tools.
Each tool has a clearly distinct purpose with no overlap: getting an agent, getting reputation, getting a service, listing categories, and searching services. The descriptions specify unique resources and actions, making misselection unlikely.
All tools follow a consistent 'marketplace_verb_noun' pattern (e.g., marketplace_get_agent, marketplace_search). This predictable naming scheme enhances readability and usability across the set.
With 5 tools, this server is well-scoped for a marketplace domain. Each tool earns its place by covering essential operations like retrieval, listing, and searching, without being overly sparse or bloated.
The tool set covers core marketplace operations such as retrieving agents, services, and reputations, listing categories, and searching. A minor gap exists in lacking update or creation tools (e.g., for agents or services), but agents can work around this for read-only use cases.
Available Tools
5 tools

marketplace_get_agent (Grade: B, Read-only)
Get an agent identity by agent ID.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | The unique agent ID. | |
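Since the server is reached over Streamable HTTP using the MCP protocol, a call to this tool is a JSON-RPC 2.0 `tools/call` request. The sketch below shows the shape of such a request; the agent ID value is hypothetical, chosen only for illustration.

```python
import json

# Hypothetical JSON-RPC 2.0 request an MCP client would send to invoke
# marketplace_get_agent. The agent_id value is made up for illustration;
# real IDs come from the marketplace.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "marketplace_get_agent",
        "arguments": {"agent_id": "agent-12345"},
    },
}

print(json.dumps(request, indent=2))
```

The server publishes no output schema, so the structure of the returned agent identity has to be inspected at runtime.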
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key traits: read-only, non-destructive, and closed-world. The description adds no behavioral context beyond this, such as error handling or response format. It doesn't contradict annotations, so it meets the baseline for tools with good annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with good annotations and full schema coverage, the description is minimally adequate. However, without an output schema, it lacks details on return values (e.g., agent data structure), leaving gaps in completeness for agent retrieval contexts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'agent_id' well-documented in the schema. The description implies retrieval by ID but adds no extra meaning, such as ID format or sourcing. Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('agent identity'), specifying it retrieves by agent ID. It's specific but doesn't differentiate from sibling tools like marketplace_get_reputation or marketplace_get_service, which follow similar patterns for different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, context for agent ID retrieval, or contrast with sibling tools like marketplace_search for broader queries, leaving usage decisions unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
marketplace_get_reputation (Grade: A, Read-only)
Get reputation records for an agent or service. Provide agent_id for agent reputation, service_id for service reputation.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Period filter, e.g. '2026-03' or 'all-time'. | all-time |
| agent_id | No | Agent ID to look up reputation for. | |
| service_id | No | Service ID to look up reputation for. | |
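The description says to provide agent_id for agent reputation and service_id for service reputation, which implies the two IDs are mutually exclusive, though this is not stated outright. A minimal client-side helper under that assumption might look like this (the validation is a caller-side choice, not documented server behavior):

```python
def build_reputation_call(agent_id=None, service_id=None, period="all-time"):
    """Build a tools/call payload for marketplace_get_reputation.

    Assumes exactly one of agent_id or service_id is supplied, which the
    tool description implies but does not state. period defaults to
    'all-time', matching the schema default.
    """
    if (agent_id is None) == (service_id is None):
        raise ValueError("Provide exactly one of agent_id or service_id")
    args = {"period": period}
    if agent_id is not None:
        args["agent_id"] = agent_id
    else:
        args["service_id"] = service_id
    return {"name": "marketplace_get_reputation", "arguments": args}
```

For example, `build_reputation_call(agent_id="agent-1", period="2026-03")` produces arguments scoped to one agent and one month, while passing both IDs (or neither) raises an error instead of sending an ambiguous request.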
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds no additional behavioral traits beyond this, such as rate limits, authentication needs, or data format details. It does not contradict annotations, but provides minimal extra context, meeting the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that are front-loaded with the main purpose and usage guidance. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema), annotations cover safety and scope, and the description clarifies parameter usage. However, it lacks details on return values (e.g., what reputation records include) and potential errors, which would be helpful since there's no output schema. It is mostly complete but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents parameters like period, agent_id, and service_id. The description adds marginal value by clarifying the purpose of agent_id and service_id ('for agent reputation' and 'for service reputation'), but does not provide syntax or format details beyond the schema. This aligns with the baseline score when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('reputation records'), specifies the target entities ('agent or service'), and distinguishes it from siblings like marketplace_get_agent or marketplace_get_service by focusing on reputation rather than general agent/service details. It provides specific differentiation beyond just the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool by specifying 'Provide agent_id for agent reputation, service_id for service reputation,' which helps the agent understand the parameter-based usage. However, it does not explicitly mention when not to use it or name alternatives (e.g., using marketplace_get_agent for non-reputation data), so it falls short of a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
marketplace_get_service (Grade: A, Read-only)
Get full details of a single marketplace service by its ID.
| Name | Required | Description | Default |
|---|---|---|---|
| service_id | Yes | The unique service ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and closed-world behavior. The description adds value by specifying that it retrieves 'full details' (implying comprehensive output) and operates on a 'single' item, which provides useful context beyond the annotations. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any redundant or unnecessary information. It is front-loaded and perfectly sized for its function, making it easy to understand at a glance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is complete enough for an AI agent to understand and invoke it. It covers the core purpose and parameter usage adequately, though it could benefit from more explicit usage guidelines or output details to reach a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'service_id' fully documented in the schema. The description adds minimal semantic context by reinforcing that the ID is used to identify the service, but does not provide additional details like format examples or constraints beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details') and resource ('a single marketplace service by its ID'), distinguishing it from siblings like marketplace_list_categories (list) and marketplace_search (search). It precisely communicates the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need detailed information about a specific service identified by ID, but it does not explicitly state when to use this tool versus alternatives like marketplace_search (for broader queries) or marketplace_get_agent (for different resource types). The context is clear but lacks explicit guidance on exclusions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
marketplace_list_categories (Grade: A, Read-only)
List all service categories with the number of active services in each.
This tool takes no parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds context about returning 'number of active services in each' category, which is useful behavioral detail not in annotations. However, it lacks information on response format, pagination, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It is front-loaded with the core purpose and includes essential detail about service counts. Every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, read-only, no output schema), the description is complete enough for basic use. It specifies what is listed and includes service counts, but lacks details on output structure or limitations, which would be helpful for an agent to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the empty input. The description adds no parameter-specific information, but this is acceptable as there are no parameters to explain. Baseline for 0 parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'all service categories', specifying the exact scope of what is returned. It distinguishes from sibling tools by focusing on categories with service counts rather than individual services, agents, reputation, or search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving category-level information, but does not explicitly state when to use this tool versus alternatives like marketplace_search for filtering or marketplace_get_service for specific services. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
marketplace_search (Grade: B, Read-only)
Search marketplace services by query string, category, or tags. Returns a list of matching active service listings.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Filter by one or more tags. | |
| limit | No | Maximum results to return (default 20, max 100). | |
| query | No | Free-text search term for service name or description. | |
| offset | No | Pagination offset (default 0). | |
| category | No | Filter by service category. | |
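The limit and offset parameters (default 20, max 100, offset default 0) suggest standard offset pagination. The sketch below pages through all results under that assumption; `call_tool` is a placeholder for whatever MCP client function actually issues the request, and since the server publishes no output schema, the response is assumed to be a list of listings.

```python
def search_pages(call_tool, query=None, category=None, tags=None, page_size=20):
    """Yield all marketplace_search results, one listing at a time.

    call_tool is a hypothetical client function taking (tool_name, arguments)
    and returning a list of service listings. page_size should respect the
    documented maximum of 100.
    """
    offset = 0
    while True:
        args = {"limit": page_size, "offset": offset}
        if query is not None:
            args["query"] = query
        if category is not None:
            args["category"] = category
        if tags is not None:
            args["tags"] = tags
        batch = call_tool("marketplace_search", args)
        if not batch:
            return
        yield from batch
        if len(batch) < page_size:
            return  # short page means no further results
        offset += page_size
```

A short page (fewer results than `limit`) is taken as the end of the result set; an API that can return exactly-full final pages would simply issue one extra empty request.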
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, so the agent knows this is a safe, non-destructive read operation with limited scope. The description adds value by specifying that it returns 'active service listings,' which provides context beyond annotations. However, it lacks details on rate limits, authentication needs, or error behaviors, keeping the score moderate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information: the action (search), parameters (query, category, tags), and outcome (list of active service listings). There is no wasted text, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with 5 parameters), annotations cover safety and scope, but there is no output schema. The description adequately explains the return type ('list of matching active service listings'), but it lacks details on response format, pagination behavior, or error handling. This is sufficient for basic use but has gaps for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all 5 parameters (e.g., 'Filter by one or more tags' for tags, 'Free-text search term' for query). The description adds minimal semantics by mentioning 'query string, category, or tags' but doesn't provide additional syntax or usage details beyond the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search marketplace services by query string, category, or tags. Returns a list of matching active service listings.' It specifies the verb (search), resource (marketplace services), and scope (active service listings). However, it doesn't explicitly differentiate from sibling tools like marketplace_get_service or marketplace_list_categories, which prevents a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like marketplace_get_service (for specific services) or marketplace_list_categories (for listing categories), nor does it specify prerequisites or exclusions. This leaves the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.