Agentic Platform
Server Details
Free MCP tools: the only MCP linter, health checks, cost estimation, and trust evaluation.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: andysalvo/agentic-platform
- GitHub Stars: 1
- Server Listing: agentic-platform
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
9 tools
agent_health_check (Read-only, Idempotent)
Score any agent's system prompt on governance best practices from 0 to 100. Returns a detailed diagnostic report with specific issues found, severity ratings, and actionable fixes. Checks for authority leaks, silent inference, missing audit trails, and 14 other governance anti-patterns. No API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| system_prompt | Yes | Your agent's full system prompt or configuration text to analyze | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
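As a sketch of how an agent might invoke this tool over the server's Streamable HTTP transport, the following builds a standard MCP `tools/call` request (JSON-RPC 2.0 framing). The sample prompt text is purely illustrative:

```python
import json

# Hypothetical system prompt to score; any agent configuration text works.
system_prompt = "You are a billing agent. Always log every action you take."

# MCP "tools/call" request for agent_health_check.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "agent_health_check",
        "arguments": {"system_prompt": system_prompt},
    },
}

print(json.dumps(request, indent=2))
```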
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations covering safety (readOnlyHint=true, destructiveHint=false), the description adds valuable operational context: output format details ('detailed diagnostic report with severity ratings'), specific check categories, and, notably, 'No API key needed', which signals an operational requirement not present in the structured metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose/scope first, return value second, specific checks and operational notes third. Every sentence provides distinct value with zero redundancy or boilerplate. Appropriate length for the tool's moderate complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (per context signals) and comprehensive annotations, the description provides excellent completeness by detailing what the governance check entails, return report characteristics, and authentication requirements. Fully sufficient for invocation decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the system_prompt parameter. The description mentions 'any agent's system prompt' which aligns with but doesn't significantly expand beyond the schema's definition. Baseline 3 is appropriate when the schema carries the full semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Score'), clear resource ('agent's system prompt'), and scope ('governance best practices from 0 to 100'). It clearly distinguishes from siblings like evaluate_service and get_skill by focusing specifically on governance analysis rather than general evaluation or skill retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it lacks explicit 'use X instead' statements, it provides strong implicit guidance through specific examples of what it checks for ('authority leaks, silent inference, missing audit trails, and 14 other governance anti-patterns'). This specificity helps agents determine appropriateness compared to the more general evaluate_service sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
buy_credits
Get a Stripe checkout link to purchase more skill file credits. Returns a URL for your human operator to complete payment. Credits are added automatically after purchase.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Credit tier: '50' for $5/50 credits or '250' for $20/250 credits | 50 |
| api_key | Yes | Your API key to add credits to | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
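The tier pricing in the parameter description implies different per-credit rates; a quick sanity check of the two documented tiers:

```python
# Tier pricing from the schema: '50' = $5 for 50 credits, '250' = $20 for 250.
tiers = {"50": (5.00, 50), "250": (20.00, 250)}

for tier, (price_usd, credits) in tiers.items():
    per_credit = price_usd / credits
    print(f"tier {tier}: ${per_credit:.2f} per credit")
```

The larger tier works out cheaper per credit ($0.08 vs $0.10).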
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial behavioral context beyond annotations: explains the return type (URL for human), the external dependency (Stripe), and the async fulfillment model ('Credits are added automatically after purchase'). Annotations indicate mutation (readOnly=false) but description clarifies the exact flow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: (1) action/purpose, (2) output/audience, (3) post-condition. No redundancy; the Stripe mechanism is stated upfront.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given the financial complexity: covers payment initiation, human authorization requirement, and automatic fulfillment. Presence of output schema means return values don't need expansion; description adequately explains what the URL is for.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both parameters including pricing details ('$5/50 credits'). Description mentions neither parameter, but baseline 3 is appropriate when structured documentation is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity with specific verb 'Get', resource 'Stripe checkout link', and scope 'skill file credits'. Clearly distinguishes from siblings like check_usage or get_skill by focusing specifically on purchasing/payment initiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly guides usage by specifying 'for your human operator to complete payment', indicating this requires human-in-the-loop authorization and is not for automated purchasing. Lacks explicit 'when to use vs check_usage' contrast, but the human operator constraint provides strong contextual guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_usage (Read-only, Idempotent)
Check your current usage stats including total calls, remaining credits, and free calls left today.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | Your API key to check usage for | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly/idempotent), so description appropriately focuses on return value semantics. Lists specific metrics returned (total calls, remaining credits, free calls today), which adds behavioral context beyond annotations. Could mention if data is real-time or cached, but no contradictions present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action, zero waste. 'including' clause efficiently enumerates return values without verbosity. Appropriate density for a simple read tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully adequate for tool complexity: 1 parameter with complete schema, comprehensive annotations covering hints, and existence of output schema means description need not elaborate return structure. Description sensibly lists specific metrics returned, providing sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single api_key parameter, which is sufficiently documented in the schema ('Your API key to check usage for'). Description does not mention the parameter, but with complete schema coverage, baseline 3 is appropriate—no compensation needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Check' + resource 'usage stats' + exact scope (total calls, remaining credits, free calls left today). Clearly distinguishes from siblings like buy_credits (purchase action) and estimate_agent_cost (projection vs current state).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States what the tool does but lacks explicit when/when-not guidance. No mention of workflow context (e.g., 'call this before buy_credits to verify balance') or prerequisites despite having relevant siblings that likely follow this in typical workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_agent_cost (Read-only, Idempotent)
Compare the cost of running an agent task across all major AI models including Claude, GPT, Gemini, Llama, and Mistral. Returns a comparison table with per-call, per-run, and per-day costs plus optimization tips. No API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | Model name to highlight in comparison (e.g. 'claude-sonnet-4', 'gpt-4o') | |
| num_calls | No | Number of API calls per task run | |
| input_tokens | No | Estimated input tokens per API call | |
| output_tokens | No | Estimated output tokens per API call | |
| task_description | No | Description of what the agent does, for optimization tips | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
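The arithmetic behind a per-call, per-run, and per-day estimate can be sketched locally. The per-million-token prices below are illustrative assumptions, not the live pricing the tool uses:

```python
# Illustrative (input, output) prices per million tokens -- assumptions only.
PRICES = {"claude-sonnet-4": (3.00, 15.00), "gpt-4o": (2.50, 10.00)}

def estimate(model, num_calls, input_tokens, output_tokens, runs_per_day=1):
    """Rough cost per call, per task run, and per day."""
    in_price, out_price = PRICES[model]
    per_call = (input_tokens * in_price + output_tokens * out_price) / 1_000_000
    per_run = per_call * num_calls
    return {"per_call": per_call, "per_run": per_run, "per_day": per_run * runs_per_day}

costs = estimate("claude-sonnet-4", num_calls=10, input_tokens=2000, output_tokens=500)
print(costs)
```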
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
It adds crucial behavioral information not present in the annotations: 'No API key needed' clarifies authentication requirements. It also describes the return structure ('Returns a comparison table...'), which complements the existing output schema. It aligns with the readOnlyHint and destructiveHint annotations by implying a safe calculation operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The three-sentence structure is optimally front-loaded: purpose (Sentence 1), output details (Sentence 2), and constraints (Sentence 3). There is zero redundancy; every clause adds distinct value regarding functionality, return format, or execution requirements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich annotations (safety hints), full schema coverage, and presence of an output schema, the description efficiently covers the remaining gaps by specifying the scope of models compared and the zero-auth requirement. It is complete for a cost estimation utility, though noting the data source or freshness of pricing could elevate it to a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields already define the parameters' purposes (e.g., 'Model name to highlight', 'Description... for optimization tips'). The description implies the relationship between 'task_description' and 'optimization tips' but does not significantly expand on parameter semantics or interdependencies beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Compare' with the resource 'cost of running an agent task' and explicitly lists the supported model families (Claude, GPT, Gemini, Llama, Mistral). It clearly distinguishes this estimation tool from siblings like 'buy_credits' (purchasing) and 'check_usage' (actuals tracking) by focusing on pre-run cost projection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear context on prerequisites ('No API key needed') and deliverables ('comparison table', 'optimization tips'), which implicitly guides usage for pre-execution budgeting. However, it lacks explicit guidance on when to use this versus the sibling 'check_usage' tool (e.g., estimation vs. historical analysis).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evaluate_service (Read-only, Idempotent)
Evaluate any MCP service for trustworthiness before spending money on it. Connects to the target server, checks reachability, governance declarations, tool definition quality, and audit endpoints. Returns a trust score from 0 to 100 with a recommendation: PROCEED, PROCEED WITH CAUTION, HIGH RISK, or DO NOT TRANSACT. No API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| server_url | Yes | The MCP server URL to evaluate (e.g. https://example.com/mcp) | |
| task_context | No | What you need the service for, to tailor the evaluation | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
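The four recommendation strings suggest score bands on the 0 to 100 scale. The cutoffs below are assumptions for illustration only; the listing does not document the server's actual thresholds:

```python
# Map a trust score to one of the tool's four recommendation strings.
# Threshold values are illustrative assumptions, not documented behavior.
def recommend(score: int) -> str:
    if score >= 80:
        return "PROCEED"
    if score >= 60:
        return "PROCEED WITH CAUTION"
    if score >= 40:
        return "HIGH RISK"
    return "DO NOT TRANSACT"

print(recommend(85), recommend(25))
```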
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Rich disclosure beyond annotations: specifies exactly what gets checked (reachability, governance declarations, tool definition quality, audit endpoints), details output format (trust score 0-100 with specific recommendation strings), and clarifies auth requirements. No contradictions with readOnlyHint/openWorldHint annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences each earning their place: (1) Purpose + value proposition, (2) Mechanism/behavior specifics, (3) Output specification + auth note. No redundancy or filler; information density is high with logical flow from intent to implementation to result.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given rich structured data (100% schema coverage, detailed annotations, output schema exists), the description provides appropriate narrative context. It covers purpose, behavioral traits, and output semantics without duplicating formal schema specifications. The combination is complete for an external-evaluation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with full descriptions for server_url and task_context. The description does not add parameter-specific semantics (e.g., URL format examples, task_context content guidance) beyond what the schema already provides, which is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Evaluate' with resource 'MCP service' and defines scope (trustworthiness). It clearly distinguishes from siblings like buy_credits, check_usage, and agent_health_check by focusing on external service verification rather than internal operations or credit management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear temporal context 'before spending money on it' establishing when to invoke. Also notes 'No API key needed' which removes a common prerequisite barrier. However, it lacks explicit 'when-not' guidance or named sibling alternatives (e.g., don't use this for already-registered services, use check_usage instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_skill (Read-only, Idempotent)
Retrieve an expert skill file that makes you measurably better at a specific task. Each skill has auditable provenance and is curated by domain experts. Requires a valid API key.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | Your API key from register() | |
| skill_name | Yes | The skill ID to retrieve. Options: governance, agentic-economics, intent-architecture | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already disclose readOnly/idempotent/destructive properties. Description adds valuable data-quality context ('auditable provenance', 'curated by domain experts') but does not elaborate on caching, side effects, or return structure (mitigated by presence of output schema).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose front-loaded, value proposition second, prerequisite third. No tautology or redundant restatement of schema structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Well-suited to its complexity: rich annotations cover safety hints, output schema covers return values, and description covers domain-specific context (provenance, curation). Minor gap in explicit sibling differentiation prevents a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description reinforces the api_key requirement but does not add semantic depth beyond schema documentation (e.g., no format constraints, no dependency explanation beyond 'valid').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Retrieve') and resource ('expert skill file') with specific value proposition ('makes you measurably better'). Distinguishes implicitly from sibling list_skills by focusing on single-file retrieval, though explicit differentiation is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States prerequisite ('Requires a valid API key') but lacks explicit when-to-use guidance versus sibling list_skills or sequencing relative to register(). Usage pattern must be inferred from parameter requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_skills (Read-only, Idempotent)
List all available expert skill files with descriptions and pricing. Shows your usage stats if you provide an API key.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Your API key (optional, shows usage if provided) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, idempotent, and non-destructive nature. The description adds valuable behavioral context: it specifies what data is returned (skill files with descriptions/pricing) and clarifies the conditional display of usage statistics, which is not indicated in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero waste. Front-loaded with primary purpose, second sentence covers conditional parameter behavior. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for the tool's complexity. Given rich annotations (4 hints) and the presence of an output schema, the description successfully covers the essential domain context: what is being listed and the dual-purpose nature (catalog + optional usage view). Does not need to elaborate return values since output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully documents the api_key parameter including its optional nature and purpose. The description reinforces this with 'if you provide an API key' but does not add additional semantic context (e.g., key format, acquisition method) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('expert skill files') with specific output contents ('descriptions and pricing'). The scope is defined by 'all available'. Implied distinction from sibling 'get_skill' (list vs. get single), but does not explicitly differentiate from 'check_usage' despite both touching on usage stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides conditional guidance for the optional parameter ('Shows your usage stats if you provide an API key'), implying when to populate the api_key field. However, lacks explicit guidance on when to choose this tool over siblings like 'get_skill' or 'check_usage' for specific use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcp_manifest_lint (Read-only, Idempotent)
The only MCP tool definition linter that exists. Validates your MCP tool definitions for anti-patterns, missing descriptions, bad parameter schemas, naming issues, and quality problems. Returns a pass/fail report with specific fixes for each issue found. No API key needed.
| Name | Required | Description | Default |
|---|---|---|---|
| tools_json | Yes | Your MCP tool definitions as a JSON array or single tool object | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
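A minimal local approximation of the checks the description names (missing tool descriptions, undocumented parameters) can illustrate what the linter looks for; the real tool covers many more anti-patterns:

```python
# Sketch of two of the lint checks: missing tool description and
# undocumented parameters. Not the actual mcp_manifest_lint implementation.
def lint(tools: list[dict]) -> list[str]:
    issues = []
    for tool in tools:
        name = tool.get("name", "<unnamed>")
        if not tool.get("description"):
            issues.append(f"{name}: missing description")
        props = tool.get("inputSchema", {}).get("properties", {})
        for param, spec in props.items():
            if not spec.get("description"):
                issues.append(f"{name}: parameter '{param}' undocumented")
    return issues

tools_json = [{"name": "demo", "inputSchema": {"properties": {"q": {"type": "string"}}}}]
print(lint(tools_json))
```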
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable behavioral details: output format (pass/fail report with specific fixes) and authentication requirements (no API key), neither of which are in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero repetition or redundancy. Front-loaded with specific functionality. First sentence has minor marketing fluff ('only... that exists') but remains brief. Information density is high regarding validation scope and output behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for tool complexity: single required parameter with full schema coverage, rich annotations provided, and output schema exists. Description adequately covers return behavior at high level ('pass/fail report') without duplicating output schema details, and addresses auth needs not covered elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single parameter, establishing baseline 3. Description references 'MCP tool definitions' which aligns with the parameter, but adds no additional semantics about expected JSON format or examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies exact verb (validates), resource (MCP tool definitions), and scope (anti-patterns, missing descriptions, naming issues, quality problems). Clearly distinguishes from siblings like evaluate_service or agent_health_check by focusing specifically on static linting of tool definitions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through output description ('Returns a pass/fail report') and practical context ('No API key needed'), but lacks explicit when-to-use guidance or named alternatives from the sibling list (e.g., when to choose this over evaluate_service).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register
Register for an API key to access expert skill files. Free tier includes 10 skill retrievals per day. No payment required.
No parameters
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
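The schemas imply a two-step flow: call register() first (it takes no parameters), then pass the returned key to get_skill. A sketch of the two `tools/call` payloads, with a placeholder where the real key would go:

```python
import json

# Sequencing implied by the schemas; the skill name comes from the
# documented options list (governance, agentic-economics, intent-architecture).
calls = [
    {"name": "register", "arguments": {}},
    {"name": "get_skill",
     "arguments": {"api_key": "<key from register()>", "skill_name": "governance"}},
]
for i, call in enumerate(calls, start=1):
    print(json.dumps({"jsonrpc": "2.0", "id": i, "method": "tools/call", "params": call}))
```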
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations declaring readOnlyHint=false and destructiveHint=false, the description adds valuable behavioral constraints: rate limits (10/day) and payment requirements (none). Discloses key operational limits not present in structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: purpose statement first, followed by rate limits, followed by payment terms. No redundancy, no filler. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and existence of output schema, description successfully covers operational essentials: what the tool does, access limits, and cost structure. Sufficient for an agent to understand the registration flow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has zero parameters, triggering the baseline score of 4 per evaluation rules. Description appropriately focuses on operational behavior rather than non-existent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb (Register) + resource (API key) + purpose (access expert skill files). Distinguishes from siblings like get_skill (which retrieves skills) and buy_credits (which is for paid usage).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage context: free tier includes 10 skill retrievals per day and no payment is required. Implicitly positions this as the entry point/free alternative to buy_credits. Could be strengthened by explicitly stating this is a prerequisite for get_skill.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
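Before publishing, the structure above can be sanity-checked locally. This sketch only mirrors the fields shown here and is not Glama's official validator:

```python
import json

# Example payload matching the documented /.well-known/glama.json structure.
manifest = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

def is_valid(doc: dict) -> bool:
    # Require at least one maintainer, each with an email field.
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

print(is_valid(json.loads(json.dumps(manifest))))
```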
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.