loaditout-mcp-server
Official Server Quality Checklist
- Disambiguation: 4/5
Tools are generally well-distinguished by specific actions (install_skill vs install_batch vs install_pack, search_skills vs get_skill). However, the three installation variants and permission-related tools (request_permission, check_permission, validate_action) require careful reading to distinguish their specific contexts, which could cause minor hesitation in agent selection.
- Naming Consistency: 4/5
Strong adherence to snake_case verb_noun convention throughout (e.g., search_skills, save_memory, flag_skill). Minor inconsistencies include 'share_loadout' using domain jargon instead of 'share_profile', and mixing singular/plural nouns (list_categories vs list_my_proofs vs get_skill), though these follow logical REST-like conventions.
- Tool Count: 4/5
Nineteen tools is borderline heavy but justified for a comprehensive skill marketplace covering discovery (4), installation (3), permissions (2), memory (2), trust/proofs (3), safety (2), and profile management (2). Each tool serves a distinct purpose without redundancy, though the surface approaches the upper limit of what an agent can easily navigate.
- Completeness: 4/5
Covers the full skill lifecycle well: discovery via search/categories/gap analysis, validation, permission workflows, installation methods, usage reporting, and trust verification. Minor gaps include no explicit 'uninstall' or 'list_installed_skills' (though share_loadout partially covers the latter), and no category-specific browsing beyond the top-level list.
Average 4.5/5 across 19 of 19 tools scored. Lowest: 3.9/5.
See the tool scores section below for per-tool breakdowns.
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v0.2.7
- Tools from this server were used 2 times in the last 30 days.
- This repository includes a glama.json configuration file.
- This server provides 21 tools.
- No known security issues or vulnerabilities reported.
- This server has been verified by its author.
Tool Scores
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return value ('Returns a confirmation') and side effects ('Reviews help other agents and humans decide'), but omits idempotency rules, update/delete capabilities, or rate limiting. Adequate but not comprehensive behavioral coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences are front-loaded with purpose, return value, and usage constraints. Each sentence earns its place, though the rating scale details in the final sentence partially duplicate the schema. Structure flows logically from what it does to when to use it.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately compensates by explaining the return value ('confirmation that the review was recorded') and providing usage context. It covers the essential gaps left by the structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description repeats the rating scale semantics already present in the schema ('1 (unusable) to 5 (excellent)') without adding significant new semantic context for parameters beyond what's in the property descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Leave') with clear resource ('rating and optional comment for a skill') and scope. It distinguishes from siblings like flag_skill and report_skill_usage by focusing on post-use experience sharing rather than bug reporting or telemetry.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance ('Use this after using a skill') and ethical constraint ('Do not review skills you have not actually used'). Lacks explicit naming of alternatives (e.g., flag_skill for reporting issues), but the when/when-not guidance is clear and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
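The Parameters critique above (the description repeating the rating-scale text already present in the schema) can be illustrated with a hypothetical tool definition. The tool name, field names, and wording below are assumptions for illustration, not taken from the actual server:

```python
# Hypothetical sketch of a review-style tool definition. The rating-scale
# text appears in both the description and the schema, which is exactly
# the duplication the Parameters score flags.
review_tool = {
    "name": "review_skill",  # assumed name, not confirmed by the server
    "description": (
        "Leave a rating and optional comment for a skill. "
        "Returns a confirmation. Use this after using a skill. "
        "Rate from 1 (unusable) to 5 (excellent)."  # repeats the schema below
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "slug": {"type": "string", "description": "Skill slug to review"},
            "rating": {
                "type": "integer",
                "minimum": 1,
                "maximum": 5,
                "description": "1 (unusable) to 5 (excellent)",
            },
            "comment": {"type": "string", "description": "Optional free-text comment"},
        },
        "required": ["slug", "rating"],
    },
}

# Both copies of the scale are present; the description adds no parameter
# semantics beyond what the schema already encodes.
assert "1 (unusable)" in review_tool["description"]
assert "1 (unusable)" in review_tool["inputSchema"]["properties"]["rating"]["description"]
```

Per the critique, those duplicated tokens could instead document idempotency or update/delete behavior.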
- Behavior: 4/5
With no annotations, the description carries the full burden and succeeds well: it discloses 'permanently recorded' persistence, side effects ('contributes to the skill's quality score', 'build your agent's trust score'), and return value structure ('JSON object with proof_id...'). Missing rate limits or auth requirements.
- Conciseness: 5/5
Six sentences with zero waste: front-loaded with core action, followed by return values, side effects, positive usage guideline, negative usage guideline, and parameter requirements. Every sentence earns its place with no redundancy.
- Completeness: 4/5
Compensates well for lack of output schema by describing return values (proof_id, verify_url, shareable_text) and compensates for lack of annotations by describing behavioral impacts. Could improve by clarifying relationship to verify_proof or list_my_proofs siblings.
- Parameters: 3/5
Schema coverage is 100%, establishing a baseline of 3. The description mentions 'Requires the skill slug and a status' but adds no semantic meaning beyond the schema's detailed descriptions and examples for the slug, status enum values, and error_message conditional requirements.
- Purpose: 5/5
The description opens with 'Report the outcome of using a skill, generating a verifiable execution proof,' providing a specific verb (report), resource (skill outcome), and distinguishing it from siblings like verify_proof (which verifies existing proofs) or flag_skill (which likely flags policy issues).
- Usage Guidelines: 4/5
Provides explicit temporal constraints: 'Use this after every skill invocation' and the negative constraint 'Do not call this before actually using the skill.' However, it lacks differentiation from sibling flag_skill regarding when to report outcomes versus flagging policy violations.
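The conditional requirement the Parameters note mentions (error_message tied to a failure status) could be enforced client-side along these lines. The argument names, the failure-status value, and the call_tool helper are assumptions; only the tool name report_skill_usage and the returned fields come from the review above:

```python
def report_outcome(call_tool, slug, status, error_message=None):
    # Sketch of a guard for report_skill_usage: the review notes the schema
    # makes error_message conditionally required, so fail fast before calling.
    if status == "failure" and not error_message:
        raise ValueError("error_message is required when status is 'failure'")
    args = {"skill_slug": slug, "status": status}
    if error_message is not None:
        args["error_message"] = error_message
    # call_tool stands in for whatever MCP client invocation is in use; the
    # server is described as returning proof_id, verify_url, shareable_text.
    return call_tool("report_skill_usage", args)

# Minimal fake client to exercise the guard.
fake = lambda name, args: {"proof_id": "p1", "verify_url": "https://example.invalid/p1"}
assert report_outcome(fake, "web-scraper", "success")["proof_id"] == "p1"
try:
    report_outcome(fake, "web-scraper", "failure")
except ValueError:
    pass
else:
    raise AssertionError("missing error_message should be rejected")
```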
- Behavior: 4/5
With no annotations provided, the description carries the full burden and compensates well by detailing the return structure (JSON array with specific fields: slug, name, description, skill_count, tags). It also enumerates the complete set of 10 categories, giving the agent precise expectations about the data volume and content.
- Conciseness: 4/5
Information is front-loaded with purpose and usage. The enumeration of all 10 categories is verbose but earns its place by providing the complete enum set. Only the final sentence ('No parameters required') is somewhat redundant given the schema, but overall structure is logical.
- Completeness: 5/5
For a zero-parameter tool with no output schema and no annotations, the description is comprehensive. It covers purpose, usage context, return format, and specific data values, leaving no critical gaps for agent decision-making.
- Parameters: 4/5
The input schema has zero parameters, which sets a baseline of 4. The description confirms this with 'No parameters required', providing clarity without redundancy.
- Purpose: 5/5
The description opens with the specific verb 'List' and resource 'skill categories in the Loaditout registry', and quantifies the scope ('all 10'). It clearly distinguishes from siblings like 'search_skills' by emphasizing browsing versus querying.
- Usage Guidelines: 4/5
Provides explicit guidance on when to use: 'when they do not have a specific search query'. This implicitly defines when NOT to use it (specific queries) but stops short of explicitly naming 'search_skills' as the alternative.
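A hypothetical example of the return shape this tool's description promises. The field names (slug, name, description, skill_count, tags) come from the review; the category values and counts below are invented:

```python
# Invented sample of a list_categories result; the real server is described
# as enumerating 10 fixed categories so an agent can browse without a query.
categories = [
    {"slug": "automation", "name": "Automation",
     "description": "Workflow and scripting skills", "skill_count": 12,
     "tags": ["workflow", "scripting"]},
    {"slug": "data", "name": "Data",
     "description": "Data access and processing skills", "skill_count": 8,
     "tags": ["database", "etl"]},
]

# Browsing path: no parameters, just iterate the returned array.
slugs = [c["slug"] for c in categories]
assert slugs == ["automation", "data"]
```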
- Behavior: 4/5
With no annotations provided, the description carries the full burden. It discloses the return format ('JSON object with request_id and status pending') since no output schema exists, and explains the asynchronous workflow ('Check the request status later'). It lacks explicit mention of error states or timeouts, preventing a 5.
- Conciseness: 5/5
The description is front-loaded with purpose, followed by return values, usage criteria, workflow guidance, and exclusions. Every sentence provides distinct value with no redundancy or filler.
- Completeness: 4/5
Given the lack of annotations and output schema, the description adequately compensates by explaining return values and the human-in-the-loop workflow. It covers security grading logic and permission types. Minor gap regarding error handling or denial scenarios prevents a 5.
- Parameters: 3/5
Schema coverage is 100%, establishing a baseline of 3. The description does not add significant semantic meaning beyond the schema definitions (e.g., it doesn't explain the owner/repo format syntax or provide additional context for the reason field beyond the schema's example). It meets but does not exceed the baseline.
- Purpose: 5/5
The description opens with the specific action ('Request explicit permission') and resource ('from the human owner') in the context of 'before installing a skill.' It clearly distinguishes itself from siblings by explicitly naming check_permission for status checks and contrasting with direct installation workflows.
- Usage Guidelines: 5/5
Provides explicit positive criteria ('Use this for skills with security grade C or F, high risk_level...') and negative constraints ('Do not use this for A-graded skills unless...'). It explicitly names the sibling tool check_permission as the alternative for checking request status later.
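The asynchronous, human-in-the-loop flow described above could look like this from the agent side. The tool names request_permission and check_permission and the request_id/status fields come from the review; the argument names and the poll helper are assumptions:

```python
def request_and_wait(call_tool, poll, skill_slug, reason):
    # Sketch: request_permission returns request_id and status "pending";
    # the agent is told to check the request status later rather than block
    # on an immediate answer from the human owner.
    resp = call_tool("request_permission", {"skill_slug": skill_slug, "reason": reason})
    request_id, status = resp["request_id"], resp["status"]
    while status == "pending":
        status = poll(request_id)  # e.g. a deferred check_permission call
    return status

# Fake client: the human approves on the second poll.
answers = iter(["pending", "approved"])
status = request_and_wait(
    lambda name, args: {"request_id": "r1", "status": "pending"},
    lambda rid: next(answers),
    "shell-exec", "Needed for the current task",
)
assert status == "approved"
```

In practice the poll step would be spaced out or event-driven; the loop here only models the state machine.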
- Behavior: 4/5
With no annotations provided, description carries full burden and discloses key traits: persistence ('survives across sessions'), return value ('Returns a confirmation'), and security constraints ('Do not store sensitive data'). Minor gap on overwrite behavior/idempotency or size limits prevents a 5.
- Conciseness: 5/5
Six sentences total, each earning its place: purpose, return value, use cases, invocation timing, security constraint, and sibling cross-reference. Front-loaded with core action and no redundant words.
- Completeness: 4/5
Comprehensive for a 3-parameter tool lacking annotations and output schema. Compensates for missing output schema by describing return value ('confirmation with the stored key'). Security warning and persistence guarantee provide necessary operational context. Would benefit from explicit mention of key overwrite behavior.
- Parameters: 3/5
Schema coverage is 100% with detailed descriptions and examples for all three parameters. Description provides usage context mapping to parameter values (e.g., 'installed skills' maps to type='install') but does not add technical semantics beyond what the schema already provides. Baseline 3 is appropriate.
- Purpose: 5/5
Opens with specific verb 'Save' and clear resource 'key-value pair to persistent agent memory'. Explicitly distinguishes from sibling 'recall_memory' in final sentence, clearly delineating the write vs read relationship.
- Usage Guidelines: 5/5
Provides explicit when-to-use ('Call this proactively whenever you learn something worth remembering'), concrete examples ('installed skills, user preferences'), when-NOT-to-use constraint ('Do not store sensitive data'), and names the alternative tool ('Retrieve saved memories with recall_memory').
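The write/read pairing the review praises can be sketched with an in-memory stand-in. The parameter names key, value, and type are assumptions based on the three parameters the review mentions; only the tool names save_memory and recall_memory come from the server:

```python
# In-memory stand-in for the persistent store behind save_memory /
# recall_memory. The real store survives across sessions; this sketch
# only models the API shape. Note the review's open question: overwrite
# behavior is undocumented, and here a second save silently replaces.
_store = {}

def save_memory(key, value, type="general"):
    _store[key] = {"value": value, "type": type}
    return {"status": "saved", "key": key}  # "confirmation with the stored key"

def recall_memory(key):
    entry = _store.get(key)
    return None if entry is None else entry["value"]

assert save_memory("preferred_scraper", "web-scraper", type="install")["key"] == "preferred_scraper"
assert recall_memory("preferred_scraper") == "web-scraper"
assert recall_memory("missing") is None
```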
- Behavior: 4/5
With no annotations provided, the description carries the full burden and compensates well by detailing the exact JSON return structure (agent_key, agent_type, trust_score range, etc.) since no output schema exists. The term 'Get' implies read-only behavior, though it could explicitly state this is non-destructive or idempotent for maximum clarity.
- Conciseness: 5/5
Four sentences efficiently structured: purpose (sentence 1), return value specification (sentence 2), usage context (sentence 3), and parameter note (sentence 4). Every sentence adds distinct value, with critical information front-loaded and no redundant filler text.
- Completeness: 4/5
Given the absence of both annotations and output schema, the description provides excellent compensation by documenting all return fields and their types. For a parameterless retrieval tool, this is sufficiently complete, though mentioning cache behavior or privacy scope of the 'public profile' would push it to 5.
- Parameters: 4/5
The input schema contains zero parameters, which per guidelines establishes a baseline of 4. The description appropriately confirms this with 'No parameters required,' eliminating any ambiguity about whether the empty schema is intentional or incomplete.
- Purpose: 5/5
The description uses a specific verb ('Get') and resource ('agent's public profile') and enumerates exactly what data is retrieved (skills, trust score, usage stats, URL). It clearly distinguishes from sibling tools like set_profile (which likely writes) and install_skill (which modifies state) by focusing on retrieval and display of current configuration.
- Usage Guidelines: 4/5
Explicitly states two use cases: 'display the agent's current capabilities to the user' and 'share your configuration with other agents.' Also notes 'No parameters required,' aiding invocation. Lacks explicit 'when not to use' guidance (e.g., vs. get_skill for specific skill details), but the primary use cases are clearly defined.
- Behavior: 4/5
With no annotations provided, the description carries the full disclosure burden. It effectively documents what the tool inspects (security grade, manifest, injection patterns, recency) and the return structure (safe_to_proceed boolean, risk_level, warnings array). Lacks explicit statement on whether the tool itself is read-only, though implied by 'pre-flight check' context.
- Conciseness: 5/5
Five sentences each serving distinct purposes: (1) core definition, (2) return value structure, (3) inspection criteria, (4) usage timing, (5) mandatory conditions. No redundant words or tautologies. Information is front-loaded with the essential purpose in the first sentence.
- Completeness: 4/5
Given the absence of an output schema, the description compensates by enumerating the return fields (safe_to_proceed, risk_level, security_grade, warnings, verified status). The 100% input schema coverage and clear behavioral description provide sufficient context for a safety-validation tool, though it could note rate limits or caching behavior.
- Parameters: 3/5
Schema coverage is 100%, establishing a baseline of 3. The description references the concepts (skill slug, action, parameters) but does not add syntax details, format constraints, or examples beyond what the schema already provides. The schema descriptions are comprehensive enough that additional parameter semantics aren't necessary.
- Purpose: 5/5
The description opens with a specific verb phrase ('Pre-flight safety check') and clearly identifies the resource (actions on skills). It distinguishes itself from siblings like 'check_permission' by emphasizing comprehensive safety validation including 'security grade, safety manifest, parameter injection patterns' rather than just permission status.
- Usage Guidelines: 5/5
Explicitly states when to use ('before calling any skill action that could have side effects') with concrete examples ('writes, deletes, network requests'). Also provides mandatory exclusion criteria ('Do not skip this step for skills with security grade C or F'), creating clear decision boundaries.
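The pre-flight pattern the Usage Guidelines describe (validate before any side-effecting action, never skip for C/F grades) could be enforced like this. The return fields safe_to_proceed, risk_level, and warnings are taken from the review; the argument names and call_tool helper are assumptions:

```python
def guarded_invoke(call_tool, skill_slug, action, params):
    # Sketch: run validate_action first and honour its verdict before
    # performing any write/delete/network action through a skill.
    check = call_tool("validate_action", {
        "skill_slug": skill_slug, "action": action, "parameters": params,
    })
    if not check["safe_to_proceed"]:
        raise PermissionError(
            f"blocked ({check['risk_level']}): {', '.join(check['warnings'])}"
        )
    return call_tool(action, params)

# Fake client that refuses a high-risk delete.
def fake(name, args):
    if name == "validate_action":
        return {"safe_to_proceed": False, "risk_level": "high",
                "warnings": ["destructive delete", "security grade F"]}
    return {"ok": True}

try:
    guarded_invoke(fake, "file-manager", "delete_all", {"path": "/tmp/x"})
except PermissionError as e:
    assert "high" in str(e)
else:
    raise AssertionError("high-risk action should be blocked")
```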
- Behavior: 4/5
No annotations provided, so description carries full burden. It discloses the return format (JSON array of 5 skills with specific fields: slug, name, description, quality_score, install_command). However, it doesn't explicitly state whether this is a safe/read-only operation versus the sibling install_* tools, though implied by 'suggest'.
- Conciseness: 5/5
Perfectly structured and front-loaded: purpose statement, return value specification, positive usage condition with examples, and negative usage condition with alternatives. Four sentences with zero redundancy—every clause provides actionable information for tool selection.
- Completeness: 5/5
Comprehensive for a recommendation tool with no output schema. Compensates for missing structured output by detailing the return format (5 ranked skills with specific fields). Sibling differentiation is explicit, and parameter documentation is complete via schema. No gaps given the tool's complexity.
- Parameters: 3/5
Schema description coverage is 100% with detailed descriptions and examples for all three parameters (task, current_tools, agent_type). The description text focuses on purpose and usage rather than repeating parameter semantics, which is appropriate when the schema is self-documenting. Baseline 3 for high schema coverage.
- Purpose: 5/5
Description provides specific verb-resource combinations ('analyze a task', 'suggest skills') and clearly defines the scope (filling capability gaps). It distinguishes from siblings by contrasting with 'general browsing' and 'when you already know what skill you need'.
- Usage Guidelines: 5/5
Explicit when-to-use ('when you encounter a task that requires capabilities you do not have') with concrete examples (database access, browser automation). Explicit when-not-to-use naming specific alternatives ('use list_categories instead', 'use search_skills instead'), creating clear decision boundaries between sibling tools.
- Behavior: 4/5
With no annotations provided, the description carries the full burden. It extensively documents the return structure (JSON object with specific fields like safety manifest, install configs, etc.), compensating for the lack of output_schema. However, it does not explicitly declare whether the operation is read-only or safe, though 'Get' implies this.
- Conciseness: 5/5
Four sentences total with clear structure: purpose, return value details, positive usage condition, and negative usage condition. Every sentence provides distinct value. The comprehensive field list for the return object is justified given the absence of an output schema.
- Completeness: 5/5
Given the lack of output schema and annotations, the description is remarkably complete. It details the exact structure of the returned JSON object (including nested safety manifest fields), explains the prerequisite (knowing the exact slug), and maps relationships to sibling tools (install_skill, search_skills).
- Parameters: 3/5
Schema description coverage is 100% with detailed documentation for the 'slug' parameter (format, examples). The description focuses on return values and usage context rather than parameter semantics, so baseline 3 is appropriate when the schema does the heavy lifting.
- Purpose: 5/5
The description uses specific verbs ('Get comprehensive details') and resource ('skill'). It explicitly distinguishes from siblings by naming 'search_skills' and 'recommend_skills' as alternatives for discovery, clearly delineating this tool's scope.
- Usage Guidelines: 5/5
Provides explicit when-to-use ('when you know the exact skill slug and need full metadata before installing') and when-not-to-use ('Do not use this for discovery'), along with named alternatives. This is exemplary guidance for tool selection.
- Behavior: 4/5
With no annotations provided, the description carries the full burden and succeeds: it discloses the return format ('JSON array of proofs' with specific fields) and empty-state behavior ('Returns an empty array if no skills have been reported'). Does not mention auth or rate limits, but covers the essential behavioral traits.
- Conciseness: 5/5
Three efficiently structured sentences with zero waste: sentence 1 defines action and return structure, sentence 2 explains the concept, sentence 3 provides use cases and constraints. Information is front-loaded.
- Completeness: 5/5
For a zero-parameter tool with no output schema and no annotations, the description is complete. It compensates for the missing output schema by detailing the return structure and fields, covers the empty state, and explains the conceptual model ('agent resume').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters (baseline 4). The description appropriately confirms 'No parameters required', providing clarity without unnecessary elaboration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('List') and resource ('execution proofs'), clarifies scope ('created by this agent'), and distinguishes from siblings by explaining the relationship to 'report_skill_usage' and defining what proofs are ('verifiable record of skill usage').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines4/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases ('review your usage history, share proof links, or verify your trust score') and notes the dependency on 'report_skill_usage' for data availability. Lacks explicit 'when not to use' contrasts with siblings like 'verify_proof', but effectively guides the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It compensates well by detailing the exact return structure (JSON array with specific fields including quality_score, security_score, install_command), which substitutes for the missing output schema. It lacks explicit mention of read-only safety or rate limits, preventing a perfect score.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of five sentences that are front-loaded with purpose, followed by output specification, and closed with usage guidelines. Every sentence provides distinct value (scope, return format, positive/negative usage conditions, alternative preference) with zero redundancy.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema (100% coverage) and lack of output schema, the description appropriately focuses on compensating for the missing return value documentation by enumerating response fields. It provides sufficient context for an agent to select this over siblings like 'get_skill' and understand the discovery workflow.
Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with detailed explanations for all four parameters (query, type, agent, limit), including examples and enum value descriptions. The description text adds no additional parameter semantics, meeting the baseline expectation for high-coverage schemas.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Search'), identifies the exact resource ('Loaditout registry of 20,000+ AI agent skills'), and specifies the method ('by keyword'). It clearly distinguishes from sibling tool 'get_skill' by stating when to use each.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use ('broad discovery when you do not know the exact skill slug'), when not to use ('Do not use this if you already know the slug'), and names specific alternatives ('use get_skill instead', 'Prefer smart_search'). This covers when/when-not/alternatives completely.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Successfully explains return value ('confirmation of the update'), side effects (profile appears at specific public URL), and mutation semantics (fields are optional allowing partial updates). Could mention reversibility or auth requirements, but covers essential behaviors well.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, each earning its place: (1) core purpose, (2) return value, (3) public visibility context, (4) usage timing, (5) parameter semantics. Well-structured with purpose front-loaded and logical flow from what it does to when to use it.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a simple 2-parameter tool. Compensates for missing output schema by describing the confirmation return value. Covers public visibility implications, optional parameter behavior, and setup context. No gaps requiring additional documentation.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with examples and constraints. The description adds crucial behavioral context: both fields are optional, enabling partial updates ('you can update just the name or just the bio'). This isn't obvious from the schema alone, since neither property is marked 'required', and the description clarifies the intended usage pattern.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Set or update' verbs, clear resource ('agent's public profile'), and explicit fields ('display name and bio'). Distinct from sibling tools which focus on skills, permissions, and memory rather than profile management.
Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'once during initial setup or when the user wants to customize their agent's public identity.' Provides clear context for invocation timing. Lacks explicit 'when not to use' or named alternatives, though siblings are semantically distinct enough that this isn't critical.
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return structure (JSON object with status, skill slug, agent, timestamp) and possible status values (valid/invalid). Minor gap: does not explicitly state if operation is safe/read-only or mention error conditions.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: (1) purpose and return value, (2-3) usage scenarios, (4) exclusion constraint. Information is front-loaded and every sentence earns its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter verification tool, description is complete. Compensates for missing output schema by detailing return fields. Establishes clear relationship with sibling 'list_my_proofs'. No critical gaps given the tool's simplicity.
Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter description including format ('lp_' + hex) and source. Description mentions 'by its ID' but does not add semantic meaning beyond what the schema already provides, which is appropriate for baseline 3.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb (verify) + resource (execution proof) + scope (by ID). Explicitly distinguishes from sibling tool 'list_my_proofs' via the exclusion clause, and implicitly contrasts with 'report_skill_usage' (creation vs verification).
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenarios (confirm another agent's claimed usage, validate own proofs before sharing) and explicit when-not-to-use guidance ('Do not use this for listing proofs') with named alternative (list_my_proofs).
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return format ('formatted text response listing each skill...with its name, type, slug, and platform-specific install config') and explains pack semantics ('pre-built collections for specific workflows'). Lacks disclosure of permission requirements or idempotency given mutation nature, but covers output behavior well.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences total, zero waste. Front-loaded with action, followed by return value, conceptual context, and usage guidelines. Every sentence earns its place with no redundancy.
Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter mutation tool with no output schema, description adequately covers return values and usage patterns. Minor gap: given siblings include check_permission/request_permission, description could note permission prerequisites, but overall sufficiently complete for correct invocation.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions, but description adds valuable conceptual context for 'slug' parameter by explaining packs are 'pre-built collections for specific workflows' with concrete examples ('research-agent' has browser, search, and memory tools), enhancing parameter understanding beyond schema.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Install') + resource ('skills from a curated Agent Pack') + scope ('all skills...in a single call'). Explicitly distinguishes from sibling 'install_skill' by contrasting bulk pack installation vs individual skill installation.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'Use this instead of installing skills individually when setting up for a specific role' (when to use) and 'Do not use this if you only need one specific skill (use install_skill instead)' (when not to use + alternative named).
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and discloses return format (JSON object with enumerated status values), polling pattern, and human-in-the-loop dependency. Could improve by mentioning error states or polling intervals.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences perfectly structured: 1) Action + return value, 2) Usage context, 3) Prerequisites. Zero waste, front-loaded with critical information.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter polling tool without output schema, description fully compensates by documenting return structure (status enum) and complete workflow context, leaving no critical gaps.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3), but description adds valuable semantic context about the parameter's origin ('from a prior request_permission call') and validity requirements ('valid request_id').
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with resource 'status of a previously submitted permission request', clearly distinguishing it from sibling 'request_permission' by defining its role in the polling workflow.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance ('Use this after calling request_permission') and prerequisites ('Do not call this without a valid request_id'), clearly defining when to use versus the sibling tool.
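The request_permission → check_permission workflow evaluated above is a standard polling loop. A minimal sketch of how an agent might drive it, assuming a generic `call_tool` helper for invoking MCP tools and assuming 'pending'/'approved'/'denied' status values (the actual status enum is not shown on this page):

```python
# Hypothetical polling loop for the permission workflow. Tool names come
# from this server; the "request_id" field and status strings are assumptions.
import time

def wait_for_permission(call_tool, action, timeout=60.0, interval=2.0):
    """Submit a permission request, then poll check_permission until it resolves."""
    request = call_tool("request_permission", {"action": action})
    request_id = request["request_id"]            # assumed field name
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("check_permission", {"request_id": request_id})
        if result["status"] != "pending":         # e.g. approved or denied
            return result["status"]
        time.sleep(interval)                      # human review takes time
    return "timeout"
```

Because approval is human-in-the-loop, the interval should be generous; hammering check_permission gains nothing while the request sits with the user.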
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return behavior ('Returns a confirmation'), explains async downstream effects ('reviewed by moderators and may be removed or downgraded'). Minor gap on idempotency or rate limits prevents a 5.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, zero waste: purpose, return value, positive usage, downstream effects, negative usage/alternative. Logical flow with high information density. Front-loaded with core action.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully complete for a 3-parameter reporting tool. Compensates for missing output schema by describing return value ('confirmation that the flag was recorded'). Explains moderation workflow. Sibling differentiation handled via explicit contrast with review_skill.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by mapping the abstract 'reason' enum to concrete scenarios ('prompt injection, behaves maliciously...') in the usage guidelines, helping the agent select appropriate enum values.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource ('Report a problematic skill to the Loaditout moderation team'). Clearly distinguishes from sibling 'review_skill' by contrasting flagging vs. reviewing in the final sentence.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists positive criteria ('Use this when you encounter... prompt injection, behaves maliciously...') and negative criteria ('Do not use this for skills that simply do not meet your needs'). Names the alternative tool implicitly ('leave a review instead' referencing review_skill).
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return structure ('JSON object with results array...and a not_found array') and cardinality limits ('Maximum 20'). Does not mention permission requirements or destructive nature of install operations, though 'not_found' array indicates partial failure handling.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, all earning place: purpose, return format, usage condition, exclusion/alternative, constraint. Front-loaded with action, no redundancy, efficient density of information.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a batch mutation tool without output schema: describes return structure, error handling (invalid slugs), distinguishes from sibling alternatives, and documents cardinality limits. No gaps given the complexity level.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds minimum cardinality constraint ('2-20' vs schema's 'Maximum 20' only) and intent context ('you know all their slugs'), clarifying that slugs must be predetermined rather than discovered.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Install') with resource ('skills') and scope ('multiple specific'). Explicitly distinguishes from sibling install_pack by contrasting 'specific skills' vs 'curated collections'.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use ('install 2-20 specific skills at once and you know all their slugs') and when-not-to-use with named alternative ('Do not use this for curated collections (use install_pack instead)'). Clear cardinality constraints.
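The partial-failure behavior noted above (a 'results' array plus a 'not_found' array) suggests a simple handling pattern on the caller's side. A hypothetical sketch, where the cardinality check mirrors the documented 2-20 limit and the per-entry 'slug' field is an assumption, not a documented schema:

```python
# Hypothetical consumer of an install_batch response. Only the "results" and
# "not_found" keys are described on this page; "slug" per entry is assumed.
def validate_slugs(slugs):
    """Enforce the documented 2-20 slug cardinality before calling install_batch."""
    if not 2 <= len(slugs) <= 20:
        raise ValueError("install_batch expects between 2 and 20 skill slugs")

def summarize_batch(response):
    """Split a batch response into installed slugs and unresolved ones."""
    installed = [entry["slug"] for entry in response.get("results", [])]
    missing = list(response.get("not_found", []))
    return installed, missing
```

Checking `not_found` after every call matters because the batch succeeds partially: invalid slugs are reported rather than failing the whole request.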
- Behavior 5/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full safety burden excellently. It explicitly states 'This tool is read-only and does NOT write any files,' discloses the return object structure (config_path, config, instructions), and notes the side effect that 'The skill slug is automatically saved to agent memory.'
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences, each earning its place: purpose declaration, return value specification, safety assertion, usage guidance, alternative exclusion, and side effect disclosure. Information is front-loaded and no text is wasted.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema and annotations, the description is complete. It explains the return value structure, covers behavioral traits, differentiates from siblings, and documents side effects. Nothing material is missing for an agent to invoke this correctly.
Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for both 'slug' (including format examples) and 'agent' (including file paths). The description references these parameters contextually but does not add semantic meaning beyond what the schema already provides, warranting the baseline score.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource ('platform-specific configuration JSON') and scope ('single skill'). It explicitly distinguishes this from siblings by contrasting 'single skill' against batch installs and packs.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('when the user wants to install a specific skill and you need the exact config') and clear when-not-to-use exclusions with named alternatives ('Do not use this for batch installs (use install_batch instead) or for packs (use install_pack instead)').
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden: it discloses the return format ('JSON object with a memories array'), field structure ('key, value, and type'), and edge case behavior ('Returns an empty array if no memories exist'). Minor gap: doesn't mention scope (user-specific vs global) or durability guarantees.
Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose, return structure, usage context, and filtering behavior. Information is front-loaded and each sentence earns its place.
Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately explains return values (structure and empty-state behavior). For a single-parameter retrieval tool, the description covers all necessary behavioral and semantic ground.
Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage, the description adds semantic context by framing the parameter as filtering 'specific categories of memories,' reinforcing the enum's purpose beyond the schema's literal description.
Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Retrieve') and resource ('memories from persistent storage'), clearly distinguishing this read operation from the sibling 'save_memory' write operation. The scope is precisely defined.
Usage Guidelines 5/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('at the start of every session to restore context, installed skills, and user preferences'), providing clear temporal and contextual guidance that differentiates it from general memory queries.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
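The formula above can be sketched in Python. The dimension weights, the 60/40 mean-minimum blend, the 70/30 quality-coherence split, and the tier cutoffs are all taken from this description; the function and variable names are illustrative:

```python
# Sketch of the quality score calculation described above.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(dims):
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return sum(DIM_WEIGHTS[name] * score for name, score in dims.items())

def overall_score(tool_dims, coherence_dims):
    """Blend per-tool definition quality with server coherence."""
    scores = [tdqs(d) for d in tool_dims]
    definition_quality = 0.6 * (sum(scores) / len(scores)) + 0.4 * min(scores)
    coherence = sum(coherence_dims) / len(coherence_dims)  # four equal dimensions
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

The `min(scores)` term is why one weak tool drags the whole server down: with a 40% weight on the minimum, a single 2/5 tool caps the definition quality component well below the mean.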
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/loaditoutadmin/loaditout-mcp-server'
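The same endpoint can be called from Python. A minimal sketch, assuming the endpoint returns JSON (the response field names are not documented on this page):

```python
# Hypothetical client for the MCP directory API endpoint shown above.
import json
from urllib.request import urlopen

SERVER_URL = "https://glama.ai/api/mcp/v1/servers/loaditoutadmin/loaditout-mcp-server"

def fetch_server_profile(url=SERVER_URL):
    """Fetch and decode the server profile; assumes a JSON response body."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)
```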
If you have feedback or need assistance with the MCP directory API, please join our Discord server.