MCP Midtrans Documentation Server
Server Quality Checklist
- Disambiguation: 4/5
Tools are mostly distinct in purpose, though slight overlap exists between midtrans_get_code_example (which includes webhook handlers among broader integration code) and midtrans_get_notification_handler (focused specifically on webhooks). The charge example and payment method guide tools also share related content but serve different use cases (JSON generation vs. comprehensive guides).
- Naming Consistency: 4/5
Eight tools follow the consistent pattern midtrans_get_<resource>, while midtrans_search_documentation uses a different verb (search instead of get). Otherwise, all use lowercase snake_case consistently and include the midtrans prefix, making the deviation minor and the overall pattern predictable.
- Tool Count: 5/5
Nine tools is well-suited for a documentation server, covering code examples, JSON schemas, payment method guides, status references, search capability, documentation navigation, and specialized handlers without being excessive or sparse.
- Completeness: 4/5
The surface covers documentation retrieval comprehensively with topics, schemas, examples, guides, and search. Minor gaps might include specialized resources like SDK-specific documentation or changelogs, though these may be accessible via the general documentation topic parameter.
Average 3.9/5 across 9 of 9 tools scored. Lowest: 3.2/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file. MCP servers without a LICENSE cannot be installed.
Latest release: v2.0.0
No tool usage detected in the last 30 days.
- This server provides 9 tools.
No known security issues or vulnerabilities reported.
Tool Scores
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and idempotentHint=true. The description adds value by specifying the return format ('field table, example JSON, and usage notes') and listing all available object types, which helps the agent understand the scope of valid inputs. However, it does not mention caching behavior or whether this retrieves static reference data versus dynamic schemas.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose but spends significant space listing 17 object types that are already defined in the schema enum. The Args/Returns structure is clear but somewhat formal. The object list, while helpful for LLM context window visibility, creates redundancy that could be trimmed to improve conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (covering safety/idempotency) and the context signal indicating an output schema exists, the description adequately covers what the tool returns (detailed schema with field tables and examples) and what inputs it accepts. For a documentation reference tool of moderate complexity, this is sufficient without over-specifying.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage at the root properties level (per context signal), though the $ref target (object_type) is well-documented in the schema itself. The description compensates by documenting the 'params' wrapper as 'Contains the object type to get schema for' and enumerating all 17 valid object_type values, though it adds no semantic detail beyond the names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
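For illustration, the nested 'params' wrapper described above would look something like the following hypothetical call arguments. The wrapper shape and the object_type field follow the review's description of the schema; the value "transaction_details" is assumed purely for illustration and may not match the actual enum.

```python
# Hypothetical invocation payload for the schema-lookup tool, showing the
# 'params' wrapper the review describes. The object_type value is assumed.
payload = {
    "params": {
        "object_type": "transaction_details",
    }
}

# An agent (or test harness) would pass this dict as the tool's arguments.
print(payload["params"]["object_type"])
```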
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] the detailed JSON object schema for Midtrans API request/response objects' with specific details about what it includes (field definitions, types, examples). It distinguishes itself from siblings like midtrans_get_code_example or midtrans_get_charge_example by focusing specifically on 'schema' and 'field definitions' rather than examples or general documentation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description lists the 17 available object types that can be queried, it provides no explicit guidance on when to use this tool versus siblings like midtrans_get_charge_example (full request examples) or midtrans_search_documentation (general doc search). It should explicitly state this is for understanding object structure when constructing API requests.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true and idempotentHint=true, the description adds valuable behavioral context: it returns a string containing not just JSON but also a curl command and response format. It clarifies that optional sections can be included/excluded via flags, adding nuance beyond the annotations.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The docstring format (Args/Returns) is appropriate for a utility function. The first sentence front-loads the purpose effectively. The Returns section is slightly verbose ('Complete charge request JSON example with curl command and response format') but generally efficient with no redundant sentences.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of comprehensive annotations (readOnly, idempotent, destructive flags) and the output schema being a simple string, the description appropriately explains the return value contents (JSON + curl). It adequately covers the single-parameter input structure, though mentioning the available payment method enum values could improve completeness.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The root 'params' parameter has no schema description (0% coverage per context signals), and the description compensates minimally by noting it 'Contains payment method, and flags for optional sections.' The schema adequately describes the nested properties (include_customer_details, include_item_details), so the description meets baseline expectations without adding significant semantic value beyond the schema.
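To make the flag interaction concrete, a hypothetical arguments payload for midtrans_get_charge_example might look like this. The flag names come from the review's account of the schema; "gopay" is an assumed payment method value, not confirmed against the actual enum.

```python
# Hypothetical arguments for midtrans_get_charge_example, showing the
# 'params' wrapper with its payment method and optional-section flags.
# The payment_method value "gopay" is assumed for illustration.
payload = {
    "params": {
        "payment_method": "gopay",
        "include_customer_details": True,   # include the optional customer block
        "include_item_details": False,      # omit the optional item block
    }
}

print(sorted(payload["params"]))
```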
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Generate[s] a complete charge request JSON example' with specific mention of ready-to-use bodies and optional sections. It effectively identifies the resource (charge request) and action (generate example), though it could more explicitly differentiate from sibling tool midtrans_get_code_example.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus alternatives like midtrans_get_code_example or midtrans_get_payment_method_guide. The description implies use cases through content (needing charge request examples) but lacks explicit when-to-use or when-not-to-use directives.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm readOnly/idempotent/safe operation, so the bar is lower. The description adds valuable context about what gets returned: 'explanations, code examples, request/response formats, and best practices'—detail not present in annotations. No contradictions with safety hints.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear Purpose → Coverage → Args → Returns sections. Front-loaded with the core action. Slightly redundant in listing all enum values which are also in the schema, but this aids readability. Zero fluff sentences.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a read-only documentation tool. Since output schema exists (Returns section), the description doesn't need to exhaustively explain return values, but commendably describes the content types (examples, formats) anyway. Given single-parameter simplicity and strong annotations, coverage is sufficient.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage per context signals, the description partially compensates by listing all 16 valid topic values in the prose. However, the 'Args' section merely states params 'contains the topic' without explaining semantics, formats, or validation rules, leaving gaps given the schema's lack of descriptions.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'detailed documentation for a specific Midtrans Core API topic' with a specific verb+resource. It distinguishes scope implicitly by listing 16 covered topics (overview, authorization, charge, etc.), though it doesn't explicitly differentiate from sibling `midtrans_search_documentation` or `midtrans_get_documentation_map`.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. While it lists topics covered, it doesn't clarify whether to use this vs `search_documentation` for keyword searches, or vs `get_code_example` for implementation snippets. Users must infer usage from the topic list alone.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds valuable behavioral context beyond these annotations by specifying exactly what constitutes the 'guide' (request/response examples, sandbox testing info, flow descriptions), helping the agent understand the richness of the return value without contradicting the safe, read-only nature of the operation.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a structured docstring format (Args/Returns) that efficiently packs information. The enumeration of 18 supported payment methods, while lengthy, is necessary for correct usage. The first sentence front-loads the core purpose. Only minor redundancy exists between the Args description and the inline list of methods.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (per context signals), the Returns section appropriately summarizes the output format without needing exhaustive detail. The description adequately covers the scope of integration information provided for a documentation retrieval tool, though explicit sibling differentiation would strengthen completeness.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Context signals indicate 0% schema description coverage at the root level. The description compensates by using an Args section to explain that 'params' contains the payment method to query, but offers minimal detail on the nested structure or validation requirements. Given the single parameter and the enum values listed elsewhere in the description, this is minimally sufficient.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] a complete integration guide for a specific Midtrans payment method' and enumerates the specific contents included (charge format, response format, flow, etc.). It distinguishes this as a comprehensive guide resource versus siblings like 'get_charge_example' by detailing the broad scope of information returned.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description helpfully lists all 18 supported payment methods explicitly, which prevents invalid invocations. However, it lacks explicit guidance on when to use this versus siblings like 'midtrans_get_charge_example' or 'midtrans_get_documentation'—it doesn't clarify that this retrieves method-specific integration details while others might retrieve generic examples or full API docs.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description adds value by disclosing content richness ('troubleshooting tips', 'common scenarios', 'handling guidance') which describes what kind of practical information is returned beyond raw status codes. However, it omits details about caching, rate limits, or why Midtrans-specific codes differ from standard HTTP specs.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured docstring format (Args/Returns) with zero wasted words. Every sentence earns its place: first line establishes purpose, second describes content richness, Args clarifies the single parameter, and Returns describes output format. Appropriate length for a single-parameter lookup tool.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter and read-only annotations, the description is complete. It compensates for the lack of visible output schema by describing the return value (string table with handling guidance). Could improve by noting this is for debugging/integration help versus runtime transaction monitoring.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema description coverage at 0% per context signals, the description compensates adequately by explicitly listing the valid status code ranges (2xx, 3xx, 4xx, or 5xx) in the Args section. It clarifies the parameter structure (params contains the range), though it could enhance further by explaining the semantic meaning of each range (success, redirect, client/server errors).
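A minimal sketch of the range-to-meaning mapping the review suggests spelling out. The range labels come from the tool description; the meanings are standard HTTP semantics rather than Midtrans-specific documentation.

```python
# Range labels per the tool description; meanings are standard HTTP semantics.
STATUS_RANGE_MEANINGS = {
    "2xx": "success",
    "3xx": "redirect",
    "4xx": "client error",
    "5xx": "server error",
}

def describe_range(status_code: int) -> str:
    """Map a concrete status code to its range's description."""
    return STATUS_RANGE_MEANINGS[f"{status_code // 100}xx"]

print(describe_range(404))  # client error
```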
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'Midtrans HTTP status code reference' for a specific range, with specific verb (Get) and resource. It distinguishes from sibling documentation tools by focusing specifically on HTTP status codes rather than general documentation or code examples, though it could further clarify 'reference' means lookup documentation versus checking a transaction status.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through its scope (HTTP status codes, troubleshooting tips), but provides no explicit guidance on when to use this versus siblings like `midtrans_get_documentation` or `midtrans_get_code_example`. No 'when-not-to-use' or alternative suggestions are provided.
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations declaring readOnlyHint=true and idempotentHint=true, safety traits are covered. The description adds value by specifying the return format (str with categorized links) and completeness ('complete map'), though it omits details like potential rate limits or freshness of documentation.
Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading: purpose first, usage guidance second, then return details. The Returns block is slightly verbose but appropriately descriptive. No wasted sentences.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple documentation retrieval tool with minimal inputs and annotations covering safety hints, the description is sufficiently complete. It explains what the map contains (topics, APIs, payment methods) fulfilling informational needs without requiring output schema duplication.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (GetDocumentationMapInput has no properties), and while the description doesn't explicitly describe the empty 'params' wrapper object, the lack of input requirements is somewhat self-evident from the schema structure. It neither compensates for nor contradicts the sparse schema.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('Midtrans Core API documentation map'), clearly defining the scope as an index of all topics, APIs, and guides. It effectively distinguishes from siblings like midtrans_get_documentation or midtrans_search_documentation by positioning itself as the entry point/navigation aid.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context with 'Use this as your starting point to understand what documentation is available,' establishing when to invoke it (for discovery/overview). While it doesn't explicitly name sibling alternatives, it clearly defines the tool's position in the workflow hierarchy.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by detailing exactly what corpora are searched (topics, guides, schemas, examples), which helps the agent understand result breadth. It also describes the return value as 'Matching documentation sections with relevant content', complementing the existing output schema.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Follows a clean docstring structure with summary, scope, usage guidelines, Args, and Returns. Every sentence earns its place: the first sentence defines purpose, the second expands scope, the third gives usage guidance, and the Args/Returns sections are terse. No redundancy or fluff.
Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter search tool with complete annotations (safety hints) and an existing output schema, the description covers all necessary ground: it explains the search scope, return format, and selection criteria. The Returns section acknowledges the string output type without needing to replicate full schema details.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The Args section states params 'Contains the search query string', which provides minimal semantic meaning beyond the schema. While the schema (via $ref) actually contains rich descriptions and examples for the query parameter, the context signals indicate 0% schema description coverage, suggesting the description should compensate more for parameter structure and constraints (e.g., minLength/maxLength). The description meets baseline but does not richly elaborate.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific action 'Search' and resource 'Midtrans Core API documentation', then clarifies the scope spanning 'documentation topics, payment method guides, JSON schemas, code examples, and reference materials'. It distinguishes from siblings like midtrans_get_documentation by specifying 'when unsure which topic contains the answer', making it clear this is a broad search vs. specific retrieval.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context for when to use the tool: 'when you need to find specific information across multiple documentation sections or when unsure which topic contains the answer'. This effectively guides selection against more specific retrieval tools. However, it does not explicitly name sibling alternatives (e.g., midtrans_get_documentation) that should be used when the topic is known.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotentHint=true and readOnlyHint=true; the description adds valuable context that the returned code includes idempotent processing logic, signature verification, and transaction status handling—disclosing what the generated code actually does beyond the tool's execution safety.
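For context, the signature verification the returned handlers implement can be sketched as follows. Midtrans documents signature_key as SHA-512 of order_id + status_code + gross_amount + server key; the function below is an illustrative sketch under that assumption, not the tool's generated code.

```python
import hashlib

def verify_signature(notification: dict, server_key: str) -> bool:
    """Sketch of the signature check a Midtrans webhook handler performs.

    Assumes Midtrans's documented composition:
    signature_key = SHA-512(order_id + status_code + gross_amount + server_key).
    """
    raw = (
        notification["order_id"]
        + notification["status_code"]
        + notification["gross_amount"]
        + server_key
    )
    expected = hashlib.sha512(raw.encode("utf-8")).hexdigest()
    return expected == notification.get("signature_key")
```

A handler would reject the notification (e.g., with HTTP 403) when this returns False, before acting on transaction_status.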
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Appropriately sized with clear information hierarchy: purpose statement, feature list, language enumeration, Args/Returns sections. Every sentence conveys distinct information (capabilities, constraints, return format) with no redundancy.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly covers the tool's purpose and return value (complete webhook handler code with specific features). Given annotations cover safety profile and the description explains the code contents, it provides sufficient context for invocation, though it could briefly mention error handling behavior of the tool itself.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite nested schema structure, the description explicitly lists available enum values (python, javascript, go, rust) and clarifies that params contains the language, adding practical guidance beyond the schema references.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with resource 'notification/webhook handler implementation' and clearly distinguishes from siblings like get_code_example and get_charge_example by specifying this is for webhook endpoints with signature verification and transaction status handling.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual clues through domain-specific terminology (webhook handler, signature verification) that implicitly signal when to use this (implementing payment notification endpoints), though it lacks explicit 'when not to use' or named alternatives.
- Behavior: 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only/idempotent safety. The description adds valuable behavioral context beyond annotations: it specifies that returned code includes imports, configuration, API calls, and webhook handlers, and details framework-specific variants (Flask/FastAPI, Express with TypeScript, actix-web/axum).
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, includes, languages, args, returns). Front-loaded with the specific action. Every line provides distinct value: the 'Includes' list defines scope, 'Available languages' provides implementation details, and 'Returns' clarifies output format.
Completeness: 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully complete for a single-parameter retrieval tool. The description adequately covers the input (with framework specifics) and the output (complete code examples structure) without needing to repeat what a structured output schema would provide.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema coverage noted in context signals, the description compensates by specifying the nested 'language' parameter and—crucially—enumerating available languages with their specific framework implementations (e.g., 'python (requests/Flask/FastAPI)') which is semantic detail absent from the raw enum.
Purpose (5/5): Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states the exact action ('Get'), resource ('implementation code examples'), domain ('Midtrans Core API integration'), and constraint ('specific programming language'). The comprehensive list of included examples (charge, refunds, webhooks, subscriptions) clearly distinguishes this from siblings like midtrans_get_charge_example.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on scope by enumerating exactly which implementations are included (charge, cancellation, refunds, webhooks with signature verification, etc.) and which languages/frameworks are supported. Lacks explicit 'when to use vs midtrans_get_charge_example' guidance, but the scope description enables inference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
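The aggregation described above can be sketched in a few lines of Python. The dimension weights, the 60/40 mean-minimum blend, the 70/30 component split, and the tier thresholds are taken from the text; the function and variable names are illustrative, not part of any published API.

```python
# Sketch of the quality-score aggregation described above.
# Weights and thresholds come from the documentation text.

def tool_tdqs(purpose, usage, behavior, params, concise, complete):
    """Per-tool definition quality score; each dimension is rated 1-5."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * params + 0.10 * concise + 0.10 * complete)

def overall_score(tool_scores, coherence_scores):
    """tool_scores: TDQS per tool. coherence_scores: the four coherence
    dimensions (disambiguation, naming, tool count, completeness)."""
    # A single weak tool drags the server down via the 40% minimum term.
    tdq = 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * tdq + 0.3 * coherence

def tier(score):
    """Map an overall score to a letter tier; B and above is passing."""
    for threshold, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return grade
    return "F"
```

For example, a server whose tools all score 5 across every dimension, with perfect coherence, yields an overall 5.0 and tier A, while one tool rated 1 everywhere would cut the definition-quality component sharply through the minimum term.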
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/rissets/mcp-midtrans'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.