mcp-docs-server
Server Details
AI access to Mapbox docs, API references, style specs, and guides. No token required.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: mapbox/mcp-docs-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools (3)

batch_get_documents_tool (Batch Get Mapbox Documents Tool) · Grade: A · Read-only · Idempotent
Fetch the full content of multiple Mapbox documentation pages in a single call (max 20). More efficient than calling get_document_tool multiple times. Returns an array of results — failed pages include an error message rather than failing the whole batch.
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | Array of Mapbox documentation page URLs to fetch (max 20). All must be mapbox.com URLs. | |
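Under the MCP convention, a tool invocation is a JSON-RPC `tools/call` request whose `params` carry the tool name and arguments. The helper below is an illustrative sketch (the function name and client-side checks are assumptions, mirroring the documented max-20 and mapbox.com constraints), not part of the server itself:

```python
import json

MAX_BATCH = 20  # documented per-call limit for batch_get_documents_tool

def build_batch_request(urls, request_id=1):
    """Build an MCP tools/call JSON-RPC request for batch_get_documents_tool."""
    if not urls or len(urls) > MAX_BATCH:
        raise ValueError(f"urls must contain 1-{MAX_BATCH} entries")
    if not all("mapbox.com" in u for u in urls):
        raise ValueError("all URLs must be mapbox.com URLs")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "batch_get_documents_tool",
            "arguments": {"urls": urls},
        },
    }

# Illustrative URLs; each element of the result array reports success
# or a per-page error, per the tool description.
req = build_batch_request([
    "https://docs.mapbox.com/api/search/geocoding/",
    "https://docs.mapbox.com/mapbox-gl-js/guides/",
])
print(json.dumps(req, indent=2))
```

Validating the batch size before sending avoids a round trip when the 20-URL limit would be exceeded.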
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnly/idempotent/safety properties, the description adds crucial behavioral context about partial failure handling ('failed pages include an error message rather than failing the whole batch') and return format ('Returns an array of results'), which are not disclosed in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences deliver purpose, sibling comparison, and error behavior without redundancy. Front-loaded with the core action ('Fetch...'), efficiently structured so every clause provides distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given rich annotations covering safety/idempotency and complete schema coverage, the description adequately compensates for missing output schema by describing the return structure and partial failure behavior. No additional context needed for this straightforward batch retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Array of Mapbox documentation page URLs...'), the schema fully documents the urls parameter. The description references 'max 20' but does not add semantic meaning, examples, or format details beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Fetch') with clear resource scope ('full content of multiple Mapbox documentation pages') and quantity constraints ('max 20'). It explicitly distinguishes from sibling tool get_document_tool by stating it handles 'multiple' pages vs single calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states efficiency advantage ('More efficient than calling get_document_tool multiple times'), providing clear guidance on when to prefer this tool over its sibling. The max 20 constraint also signals when to avoid this tool (for >20 docs).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_document_tool (Get Mapbox Document Tool) · Grade: A · Read-only · Idempotent
Fetch the full content of a specific Mapbox documentation page by URL. Use this after get_latest_mapbox_docs_tool to follow a link from the index and retrieve the complete page content. For fetching multiple pages at once, use batch_get_documents_tool instead.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL of a Mapbox documentation page to fetch. Must be a mapbox.com URL (e.g. https://docs.mapbox.com/api/search/geocoding/). | |
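The single-page variant takes the same `tools/call` shape with one `url` argument. A minimal sketch, assuming a hypothetical client-side hostname check that mirrors the schema's mapbox.com constraint:

```python
import json
from urllib.parse import urlparse

def build_get_document_request(url, request_id=1):
    """Build an MCP tools/call JSON-RPC request for get_document_tool."""
    host = urlparse(url).hostname or ""
    if not (host == "mapbox.com" or host.endswith(".mapbox.com")):
        raise ValueError("url must be a mapbox.com URL")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_document_tool",
            "arguments": {"url": url},
        },
    }

# Example URL taken from the parameter table above.
req = build_get_document_request("https://docs.mapbox.com/api/search/geocoding/")
print(json.dumps(req, indent=2))
```

Parsing the hostname rather than substring-matching the whole URL prevents look-alike domains from slipping through.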
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds workflow sequencing context (use after index retrieval) and scope ('full content') beyond annotations. Annotations already cover safety (readOnly, idempotent, non-destructive), so description appropriately focuses on operational context rather than repeating safety traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: main action front-loaded, followed by workflow context, then sibling alternative. Every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a simple single-parameter fetch tool. Annotations cover behavioral safety; schema covers inputs. Minor gap regarding return value format (HTML vs markdown), but sufficient given the tool's straightforward purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully documents the URL parameter including format constraints. Description references 'by URL' but does not add semantic detail beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (Fetch) + resource (Mapbox documentation page) + scope (full content by URL). Clearly distinguishes from sibling by contrasting single-page vs. batch operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('after get_latest_mapbox_docs_tool to follow a link') and provides the exact alternative tool for different use cases ('For fetching multiple pages at once, use batch_get_documents_tool instead').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_mapbox_docs_tool (Search Mapbox Docs Tool) · Grade: A · Read-only · Idempotent
Search Mapbox documentation by keyword or natural language query. Returns ranked results with titles, URLs, and content excerpts. Use get_document_tool to fetch the full content of a result page.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (1–20). | 5 |
| query | Yes | Search query for Mapbox documentation (e.g. "add a marker", "camera animation", "geocoding API"). | |
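A search call follows the same request shape, with `query` required and `limit` optional. The sketch below is illustrative (the helper name and the client-side clamp to the schema's 1–20 range are assumptions):

```python
def build_search_request(query, limit=5, request_id=1):
    """Build an MCP tools/call JSON-RPC request for search_mapbox_docs_tool."""
    if not query.strip():
        raise ValueError("query must be non-empty")
    # Schema allows 1-20 results; 5 is the documented default.
    limit = max(1, min(20, limit))
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "search_mapbox_docs_tool",
            "arguments": {"query": query, "limit": limit},
        },
    }

# Example query taken from the parameter table above.
req = build_search_request("add a marker", limit=3)
```

Each ranked result carries a URL, so a follow-up get_document_tool call can retrieve the full page.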
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the safety profile (readOnly, non-destructive, idempotent), but the description adds valuable behavioral context: 'Returns ranked results with titles, URLs, and content excerpts' explains the output structure in lieu of an output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences with zero redundancy: purpose (sentence 1), return format (sentence 2), sibling reference (sentence 3). Information density is high and immediately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a 2-parameter search tool with simple schema. Description compensates for missing output schema by detailing return structure (titles, URLs, excerpts). Sibling relationship is clear. No significant gaps given the tool's limited complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters well-documented. The description adds semantic value by characterizing the query as accepting 'natural language' (not just keywords), but largely relies on the schema for parameter specifics. The baseline score is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies specific verb ('Search'), resource ('Mapbox documentation'), and query types ('keyword or natural language'). Explicitly distinguishes from sibling get_document_tool by stating this returns excerpts while directing users to the sibling for full content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Establishes clear workflow by referencing sibling tool: 'Use get_document_tool to fetch the full content of a result page.' This implies when to use the alternative (for full text vs. search results). However, lacks explicit when-not guidance for batch_get_documents_tool and doesn't state prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!