docs-mcp
Server Details
Remote MCP server for Tandem docs, install guides, SDKs, workflows, and agent setup help.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: frumu-ai/tandem
- GitHub Stars: 87
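
Since the transport is Streamable HTTP, any MCP client can drive this server by POSTing JSON-RPC 2.0 messages to its URL (not shown above). A minimal sketch of the opening initialize request follows; the client name, version, and protocol version string are placeholders. Individual tool invocations then go through tools/call, as sketched per tool below.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```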
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 13 of 13 tools scored.
Most tools have distinct purposes, but there is some overlap between refresh and compare functions. For example, refresh_doc_page and compare_doc_page_refresh both handle page refreshes, which could cause confusion. However, their descriptions clarify differences in output (e.g., hashes vs. source reports), preventing major misselection.
Tool names follow a consistent snake_case pattern with clear verb_noun structures, such as get_doc, search_docs, and invalidate_docs_cache. This predictability makes it easy for agents to understand and use the toolset without confusion over naming conventions.
With 13 tools, the count is well-scoped for a documentation server, covering key operations like fetching, searching, caching, and guiding users. Each tool serves a specific function without redundancy, making the set comprehensive yet manageable for its purpose.
The toolset provides complete coverage for documentation management, including CRUD-like operations (e.g., get, search, refresh), caching control, and user guidance. There are no obvious gaps; agents can navigate, update, and optimize docs effectively without dead ends.
Available Tools
13 tools
answer_how_to (A): Read-only, Idempotent
Answer how to install or use Tandem, build workflows, or use the SDKs by synthesizing relevant published docs.
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | The Tandem task or question that needs a docs-grounded answer. | |
| limit | No | Maximum number of supporting docs recommendations to include. | |
| audience | No | Optional audience hint such as developer, desktop user, or operator. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
| engine_version | No | Optional Tandem engine version hint. Prefer stable docs for installed engines. | |
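
Assembled from the parameter table above, a hedged example tools/call request; the task text, audience, and limit values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "answer_how_to",
    "arguments": {
      "task": "How do I install the Tandem engine?",
      "audience": "developer",
      "limit": 3
    }
  }
}
```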
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral detail ('synthesizing') beyond annotations, indicating it generates composite answers rather than raw doc retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense sentence with no filler; front-loaded with action verb and immediately scoping the use case.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for tool complexity; mentions synthesis behavior, and while output schema is absent, the 'limit' parameter hints at the recommendation structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description adds no parameter details, but schema has 100% coverage with clear descriptions, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Answer') and resource scope ('how to install or use Tandem...'), with 'synthesizing' distinguishing it from simple retrieval siblings like search_docs or get_doc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for how-to questions but lacks explicit when/when-not guidance or comparison to sibling doc tools (e.g., search_docs vs. synthesized answer).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_doc_page_refresh (A): Idempotent
Force-refresh a docs page and return before/after hashes so page content changes can be proven.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Docs path like /start-here/ or /sdk/python/ to compare before and after refresh. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
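
For example, to prove whether /start-here/ changed after a forced refresh (the path example is taken from the parameter description above):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "compare_doc_page_refresh",
    "arguments": { "path": "/start-here/" }
  }
}
```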
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context about returned hashes not found in annotations, though could mention idempotent nature explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense sentence front-loads action, mechanism, and purpose with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by specifying the hash return values; appropriately scoped for single-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear examples; description provides baseline context without adding redundant parameter detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific action (force-refresh) and distinct output (before/after hashes) clearly differentiate it from sibling refresh_doc_page and compare_docs_index_refresh.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use case ('changes can be proven') but lacks explicit when/when-not guidance versus simple refresh_doc_page alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_docs_index_refresh (A): Idempotent
Force-refresh the docs indexes and return before/after hashes so content changes can be proven.
| Name | Required | Description | Default |
|---|---|---|---|
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
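
Since docs_ref defaults to stable, a minimal call can omit arguments entirely:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": { "name": "compare_docs_index_refresh", "arguments": {} }
}
```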
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral detail not in annotations: returns 'before/after hashes' and explains the 'prove' use case, without contradicting idempotent/destructive hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with action, mechanism, and purpose; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by explicitly documenting return values (before/after hashes), though could briefly clarify what 'docs indexes' refers to.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The lone optional parameter is fully described in the schema, meeting the rubric baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Force-refresh') + resource ('docs indexes') and distinguishes from sibling 'refresh_docs_index' by emphasizing the comparison/hashing functionality ('before/after hashes', 'changes can be proven').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies when to use via 'so content changes can be proven' (use when verification needed), but lacks explicit contrast with 'refresh_docs_index' alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_doc (A): Read-only, Idempotent
Fetch and normalize a published Tandem docs page by path or URL.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Docs path like /sdk/python/ or a full Tandem docs URL. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
| engine_version | No | Optional Tandem engine version hint for compatibility checks. | |
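
A sketch of fetching one page by path; the /sdk/python/ example comes from the schema, and the optional hints are omitted:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_doc",
    "arguments": { "path": "/sdk/python/" }
  }
}
```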
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds key behavioral details absent from annotations: 'normalize' indicates content processing/transformation, and 'published' clarifies scope (excludes drafts).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence, front-loaded with action verbs, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for low complexity (single param, no output schema); mentions normalization hinting at return format without requiring explicit output documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Mirrors the schema's 'path or URL' explanation; with 100% schema coverage, adds no new parameter semantics beyond baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verbs ('Fetch and normalize') and resource ('published Tandem docs page'), with implicit distinction from sibling refresh/search tools via 'Fetch' and 'by path or URL'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs. search_docs, refresh_doc_page, or get_tandem_guide; purely descriptive statement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_docs_cache_status (A): Read-only, Idempotent
Inspect the current docs cache state for index sources and warmed pages.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
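
With no parameters, the call reduces to the tool name and an empty arguments object:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": { "name": "get_docs_cache_status", "arguments": {} }
}
```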
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations by specifying what cache components are inspected (index sources and warmed pages), though could mention return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence, front-loaded with action verb, no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for simple tool but lacks output description (no output schema exists); mentions inspected entities but not what status information is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Baseline score for zero-parameter tool; no parameters exist requiring semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Inspect') and resource ('docs cache state'), with specific detail ('index sources and warmed pages') that distinguishes it from action-oriented siblings like refresh_docs_index or invalidate_docs_cache.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus alternatives (e.g., before invalidating cache, after warming) or workflow integration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_start_path (A): Read-only, Idempotent
Recommend the best Tandem docs starting path for a desktop user, server admin, or developer.
| Name | Required | Description | Default |
|---|---|---|---|
| role | Yes | The user role to optimize the docs starting point for. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
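
A hedged example; the role value follows the enum values cited in the quality notes below (desktop, server_admin, developer):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_start_path",
    "arguments": { "role": "developer" }
  }
}
```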
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent traits; description adds no behavioral details about return format or caching but does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single front-loaded sentence with no redundant words; every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for low complexity (1 param, no output schema); mentions 'path' implying return value, though format specifics (URL vs ID) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Lists the three enum values (desktop, server_admin, developer) reinforcing the schema, but schema already has 100% coverage so this adds minimal semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific action (recommend) and resource (Tandem docs starting path), with implicit distinction from siblings like recommend_next_docs via 'starting' vs 'next'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context (when user needs a starting point) by listing roles, but lacks explicit when-not guidance or comparison to alternatives like search_docs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tandem_guide (B): Read-only, Idempotent
Return a structured Tandem learning guide for install-engine, python-sdk, typescript-sdk, build-workflows, or headless-deploy.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | Named Tandem learning guide to return. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
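
Requesting one of the five named guides, here python-sdk (the topic values are enumerated in the tool description above):

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "get_tandem_guide",
    "arguments": { "topic": "python-sdk" }
  }
}
```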
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Mentions 'structured' return type but adds no behavioral context beyond annotations (read-only, idempotent) regarding caching, auth, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently front-loads the action and valid options; appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple one-parameter tool; compensates for missing output schema by stating it returns a 'structured' guide.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, meets baseline by repeating enum values already defined in schema without adding semantic nuance about the topics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a structured Tandem learning guide and enumerates the five valid topics, distinguishing it from general doc retrieval siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus siblings like get_doc, answer_how_to, or recommend_next_docs; only lists valid parameter values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
invalidate_docs_cache (A): Idempotent
Invalidate docs cache entries for index data, all pages, or a specific page path.
| Name | Required | Description | Default |
|---|---|---|---|
| path | No | Required when target is page; identifies the specific docs path to invalidate. | |
| target | No | Which cache area to invalidate. | |
| docs_ref | No | Optional docs ref to scope page/index cache invalidation. | |
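
A sketch of invalidating a single page. Per the path description, path must accompany a page-scoped target; the exact target enum strings are not shown above, so "page" is an assumption drawn from that description:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "invalidate_docs_cache",
    "arguments": { "target": "page", "path": "/sdk/python/" }
  }
}
```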
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds scope context beyond annotations but omits behavioral implications (e.g., cache becomes cold, subsequent requests slower).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence front-loaded with verb; no redundancy given 100% schema coverage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool without an output schema; relies appropriately on annotations for idempotency/destructive hints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Maps abstract enum values to concrete concepts (index data/all pages/specific path) beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the specific action (invalidate) and targets (index, all pages, specific path), distinguishing from 'refresh' siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus refresh_doc_page/refresh_docs_index or when invalidation is preferred over refreshing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_next_docs (A): Read-only, Idempotent
Recommend the next Tandem docs to read after a current page, optionally biased toward a goal.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | Optional goal such as install Tandem, use the Python SDK, or build workflows. | |
| limit | No | Maximum number of next-doc recommendations to return. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
| current_path | Yes | Current Tandem docs path the user has already read. | |
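
An illustrative call; current_path is the only required argument, and the goal string reuses an example from the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "recommend_next_docs",
    "arguments": {
      "current_path": "/start-here/",
      "goal": "use the Python SDK",
      "limit": 3
    }
  }
}
```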
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds context about goal biasing beyond annotations, but doesn't elaborate on recommendation logic, caching, or statelessness that annotations already cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded, no filler—every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for input understanding but omits return value description despite having no output schema; misses opportunity to explain recommendation format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Reinforces semantics of current_path and goal, but with 100% schema coverage this meets baseline without adding significant new parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (Recommend) and resource (next Tandem docs), clearly distinguishes from search/get siblings by emphasizing 'next' and 'after a current page'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies sequential usage ('after a current page') but provides no explicit when/when-not guidance or comparison to alternatives like search_docs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
refresh_doc_page (A): Idempotent
Force-refresh a docs page and report whether content came from the network, cache, or stale fallback.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Docs path like /start-here/ or /sdk/python/ to refresh. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
| force_refresh | No | Whether to bypass cache and force a network refresh attempt. | |
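
For example, forcing a network refresh of one page (values illustrative, path format per the schema):

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "refresh_doc_page",
    "arguments": { "path": "/start-here/", "force_refresh": true }
  }
}
```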
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral details beyond annotations: specifically discloses the tri-state reporting (network/cache/stale fallback) and implies fallback behavior on failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 16-word sentence with front-loaded action verb; zero redundancy, every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by specifying the return value semantics (source reporting), though could briefly note it affects only the targeted page.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage; description meets baseline by not repeating parameter details but offers no additional semantic context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb phrase 'Force-refresh' plus resource 'docs page' clearly distinguishes from sibling tools like refresh_docs_index and invalidate_docs_cache.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'Force-refresh' (when fresh data needed), but lacks explicit when/when-not guidance comparing to get_doc or invalidate_docs_cache.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
refresh_docs_index (A): Idempotent
Force-refresh docs indexes and report whether data came from the network, cache, or stale fallback.
| Name | Required | Description | Default |
|---|---|---|---|
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
| force_refresh | No | Whether to bypass cache and force a network refresh attempt. | |
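
A minimal forced index refresh against the default stable channel:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "refresh_docs_index",
    "arguments": { "force_refresh": true }
  }
}
```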
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral detail beyond annotations by disclosing the tri-state data source reporting (network/cache/stale fallback).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence front-loaded with action; every clause earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by describing return value (source reporting); adequate for tool complexity though could specify 'search index' explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage; description meets baseline by implying the 'force' nature but doesn't elaborately expand on parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource ('Force-refresh docs indexes') clearly distinguishes from sibling 'refresh_doc_page' (singular page) and cache invalidation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context via reporting capability but lacks explicit when/when-not guidance versus alternatives like 'invalidate_docs_cache'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_docs (B): Read-only, Idempotent
Search the published Tandem docs index for relevant pages.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of matching docs pages to return. | |
| query | Yes | The Tandem question, task, or keyword query to search for. | |
| audience | No | Optional audience filter such as developer, desktop user, or operator. | |
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. Use next, refs/<sha>, or releases/<version> only when explicitly needed. | |
| engine_version | No | Optional Tandem engine version hint. If supplied without docs_ref, the MCP still defaults to stable unless a matching versioned docs ref is configured. | |
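
A hedged search example; the query and audience values are illustrative, and docs_ref is left at its stable default:

```json
{
  "jsonrpc": "2.0",
  "id": 13,
  "method": "tools/call",
  "params": {
    "name": "search_docs",
    "arguments": {
      "query": "install Tandem",
      "audience": "developer",
      "limit": 5
    }
  }
}
```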
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds 'published' (data scope) and implies relevance ranking ('relevant pages'), but misses what gets returned and search coverage beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise, front-loaded with an action verb, though borderline too terse for distinguishing nuances.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool, but fails to compensate for the missing output schema by not describing the return format (titles, URLs, snippets).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear descriptions; tool description adds no parameter guidance but baseline is 3 given complete schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Search') and resource ('published Tandem docs index'), though could better differentiate from get_doc (retrieval by ID) and recommend_next_docs (suggestions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs siblings like get_doc (direct retrieval) or recommend_next_docs (browsing), nor when search is preferable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
warmup_docs_cache (A): Idempotent
Refresh the docs index and preload the most important Tandem docs pages into cache.
| Name | Required | Description | Default |
|---|---|---|---|
| docs_ref | No | Docs channel or immutable ref. Defaults to stable. | |
| force_refresh | No | Whether to bypass cache and force fresh upstream fetches first. | |
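
A sketch of warming the cache with fresh upstream fetches first (force_refresh per the schema; omit it to reuse cached data):

```json
{
  "jsonrpc": "2.0",
  "id": 14,
  "method": "tools/call",
  "params": {
    "name": "warmup_docs_cache",
    "arguments": { "force_refresh": true }
  }
}
```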
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations by specifying that 'most important' pages are preloaded, hinting at prioritization logic not captured in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with active verbs, no fluff—every word conveys essential functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the tool's complexity (single optional param, no output schema); covers the core caching behavior sufficiently without requiring return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so baseline is 3; description adds no parameter context but doesn't need to since force_refresh is self-explanatory in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific actions (refresh index, preload important pages) and distinguishes from siblings like refresh_docs_index by emphasizing preloading of content, not just index updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'warmup' and 'preload' terminology but lacks explicit when/when-not guidance contrasting with alternatives like refresh_docs_index or invalidate_docs_cache.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming your server lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.