
Server Details

MCP server for querying Forkast documentation

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
get_all_docs (Grade: B)

Retrieve every document in the Forkast documentation space

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, yet the description fails to disclose critical behavioral traits for a bulk operation: pagination behavior, potential size/limits, whether full content or metadata is returned, or performance characteristics of retrieving 'every document'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action verb, zero redundancy. Every word earns its place by establishing scope (Forkast documentation space) and operation (retrieve all).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Functionally complete for the zero-parameter input, but insufficient given the lack of output schema. A bulk retrieval tool should ideally disclose return format (IDs vs. full objects) or warn about potentially large payloads.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, meeting the baseline score of 4. The input schema is empty (100% coverage vacuously), requiring no additional parameter clarification in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Retrieve') and resource ('every document in the Forkast documentation space'), but lacks explicit differentiation from siblings like 'get_doc' or the search functions. The scope implies bulk retrieval, but doesn't explicitly contrast with the specific/faceted alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage guidance through the scope 'every document,' suggesting unfiltered bulk retrieval. However, lacks explicit when-to-use guidance or warnings about using this vs. filtered alternatives (search_query/search_title) for large spaces.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
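Taken together, the critiques above sketch what a stronger description would look like. The following is a hypothetical revision (illustrative only, not the server's actual definition — the claims about return format and pagination are assumptions for the example):

```python
# Hypothetical revision of the get_all_docs tool definition, incorporating
# the review's suggestions: disclose read-only behavior, return format,
# payload size, and when to prefer the filtered search tools instead.
revised_tool = {
    "name": "get_all_docs",
    "description": (
        "Retrieve every document in the Forkast documentation space. "
        "Read-only; returns full document objects (ID, title, content) in a "
        "single unpaginated response, which may be large. Prefer search_query "
        "or search_title when looking for specific documents."
    ),
    "inputSchema": {"type": "object", "properties": {}, "required": []},
}

# The revision now touches the low-scoring dimensions:
assert "Read-only" in revised_tool["description"]            # Behavior
assert "Prefer search_query" in revised_tool["description"]  # Usage Guidelines
```

A description like this stays a few sentences long (preserving the 5/5 Conciseness score) while answering the questions the review flags as missing.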

get_doc (Grade: B)

Retrieve a single document by its Archbee document ID

Parameters (JSON Schema)

docId (required): The Archbee document ID
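For context, MCP clients invoke tools through the JSON-RPC tools/call method, passing the schema's parameters as arguments. A request to this tool might look like the following sketch (the docId value is a made-up placeholder, not a real ID):

```python
import json

# A JSON-RPC 2.0 "tools/call" request, the standard MCP invocation shape.
# The docId value is a hypothetical placeholder for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_doc",
        "arguments": {"docId": "example-doc-id"},
    },
}

print(json.dumps(request, indent=2))
```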
Behavior: 2/5

No annotations are provided, so the description carries the full disclosure burden. 'Retrieve' implies a read-only operation, but critical behavioral details are missing: does it error if the document is not found? What format is returned? Are there rate limits? Since no output schema exists, the absence of behavioral context is significant.

Conciseness: 5/5

A single sentence, front-loaded with the action verb, with zero redundancy. It efficiently communicates the tool's purpose without filler words, at an appropriate length for a single-parameter retrieval operation.

Completeness: 3/5

Adequate for a simple retrieval tool with one required parameter, but incomplete given the lack of an output schema and annotations. Missing: error handling (404 vs null), return structure, and side effects. Sufficient for basic selection but not for confident invocation.

Parameters: 3/5

Schema coverage is 100% (docId is fully described). The description reinforces the parameter's purpose ('by its Archbee document ID') but adds minimal semantic value beyond the schema. The baseline score of 3 is appropriate given that the JSON schema already documents the parameter.

Purpose: 4/5

Clear verb ('Retrieve') and resource ('single document') with a specific scope mechanism ('by its Archbee document ID'). It implicitly distinguishes itself from its siblings (search_query, search_title, get_all_docs) by specifying direct ID lookup, though it could explicitly mention when to prefer this over the search operations.

Usage Guidelines: 3/5

The required 'docId' parameter implies when to use the tool (when you already have a specific Archbee document ID), but there is no explicit guidance on when to use it versus search_query/search_title for finding documents by content or title. No prerequisites or error conditions are mentioned.

search_query (Grade: B)

Full-text search across Forkast documentation content

Parameters (JSON Schema)

query (required): Search query text
Behavior: 2/5

With no annotations provided, the description carries the full disclosure burden but fails to specify return format (excerpts vs full docs), pagination behavior, result ranking, or search syntax support (wildcards, phrases). It also doesn't state that the operation is read-only, though 'search' implies it.

Conciseness: 5/5

The description is a single, efficient sentence with no filler. It front-loads the key information (full-text search) and avoids redundancy with the schema.

Completeness: 3/5

Given the tool's simplicity (one required parameter) and complete schema documentation, the description is minimally adequate. However, for a search tool lacking an output schema, it could benefit from mentioning what results contain (e.g., snippets, relevance scores) or any query syntax limitations.

Parameters: 3/5

Schema coverage is 100%, with the single parameter 'query' already described as 'Search query text'. The description adds no parameter semantics beyond the schema, which is acceptable given the high schema coverage and the simple parameter.

Purpose: 4/5

The description clearly states the action (full-text search), resource (Forkast documentation), and scope (content). However, it does not explicitly differentiate itself from the sibling tool 'search_title', which presumably performs title-only searches, though 'full-text' implies a broader scope.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus siblings like 'search_title' (likely title-only) or 'get_doc' (direct retrieval). The description states what the tool does but not when to prefer it over alternatives.

search_title (Grade: A)

Search Forkast documentation by document title

Parameters (JSON Schema)

query (required): Title search text
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify matching behavior (partial vs exact), case sensitivity, or return format (whether it returns document IDs, titles, or full content).

Conciseness: 5/5

The description is a single, focused sentence with no redundant words, front-loading the key information immediately and avoiding tautology.

Completeness: 3/5

For a single-parameter tool with complete schema coverage, the description adequately covers intent but lacks information about return values (critical given that no output schema exists) and search result limitations.

Parameters: 3/5

The schema has 100% coverage, with 'Title search text' describing the query parameter. The description reinforces that this is a title search but does not add syntax details, formatting rules, or examples beyond the schema definition.

Purpose: 5/5

The description clearly identifies the action (Search), resource (Forkast documentation), and specific scope (by document title), effectively distinguishing it from the sibling 'search_query' tool, which implies general content searching.

Usage Guidelines: 3/5

While 'by document title' implicitly suggests when to use this tool (for title-specific searches), it does not explicitly contrast with siblings like 'search_query' or 'get_doc' regarding when to prefer each option.
