forkast-mcp-docs
Server Details
MCP server for querying Forkast documentation
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

get_all_docs (grade B)
Retrieve every document in the Forkast documentation space
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose critical behavioral traits for a bulk operation: pagination behavior, potential size limits, whether full content or only metadata is returned, or the performance characteristics of retrieving 'every document'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action verb, zero redundancy. Every word earns its place by establishing scope (Forkast documentation space) and operation (retrieve all).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Functionally complete for the zero-parameter input, but insufficient given the lack of output schema. A bulk retrieval tool should ideally disclose return format (IDs vs. full objects) or warn about potentially large payloads.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, meeting the baseline score of 4. The input schema is empty (100% coverage vacuously), requiring no additional parameter clarification in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Retrieve') and resource ('every document in the Forkast documentation space'), but lacks explicit differentiation from siblings like 'get_doc' or the search functions. The scope implies bulk retrieval, but doesn't explicitly contrast with the specific/faceted alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through the scope 'every document,' suggesting unfiltered bulk retrieval. However, lacks explicit when-to-use guidance or warnings about using this vs. filtered alternatives (search_query/search_title) for large spaces.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
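Since `get_all_docs` takes no parameters, invoking it reduces to a bare MCP `tools/call` message. The sketch below builds that JSON-RPC 2.0 payload per the MCP specification; the transport details (the Streamable HTTP endpoint URL, headers) are omitted and would need to come from the server's listing.

```python
import json

# Build the JSON-RPC 2.0 "tools/call" message an MCP client would send
# to invoke get_all_docs. The tool takes no parameters, so "arguments"
# is an empty object. The request id (1 here) is arbitrary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_all_docs",
        "arguments": {},
    },
}

payload = json.dumps(request)
print(payload)
```

As the review above notes, the response size is undisclosed, so a client should be prepared for a large result set.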
get_doc (grade B)
Retrieve a single document by its Archbee document ID
| Name | Required | Description | Default |
|---|---|---|---|
| docId | Yes | The Archbee document ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. 'Retrieve' implies a read-only operation, but critical behavioral details are missing: does it error if the document is not found? What format is returned? Are there rate limits? Since no output schema exists, the absence of behavioral context is significant.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb, zero redundancy. Efficiently communicates the tool's purpose without filler words. Appropriate length for a single-parameter retrieval operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple retrieval tool with one required parameter, but incomplete given lack of output schema and annotations. Missing: error handling (404 vs null), return structure, or side effects. Sufficient for basic selection but not for confident invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (docId fully described). Description reinforces parameter purpose ('by its Archbee document ID') but adds minimal semantic value beyond the schema. Baseline 3 is appropriate given JSON schema already documents the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Retrieve') and resource ('single document') with specific scope mechanism ('by its Archbee document ID'). Implicitly distinguishes from siblings (search_query, search_title, get_all_docs) by specifying direct ID lookup, though it could explicitly mention when to prefer this over search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The requirement of 'docId' parameter implies usage (when you have the specific Archbee document ID), but lacks explicit guidance on when to use this versus search_query/search_title for finding documents by content or title. No explicit prerequisites or error conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
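The review flags that `get_doc` does not document its error behavior. A defensive client can still handle both failure channels MCP defines: a JSON-RPC `error` object and a tool-level `result.isError` flag. A minimal sketch, where `"doc-123"` is a placeholder ID (real Archbee document IDs would come from `get_all_docs` or the search tools):

```python
import json

def build_get_doc_call(doc_id: str, request_id: int = 2) -> str:
    """JSON-RPC 2.0 message for the get_doc tool (docId is required)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_doc", "arguments": {"docId": doc_id}},
    })

def tool_call_failed(response: dict) -> bool:
    """MCP reports tool-level failures via result.isError rather than a
    JSON-RPC error, so check both before trusting the content."""
    if "error" in response:
        return True
    return bool(response.get("result", {}).get("isError"))

# "doc-123" is a hypothetical ID used purely for illustration.
msg = build_get_doc_call("doc-123")
```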
search_query (grade B)
Full-text search across Forkast documentation content
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query text | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden but fails to specify return format (excerpts vs full docs), pagination behavior, result ranking, or search syntax support (wildcards, phrases). It also doesn't clarify whether the operation is read-only, though 'search' implies it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient seven-word sentence with no filler. It front-loads the key information (full-text search) and avoids redundancy with the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter) and complete schema documentation, the description is minimally adequate. However, for a search tool lacking an output schema, it could benefit from mentioning what results contain (e.g., snippets, relevance scores) or any query syntax limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the single parameter 'query' already described as 'Search query text'. The description adds no additional parameter semantics beyond the schema, which is acceptable given the high schema coverage and simple parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (full-text search), resource (Forkast documentation), and scope (content). However, it does not explicitly differentiate from the sibling tool 'search_title', which presumably performs title-only searches, though 'full-text' implies broader scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like 'search_title' (likely title-only) or 'get_doc' (direct retrieval). The description states what the tool does but not when to prefer it over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_title (grade A)
Search Forkast documentation by document title
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Title search text | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify matching behavior (partial vs exact), case sensitivity, or return format (whether it returns document IDs, titles, or full content).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence with no redundant words, front-loading the key information immediately and avoiding tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with complete schema coverage, the description adequately covers intent but lacks information about return values (critical given no output schema exists) and search result limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with 'Title search text' describing the query parameter. The description reinforces that this is a title search but does not add syntax details, formatting rules, or examples beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action (Search), resource (Forkast documentation), and specific scope (by document title), effectively distinguishing it from the sibling 'search_query' tool which implies general content searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'by document title' implicitly suggests when to use this tool (for title-specific searches), it does not explicitly contrast with siblings like 'search_query' or 'get_doc' regarding when to prefer each option.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
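Both search tools take a single `query` string, so the when-to-use guidance the reviews above ask for can be encoded in one dispatcher: title search when the agent already knows (part of) a document's title, full-text search otherwise. A sketch under that assumption:

```python
import json

def build_search_call(text: str, by_title: bool = False,
                      request_id: int = 3) -> str:
    """Choose between the two search tools: search_title when the agent
    already knows (part of) a document title, search_query for full-text
    matching over document content. Both take a single 'query' string."""
    name = "search_title" if by_title else "search_query"
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": {"query": text}},
    })

title_call = build_search_call("Getting Started", by_title=True)
content_call = build_search_call("rate limits")
```

Whether `search_title` matches partially or exactly is undocumented (see the disclosure note above), so an agent should not assume exact-match semantics.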
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
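Before publishing the claim file described above, it can be sanity-checked locally. The sketch below only validates the two fields this page documents ($schema and a matching maintainer email); the connector schema may allow other fields not covered here.

```python
import json

def validate_glama_json(raw: str, account_email: str) -> list:
    """Check the two fields documented for /.well-known/glama.json:
    the $schema pointer and a maintainers entry whose email matches
    the Glama account. Returns a list of problems (empty = OK)."""
    problems = []
    data = json.loads(raw)
    if data.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = data.get("maintainers", [])
    if not any(m.get("email") == account_email for m in maintainers):
        problems.append("no maintainer email matches the Glama account")
    return problems

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
issues = validate_glama_json(sample, "your-email@example.com")
```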
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.