ColdState Knowledge Search
Server Details
**ColdState Knowledge Search MCP Server**
https://github.com/daniel-coldstate/coldstate-mcp

Semantic search over 64.6M knowledge entries — the structured alternative to web search APIs and web scraping for LLM agents. No crawling, no rate limits, sub-3s responses. Cloud-hosted at services.coldstate.ai.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
6 tools

**coldstate_browse_documents** (Read-only, Idempotent)
Browse documents in a ColdState index. Returns titles, snippets, content, state classification, and E-scores.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max documents to return | |
| offset | No | Offset for pagination | |
| index_id | Yes | Index ID, e.g. idx_... | |
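As a sketch of how an agent would invoke this tool over the standard MCP JSON-RPC `tools/call` method, the request below uses the parameters from the table above; the index ID value is illustrative (real IDs come from coldstate_list_indexes):

```python
import json

# Hypothetical MCP "tools/call" request for coldstate_browse_documents.
# "idx_example" is a placeholder index ID, not a real one.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "coldstate_browse_documents",
        "arguments": {
            "index_id": "idx_example",  # required
            "limit": 20,                # optional: max documents to return
            "offset": 0,                # optional: pagination offset
        },
    },
}

print(json.dumps(request, indent=2))
```

The response would carry the documented fields (titles, snippets, content, state classification, E-scores) in the tool result payload.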
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint/idempotentHint, so the description adds valuable behavioral context by detailing the return payload structure (titles, snippets, content, state classification, E-scores). It does not contradict annotations. It could improve by mentioning pagination behavior or rate limit implications of large offsets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes the action and scope, second details the return values. Every word earns its place and the description is appropriately front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 3-parameter structure, good annotations, and lack of output schema, the description adequately compensates by detailing the return fields. It appropriately omits redundant pagination explanations (covered by schema) but could mention whether browsing is ordered or filtered by default.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full documentation for index_id, limit, and offset. The description references 'ColdState index' which maps to the index_id parameter, but otherwise relies on the schema for parameter semantics, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'browse' with clear resource 'documents' and scope 'ColdState index'. The verb choice implicitly distinguishes from sibling 'search' tools, and listing specific return fields (titles, snippets, E-scores) clarifies exactly what data is accessed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The verb 'browse' provides implied differentiation from sibling 'search' tools (coldstate_search, coldstate_search_global), suggesting use for exploration/listing versus querying. However, it lacks explicit when-to-use guidance or stated prerequisites (e.g., needing a valid index_id from list_indexes first).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
**coldstate_domains** (Read-only, Idempotent)
List all available knowledge domains in ColdState's global knowledge base with entry counts.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds that the tool returns 'entry counts' alongside domains, which provides useful payload context not in annotations. However, it omits details about return format, pagination, or what constitutes an 'entry'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with action verb ('List'), followed by resource identification, scope qualifier ('ColdState's global knowledge base'), and key behavioral detail ('with entry counts'). Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and read-only annotations covering safety profile, the description adequately covers the tool's purpose and key return characteristic (entry counts). Lacks output schema specification, but 'entry counts' provides sufficient hint for a simple listing operation of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (an empty object), which establishes the baseline score of 4 per the rubric. The description correctly omits parameter discussion since none exist, and no compensation for missing schema documentation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' with clear resource 'knowledge domains in ColdState's global knowledge base' and distinguishes from siblings by focusing on 'domains' versus documents, indexes, or search operations. The addition of 'with entry counts' specifies the scope of returned data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage (retrieving domain inventory with counts) but provides no explicit guidance on when to use this versus siblings like coldstate_search or coldstate_browse_documents. No 'when-not' or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
**coldstate_explain** (Read-only, Idempotent)
Explain the scoring breakdown for a specific document against a query. Shows token-level TF-IDF scores, match ratio, and exact-match bonus calculation.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The search query to explain against | |
| doc_id | Yes | Document ID, e.g. "doc_42" or "42" | |
| index_id | Yes | Index ID, e.g. idx_... | |
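A corresponding `tools/call` sketch for this tool follows; all three argument values (index ID, query, document ID) are illustrative, and the `doc_id` uses the `"doc_42"` form shown in the parameter table:

```python
import json

# Hypothetical MCP "tools/call" request for coldstate_explain.
# Per the table, doc_id accepts either "doc_42" or bare "42".
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "coldstate_explain",
        "arguments": {
            "index_id": "idx_example",
            "query": "protein folding",
            "doc_id": "doc_42",
        },
    },
}

print(json.dumps(request, indent=2))
```

The result would contain the scoring breakdown described above: token-level TF-IDF scores, match ratio, and the exact-match bonus calculation.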
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying exactly what the explanation contains (TF-IDF scores, match ratio calculations, exact-match bonuses), which helps the agent understand the return value structure despite no output schema being present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences with zero waste. The first sentence front-loads the core purpose, while the second enumerates specific calculation components. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and clear annotations, the description adequately covers the tool's purpose. It compensates for the missing output schema by detailing what scoring components will be explained (TF-IDF, ratios, bonuses). Minor gap: doesn't mention this is typically used after a search to debug rankings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all three parameters (index_id, query, doc_id). The description mentions 'specific document' and 'query' but adds no semantic details, validation rules, or format guidance beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (explain), resource (scoring breakdown), and specific components shown (token-level TF-IDF scores, match ratio, exact-match bonus). It effectively distinguishes this from sibling tools like coldstate_search by specifying this is for score explanation, not document retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it doesn't explicitly name sibling alternatives, the description provides clear context that this tool is for debugging/explaining relevance scores ('scoring breakdown'), implying use when analyzing why a document matched rather than searching. No explicit exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
**coldstate_list_indexes** (Read-only, Idempotent)
List all your ColdState indexes with their status, mode, document count, and domain preset.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only/idempotent safety profile. Description adds valuable behavioral context by disclosing return payload structure (status, mode, document count, domain preset), which compensates for missing output schema. Does not mention pagination, rate limits, or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with front-loaded verb. Every clause earns its place: 'all your' establishes scope, and the four field specifications provide necessary return-value documentation without redundancy. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter read operation. Field enumeration provides sufficient compensation for missing output schema. Annotations cover safety properties. Minor gap: does not specify return type structure (array vs object) or pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters are present. Per the scoring rules, a tool with no parameters receives the baseline score of 4. No parameter-level semantic enrichment is required or possible.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('ColdState indexes') with specific scope ('all'). Enumerating return fields (status, mode, document count, domain preset) implicitly distinguishes this metadata listing from document-oriented siblings like search/browse, though explicit differentiation from coldstate_domains is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or alternative recommendations provided. Given siblings include coldstate_domains and various search tools, the description misses opportunity to clarify whether to use this for inventory vs. domain management or when to prefer search over listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
**coldstate_search** (Read-only, Idempotent)
Search a ColdState index by collection name or index ID. Returns ranked results with E-scores and state classification (CRYSTALLINE/FLUID/REACTIVE). Provide exactly one of collection or index_id.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return | |
| query | Yes | The search query | |
| offset | No | Offset for pagination | |
| index_id | No | Index ID to search, e.g. idx_... (mutually exclusive with collection) | |
| collection | No | Collection name to search (mutually exclusive with index_id) | |
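Because neither `collection` nor `index_id` is marked required in the schema, the "provide exactly one" constraint must be enforced by the caller. A minimal client-side guard, sketched here with illustrative argument values, might look like:

```python
def build_search_args(query, collection=None, index_id=None,
                      limit=None, offset=None):
    """Assemble arguments for coldstate_search, enforcing the documented
    constraint that exactly one of `collection` or `index_id` is given."""
    if (collection is None) == (index_id is None):
        # Either both were supplied or neither was: both violate the rule.
        raise ValueError("Provide exactly one of collection or index_id")
    args = {"query": query}
    if collection is not None:
        args["collection"] = collection
    else:
        args["index_id"] = index_id
    if limit is not None:
        args["limit"] = limit
    if offset is not None:
        args["offset"] = offset
    return args

# Valid: collection only. Invalid calls raise ValueError before any network I/O.
print(build_search_args("quantum annealing", collection="physics-notes"))
```

Failing fast locally avoids burning a round trip on a request the server would reject.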
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent status, so description appropriately focuses on return value characteristics instead. It discloses that results include 'E-scores and state classification (CRYSTALLINE/FLUID/REACTIVE)'—critical context absent from both annotations and the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loads purpose and return format; second states the critical mutual exclusivity constraint. No redundant words or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema provided, the description compensates by detailing the return value structure (E-scores, classifications). Combined with 100% schema coverage and clear behavioral annotations, this provides sufficient context for tool invocation, though it could note pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema has 100% coverage, the description adds crucial constraint logic: 'Provide exactly one' clarifies that these mutually exclusive parameters are collectively required (despite neither being in schema's required array), and emphasizes the alternative identification methods (name vs ID).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Search' + resource 'ColdState index' with clear scope limitation to a specific collection/index. The mention of 'by collection name or index ID' effectively distinguishes this from the sibling coldstate_search_global tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context that this targets a specific index (vs global), but does not explicitly name sibling alternatives. The constraint 'Provide exactly one of collection or index_id' provides essential usage instruction for the mutual exclusivity requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
**coldstate_search_global** (Read-only, Idempotent)
Search ColdState's global knowledge base (64M+ entries across 35+ domains including SCIENCE, MEDICINE, TECHNOLOGY, HISTORY, etc). Returns ranked results with E-scores, QST semantic topology scoring, and state classification. Optionally filter by domain.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return | |
| query | Yes | The search query | |
| domain | No | Filter by knowledge domain (e.g. MEDICINE, SCIENCE, TECHNOLOGY, HISTORY, LAW, CODE). Case-insensitive. | |
| offset | No | Offset for pagination | |
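Only `query` is required here, and the domain filter is documented as case-insensitive. The helper below is a sketch of assembling the arguments; upper-casing the domain is optional and merely matches the documented examples (MEDICINE, SCIENCE, TECHNOLOGY):

```python
def build_global_search_args(query, domain=None):
    """Assemble arguments for coldstate_search_global.
    Only `query` is required; `domain` is an optional filter."""
    args = {"query": query}
    if domain is not None:
        # Case-insensitive on the server side; normalized here only for
        # consistency with the documented example values.
        args["domain"] = domain.upper()
    return args

print(build_global_search_args("mRNA vaccine mechanisms", domain="medicine"))
```

Limit and offset could be attached the same way as in a per-index search when paginating through large result sets.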
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety profile, freeing the description to focus on return value semantics. Description adds valuable behavioral context about response structure (E-scores, QST semantic topology scoring, state classification) and dataset scale that helps set result quality expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. First sentence front-loads action, scope, and return value types; second sentence covers optional filtering. Zero redundant information despite conveying scale metrics and technical scoring details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, description adequately explains return value characteristics (scoring methodologies). With 100% input schema coverage and annotations handling safety profile, the description provides sufficient context for agent invocation decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing complete parameter documentation. Description adds minimal semantic value beyond schema ('Optionally filter by domain'), which is appropriate given the schema already fully defines query, limit, offset, and domain parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches ColdState's 'global knowledge base' with specific scope (64M+ entries, 35+ domains), distinguishing it from sibling 'coldstate_search'. It specifies unique return value characteristics (E-scores, QST semantic topology scoring, state classification) that differentiate it from 'coldstate_browse_documents'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context through scale description ('64M+ entries') and 'global' scope, suggesting use for broad cross-domain searches. However, lacks explicit when-to-use guidance versus siblings like 'coldstate_search' or 'coldstate_browse_documents', and does not mention when domain filtering is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
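Before publishing, a quick local sanity check of the claim file can catch a malformed structure. This is an informal sketch, not an official validator; it only checks the two fields shown above:

```python
import json

# Parse the claim-file content as it would appear at /.well-known/glama.json.
claim = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# Minimal structural checks: schema URL present, every maintainer has an email.
assert claim["$schema"].endswith("/connector.json")
assert claim["maintainers"], "at least one maintainer is required"
assert all("@" in m["email"] for m in claim["maintainers"])
print("claim file looks well-formed")
```

Remember to replace the placeholder address with the email tied to your Glama account, or verification will fail.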
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect successfully to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.