Glama
Ownership verified

Server Details

**ColdState Knowledge Search MCP Server** https://github.com/daniel-coldstate/coldstate-mcp Semantic search over 64.6M knowledge entries — the structured alternative to web search APIs and web scraping for LLM agents. No crawling, no rate limits, sub-3s responses. Cloud-hosted at services.coldstate.ai
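Since the server is exposed over MCP's Streamable HTTP transport, a client talks to it by POSTing JSON-RPC 2.0 messages to the endpoint. A minimal sketch of the handshake payloads follows; the endpoint path and protocol version string are assumptions for illustration, not confirmed by this listing:

```python
import json

# The listing gives only the hostname; the /mcp path below is a hypothetical guess.
ENDPOINT = "https://services.coldstate.ai/mcp"

def jsonrpc(method: str, params: dict, id_: int) -> str:
    """Serialize a JSON-RPC 2.0 request body for the Streamable HTTP transport."""
    return json.dumps({"jsonrpc": "2.0", "id": id_, "method": method, "params": params})

# 1. initialize: negotiate protocol version and capabilities (version string assumed).
init = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
}, id_=1)

# 2. tools/list: discover the tools described on this page.
list_tools = jsonrpc("tools/list", {}, id_=2)
```

Each body would be sent as an HTTP POST to `ENDPOINT`; the Glama gateway can sit in front of this exchange to log the calls.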

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client ↔ Glama ↔ MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

6 tools
coldstate_browse_documents
Read-only · Idempotent

Browse documents in a ColdState index. Returns titles, snippets, content, state classification, and E-scores.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max documents to return | |
| offset | No | Offset for pagination | |
| index_id | Yes | Index ID, e.g. idx_... | |
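Because `limit` and `offset` are plain pagination parameters, an agent can page through an index by advancing `offset` by `limit` on each call. A sketch of the `tools/call` argument payloads, with a placeholder index ID:

```python
def browse_page_args(index_id: str, page: int, limit: int = 20) -> dict:
    """Arguments for one coldstate_browse_documents call, paging by offset."""
    return {
        "name": "coldstate_browse_documents",
        "arguments": {
            "index_id": index_id,
            "limit": limit,
            "offset": page * limit,  # page 0 -> offset 0, page 1 -> offset 20, ...
        },
    }

# First two pages of a hypothetical index:
page0 = browse_page_args("idx_example", page=0)
page1 = browse_page_args("idx_example", page=1)
```

Each dict would go in the `params` of a `tools/call` JSON-RPC request; the review below notes the description does not say how large offsets interact with rate limits, so conservative page sizes are the safer default.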
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint/idempotentHint, freeing the description to add valuable behavioral context by detailing the return payload structure (titles, snippets, content, state classification, E-scores). It does not contradict the annotations. It could improve by mentioning pagination behavior or the rate-limit implications of large offsets.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes the action and scope, second details the return values. Every word earns its place and the description is appropriately front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter structure, good annotations, and lack of output schema, the description adequately compensates by detailing the return fields. It appropriately omits redundant pagination explanations (covered by schema) but could mention whether browsing is ordered or filtered by default.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing full documentation for index_id, limit, and offset. The description references 'ColdState index' which maps to the index_id parameter, but otherwise relies on the schema for parameter semantics, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'browse' with clear resource 'documents' and scope 'ColdState index'. The verb choice implicitly distinguishes from sibling 'search' tools, and listing specific return fields (titles, snippets, E-scores) clarifies exactly what data is accessed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The verb 'browse' provides implied differentiation from sibling 'search' tools (coldstate_search, coldstate_search_global), suggesting use for exploration/listing versus querying. However, it lacks explicit when-to-use guidance or stated prerequisites (e.g., needing a valid index_id from list_indexes first).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

coldstate_domains
Read-only · Idempotent

List all available knowledge domains in ColdState's global knowledge base with entry counts.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds that the tool returns 'entry counts' alongside domains, which provides useful payload context not in annotations. However, it omits details about return format, pagination, or what constitutes an 'entry'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with action verb ('List'), followed by resource identification, scope qualifier ('ColdState's global knowledge base'), and key behavioral detail ('with entry counts'). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and read-only annotations covering safety profile, the description adequately covers the tool's purpose and key return characteristic (entry counts). Lacks output schema specification, but 'entry counts' provides sufficient hint for a simple listing operation of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters (empty object), establishing baseline 4 per rubric. The description correctly omits parameter discussion since none exist, and requires no compensation for missing schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'List' with clear resource 'knowledge domains in ColdState's global knowledge base' and distinguishes from siblings by focusing on 'domains' versus documents, indexes, or search operations. The addition of 'with entry counts' specifies the scope of returned data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage (retrieving domain inventory with counts) but provides no explicit guidance on when to use this versus siblings like coldstate_search or coldstate_browse_documents. No 'when-not' or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

coldstate_explain
Read-only · Idempotent

Explain the scoring breakdown for a specific document against a query. Shows token-level TF-IDF scores, match ratio, and exact-match bonus calculation.
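The server's exact formulas are not published on this page, so the following is only a generic illustration of the three scoring components the description names — token-level TF-IDF, match ratio, and an exact-match bonus — in their conventional textbook form, over a toy corpus:

```python
import math

def tfidf(term: str, doc: list[str], corpus: list[list[str]]) -> float:
    """Classic tf * log(N/df) score for one query token against one document."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return (tf * math.log(len(corpus) / df)) if df else 0.0

# Toy corpus of tokenized documents (purely illustrative data).
corpus = [["cold", "storage", "index"], ["warm", "cache"], ["cold", "start"]]
query = ["cold", "storage"]
doc = corpus[0]

per_token = {t: tfidf(t, doc, corpus) for t in query}          # token-level scores
match_ratio = sum(1 for t in query if t in doc) / len(query)   # fraction of query tokens matched
exact_bonus = 0.5 if " ".join(query) in " ".join(doc) else 0.0 # bonus when the phrase appears verbatim
```

In this toy example "storage" outscores "cold" because it appears in fewer documents (higher IDF); coldstate_explain presumably reports an analogous per-token breakdown for its own index statistics.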

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | The search query to explain against | |
| doc_id | Yes | Document ID, e.g. "doc_42" or "42" | |
| index_id | Yes | Index ID, e.g. idx_... | |
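The schema says `doc_id` accepts either the prefixed form ("doc_42") or the bare number ("42"). A client that wants to canonicalize IDs before calling could use a small helper like this one; it is illustrative only, not part of the server API, since the server already accepts both forms:

```python
def normalize_doc_id(raw: str) -> str:
    """Normalize a ColdState document ID to the prefixed form.

    Hypothetical client-side helper: the server accepts "doc_42" and "42"
    interchangeably per the schema, so this is optional hygiene.
    """
    return raw if raw.startswith("doc_") else f"doc_{raw}"
```

With IDs normalized, a `tools/call` to coldstate_explain needs only `query`, `doc_id`, and `index_id`.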
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying exactly what the explanation contains (TF-IDF scores, match ratio calculations, exact-match bonuses), which helps the agent understand the return value structure despite no output schema being present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences with zero waste. The first sentence front-loads the core purpose, while the second enumerates specific calculation components. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and clear annotations, the description adequately covers the tool's purpose. It compensates for the missing output schema by detailing what scoring components will be explained (TF-IDF, ratios, bonuses). Minor gap: doesn't mention this is typically used after a search to debug rankings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all three parameters (index_id, query, doc_id). The description mentions 'specific document' and 'query' but adds no semantic details, validation rules, or format guidance beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (explain), resource (scoring breakdown), and specific components shown (token-level TF-IDF scores, match ratio, exact-match bonus). It effectively distinguishes this from sibling tools like coldstate_search by specifying this is for score explanation, not document retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it doesn't explicitly name sibling alternatives, the description provides clear context that this tool is for debugging/explaining relevance scores ('scoring breakdown'), implying use when analyzing why a document matched rather than searching. No explicit exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

coldstate_list_indexes
Read-only · Idempotent

List all your ColdState indexes with their status, mode, document count, and domain preset.
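A typical agent flow is to call this tool first and pick an index ID for subsequent browse or search calls. Since the listing publishes no output schema, the field names in this sketch are assumptions inferred from the description (status, mode, document count, domain preset), not a documented response shape:

```python
# Hypothetical response items for coldstate_list_indexes; field names are
# assumptions based on the description, not a published output schema.
indexes = [
    {"id": "idx_a1", "status": "ready", "mode": "hybrid",
     "document_count": 120, "domain_preset": "SCIENCE"},
    {"id": "idx_b2", "status": "building", "mode": "hybrid",
     "document_count": 0, "domain_preset": "LAW"},
]

# Select an index_id suitable for a follow-up coldstate_browse_documents call.
ready_ids = [ix["id"] for ix in indexes if ix["status"] == "ready"]
```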

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety profile. Description adds valuable behavioral context by disclosing return payload structure (status, mode, document count, domain preset), which compensates for missing output schema. Does not mention pagination, rate limits, or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with front-loaded verb. Every clause earns its place: 'all your' establishes scope, and the four field specifications provide necessary return-value documentation without redundancy. No waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter read operation. Field enumeration provides sufficient compensation for missing output schema. Annotations cover safety properties. Minor gap: does not specify return type structure (array vs object) or pagination behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present. Per scoring rules, 0 params = baseline 4. No parameter semantic enrichment required or possible.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('List') and resource ('ColdState indexes') with specific scope ('all'). Enumerating return fields (status, mode, document count, domain preset) implicitly distinguishes this metadata listing from document-oriented siblings like search/browse, though explicit differentiation from coldstate_domains is absent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance or alternative recommendations provided. Given that siblings include coldstate_domains and various search tools, the description misses the opportunity to clarify whether to use this for inventory vs. domain management, or when to prefer search over listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

coldstate_search_global
Read-only · Idempotent

Search ColdState's global knowledge base (64M+ entries across 35+ domains including SCIENCE, MEDICINE, TECHNOLOGY, HISTORY, etc). Returns ranked results with E-scores, QST semantic topology scoring, and state classification. Optionally filter by domain.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return | |
| query | Yes | The search query | |
| domain | No | Filter by knowledge domain (e.g. MEDICINE, SCIENCE, TECHNOLOGY, HISTORY, LAW, CODE). Case-insensitive. | |
| offset | No | Offset for pagination | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety profile, freeing the description to focus on return value semantics. Description adds valuable behavioral context about response structure (E-scores, QST semantic topology scoring, state classification) and dataset scale that helps set result quality expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. First sentence front-loads action, scope, and return value types; second sentence covers optional filtering. Zero redundant information despite conveying scale metrics and technical scoring details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, description adequately explains return value characteristics (scoring methodologies). With 100% input schema coverage and annotations handling safety profile, the description provides sufficient context for agent invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete parameter documentation. Description adds minimal semantic value beyond schema ('Optionally filter by domain'), which is appropriate given the schema already fully defines query, limit, offset, and domain parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool searches ColdState's 'global knowledge base' with specific scope (64M+ entries, 35+ domains), distinguishing it from sibling 'coldstate_search'. It specifies unique return value characteristics (E-scores, QST semantic topology scoring, state classification) that differentiate it from 'coldstate_browse_documents'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through scale description ('64M+ entries') and 'global' scope, suggesting use for broad cross-domain searches. However, lacks explicit when-to-use guidance versus siblings like 'coldstate_search' or 'coldstate_browse_documents', and does not mention when domain filtering is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
