mcp-server

Server Details
Your company's brain for AI agents. Cited, permission-aware knowledge across every system.

- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Quelvio/quelvio-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.7/5 across 3 of 3 tools scored.
Each tool serves a distinct purpose: query_knowledge for searching, list_domains for exploring available domains, and get_source_detail for retrieving citation provenance. No functional overlap.
All tool names follow a consistent verb_noun pattern in snake_case (get_source_detail, list_domains, query_knowledge), making them predictable and easy to distinguish.
Three tools is an ideal size for this domain—covering search, domain listing, and source verification—without being too sparse or bloated.
The tool set covers the full query-and-verify workflow: explore domains, search, and retrieve provenance. A minor gap is the lack of a tool to list previous queries, but the core lifecycle is complete.
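The query-and-verify lifecycle above (explore domains, search, retrieve provenance) can be sketched as three MCP `tools/call` requests. This is a minimal illustration assuming JSON-RPC 2.0 framing per the MCP specification; the `tool_call` helper, the example query, the domain slug, and the placeholder query_id are all hypothetical. A real agent would take the query_id from the structured metadata of the `query_knowledge` response.

```python
def tool_call(request_id, name, arguments):
    """Build a JSON-RPC 2.0 tools/call request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1. Explore: which domains are indexed, and how well covered? (zero tokens)
explore = tool_call(1, "list_domains", {"coverage_filter": "expert,partial"})

# 2. Search: query within a domain slug returned by list_domains.
search = tool_call(2, "query_knowledge", {
    "query": "What is our incident escalation policy?",
    "domain": "engineering.platform",  # slug from step 1
    "mode": "standard",
    "max_sources": 5,
})

# 3. Verify: fetch per-chunk provenance for the answer's citations.
verify = tool_call(3, "get_source_detail", {
    "query_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
})
```

Transport and authentication are omitted; over Streamable HTTP these payloads would be POSTed to the server's MCP endpoint.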
Available Tools
3 tools

get_source_detail (Read-only, Idempotent)
Return per-chunk source provenance for a previous query — document path, lifecycle state, embedding timestamp, contributor, last-updated — useful for verifying a citation or surfacing trust signals to a downstream system. Pass a query_id returned by an earlier query_knowledge call. Returns 404 if the query_id is unknown OR belongs to a different tenant (indistinguishable to prevent info-leak). Zero Knowledge Tokens consumed.
| Name | Required | Description | Default |
|---|---|---|---|
| query_id | Yes | UUID returned in the structured-metadata block of a prior `query_knowledge` response. Tenant-scoped; IDs from another tenant return 404. | (none) |
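Since an unknown and a cross-tenant query_id are deliberately indistinguishable (both 404), the only check a client can make locally is that the value parses as a UUID. A hypothetical guard along those lines, using only the standard library:

```python
import uuid

def valid_query_id(query_id: str) -> bool:
    """True if query_id parses as a UUID. The server still returns 404
    for unknown or cross-tenant IDs, indistinguishably."""
    try:
        uuid.UUID(query_id)
        return True
    except ValueError:
        return False
```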
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses beyond annotations: returns 404 for unknown/tenant mismatch to prevent info-leak, and states zero tokens consumed. Annotations already indicate read-only and non-destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences front-loading the core purpose, with no redundant or vague language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers what the tool returns (list of fields), how to use it (query_id), and error behavior (404). No output schema, but description sufficiently covers typical needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema already describes parameter well. Description adds context about prior call and tenant-scoping, but does not significantly extend schema info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Return') and resource ('per-chunk source provenance'), lists exact fields, and distinguishes from siblings by referencing 'previous query' from query_knowledge.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to pass a query_id from a prior query_knowledge call. Does not mention when not to use or alternatives, but context is clear given sibling tool set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_domains (Read-only, Idempotent)
List the taxonomy domains the company has indexed — with document counts, expert counts, and coverage levels — so an agent can decide whether to query before spending a Knowledge Token. Returns one row per domain with the canonical taxonomy_domain slug, document/chunk counts, expert count, coverage level (expert | partial | none), the single_expert risk flag, and the top contributor by authority. Use the slug as the domain filter on a follow-up query_knowledge call. Zero Knowledge Tokens consumed.
| Name | Required | Description | Default |
|---|---|---|---|
| coverage_filter | No | Comma-separated subset of `expert`, `partial`, `none`. Unknown tokens return 400. | all three |
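Because the server rejects unknown coverage_filter tokens with a 400, a client may want to mirror that contract before making the call. A hypothetical sketch of that check:

```python
ALLOWED_COVERAGE = {"expert", "partial", "none"}

def parse_coverage_filter(raw):
    """Split a comma-separated coverage_filter; None means all three.
    Raises ValueError for tokens the server would reject with a 400."""
    if raw is None:
        return set(ALLOWED_COVERAGE)
    tokens = {t.strip() for t in raw.split(",") if t.strip()}
    unknown = tokens - ALLOWED_COVERAGE
    if unknown:
        raise ValueError(f"unknown coverage tokens: {sorted(unknown)}")
    return tokens
```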
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that the tool is read-only and non-destructive (consistent with annotations), adds details about return fields (document counts, expert counts, coverage levels, risk flag, top contributor), and confirms zero token cost, which is not in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured, front-loading the main purpose, and every sentence provides essential information without redundancy. It is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, good annotations, no output schema), the description fully covers return fields, usage intent, and expected behavior. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides a full description of the coverage_filter parameter, including default and error handling. The tool description repeats this information without adding new semantics, so it meets the baseline but does not exceed it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists taxonomy domains with counts and coverage levels. It differentiates from sibling tools by explicitly mentioning using the slug for follow-up query_knowledge calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly guides the agent to use this tool before spending a Knowledge Token, states zero token consumption, and explains how to use the coverage_filter parameter. Also advises using the domain slug in a follow-up query_knowledge call.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_knowledge (Read-only, Idempotent)
Search the company's connected knowledge across every source — Drive, SharePoint, Confluence, Slack, Notion — with cited answers, lifecycle awareness, and refusal-on-weak-context. Returns ranked chunks with source attribution, authority scores, and coverage level. Use mode=synthesis_lite (Qwen3.5 Flash) or mode=synthesis_pro (Qwen3 Max) for a written answer with [n] citations; use the default standard for a structured chunk list. quick is faster + cheaper, deep is slower + thorough. Synthesis modes consume more Knowledge Tokens than structured modes — pick the cheapest mode that answers the question. Responses are capped at 25,000 tokens per Claude Connectors policy; if the response is truncated, structured metadata carries truncated: true and query_id so the agent can call get_source_detail for full provenance.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | One of `quick`, `standard`, `deep`, `synthesis_lite`, `synthesis_pro`. Synthesis modes return a written answer with citations; structured modes return chunks only. | `standard` |
| query | Yes | Natural-language query (1–2000 characters). Be specific: results are ranked by authority + relevance, not keyword overlap. | (none) |
| domain | No | Optional taxonomy domain filter (e.g. `engineering.platform`). Use `list_domains` to discover valid values for the tenant. | (none) |
| max_sources | No | Number of source chunks to return (1–20). The 25K token cap may force fewer results regardless of this value. | 5 |
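The constraints in the table (query length 1–2000, max_sources 1–20, five documented modes) can be applied client-side before the call. The helper below is a hypothetical sketch, not part of the server's API; clamping max_sources rather than erroring is one possible choice, since the 25K cap may trim results further anyway.

```python
MODES = {"quick", "standard", "deep", "synthesis_lite", "synthesis_pro"}

def build_query_args(query, mode="standard", domain=None, max_sources=5):
    """Assemble query_knowledge arguments, enforcing documented limits."""
    if not 1 <= len(query) <= 2000:
        raise ValueError("query must be 1-2000 characters")
    if mode not in MODES:
        raise ValueError(f"mode must be one of {sorted(MODES)}")
    args = {
        "query": query,
        "mode": mode,
        # Clamp to the documented 1-20 range instead of raising.
        "max_sources": max(1, min(20, max_sources)),
    }
    if domain is not None:
        args["domain"] = domain  # slug discovered via list_domains
    return args
```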
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, open-world, non-destructive behavior. Description adds transparency about response ranking by authority+relevance, 25K token cap, truncated flag with query_id, and refusal on weak context, without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is information-dense but not overly verbose. Each sentence adds value: purpose, mode options, cost, token cap, and sibling references. Slight room for trimming but well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all necessary context: multi-source search, mode selection, cost, token limits, return values (chunks, citations, truncated flag), and lifecycle awareness. Despite no output schema, description adequately explains expected response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds meaning by explaining mode purposes (written answer vs chunks), cost differences, and token cap interaction with max_sources, enriching parameter semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches company knowledge across multiple sources with cited answers. It distinguishes itself from siblings 'get_source_detail' and 'list_domains' by mentioning they provide full provenance or domain listing, respectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use each mode (synthesis vs standard, quick vs deep), cost considerations, and token cap behavior. Mentions fallback to 'get_source_detail' for truncated responses, aiding correct invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.