mcp-server

Server Details

Your company's brain for AI agents. Cited, permission-aware knowledge across every system.

Status: Healthy
Transport: Streamable HTTP
Repository: Quelvio/quelvio-mcp-server
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
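
As a rough sketch of what a connection looks like in code, the snippet below uses the official MCP Python SDK (`pip install mcp`) to open a Streamable HTTP session and list this server's tools. The gateway URL is a placeholder; the real endpoint is issued when you create a connector.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint -- substitute the URL issued by your Glama connector.
GATEWAY_URL = "https://example.invalid/mcp"

async def main() -> None:
    async with streamablehttp_client(GATEWAY_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```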

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.7/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool serves a distinct purpose: query_knowledge for searching, list_domains for exploring available domains, and get_source_detail for retrieving citation provenance. No functional overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (get_source_detail, list_domains, query_knowledge), making them predictable and easy to distinguish.

Tool Count: 5/5

Three tools is an ideal size for this domain—covering search, domain listing, and source verification—without being too sparse or bloated.

Completeness: 4/5

The tool set covers the full query-and-verify workflow: explore domains, search, and retrieve provenance. A minor gap is the lack of a tool to list previous queries, but the core lifecycle is complete.

Available Tools

3 tools
get_source_detail: A
Read-only · Idempotent

Return per-chunk source provenance for a previous query — document path, lifecycle state, embedding timestamp, contributor, last-updated — useful for verifying a citation or surfacing trust signals to a downstream system. Pass a query_id returned by an earlier query_knowledge call. Returns 404 if the query_id is unknown OR belongs to a different tenant (indistinguishable to prevent info-leak). Zero Knowledge Tokens consumed.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query_id | Yes | UUID returned in the structured-metadata block of a prior `query_knowledge` response. Tenant-scoped; unknown or cross-tenant IDs return 404. | (none) |
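
As an illustrative sketch, reusing a `session` opened as in the connection example above, a call might look like this; the error handling mirrors the 404 behavior described in the tool description.

```python
from mcp import ClientSession

async def fetch_provenance(session: ClientSession, query_id: str):
    """Fetch per-chunk provenance for a prior query_knowledge call.

    query_id is the UUID from the structured-metadata block of an
    earlier query_knowledge response.
    """
    result = await session.call_tool("get_source_detail", {"query_id": query_id})
    if result.isError:
        # Unknown and cross-tenant IDs are deliberately indistinguishable:
        # both surface as a 404-style error.
        raise LookupError(f"query_id {query_id!r} not found for this tenant")
    return result.content  # content blocks carrying the provenance fields
```
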
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses beyond annotations: returns 404 for unknown/tenant mismatch to prevent info-leak, and states zero tokens consumed. Annotations already indicate read-only and non-destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-loading the core purpose, with no redundant or vague language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers what the tool returns (list of fields), how to use it (query_id), and error behavior (404). No output schema, but description sufficiently covers typical needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and schema already describes parameter well. Description adds context about prior call and tenant-scoping, but does not significantly extend schema info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Return') and resource ('per-chunk source provenance'), lists exact fields, and distinguishes from siblings by referencing 'previous query' from query_knowledge.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to pass a query_id from a prior query_knowledge call. Does not mention when not to use or alternatives, but context is clear given sibling tool set.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_domains: A
Read-only · Idempotent

List the taxonomy domains the company has indexed — with document counts, expert counts, and coverage levels — so an agent can decide whether to query before spending a Knowledge Token. Returns one row per domain with the canonical taxonomy_domain slug, document/chunk counts, expert count, coverage level (expert | partial | none), the single_expert risk flag, and the top contributor by authority. Use the slug as the domain filter on a follow-up query_knowledge call. Zero Knowledge Tokens consumed.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| coverage_filter | No | Optional comma-separated subset of `expert,partial,none`. Unknown tokens return a 400 error. | all three |
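
A minimal sketch of a call, again assuming a `session` from the connection example; the coverage_filter value is one valid subset per the schema above.

```python
from mcp import ClientSession

async def explore_domains(session: ClientSession):
    """List indexed domains before spending a Knowledge Token.

    Restricts results to domains with expert or partial coverage;
    omit coverage_filter to get all three levels.
    """
    result = await session.call_tool(
        "list_domains",
        {"coverage_filter": "expert,partial"},
    )
    # Each row includes the canonical taxonomy_domain slug, which can be
    # passed as the `domain` filter on a follow-up query_knowledge call.
    return result.content
```
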
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that the tool is read-only and non-destructive (consistent with annotations), adds details about return fields (document counts, expert counts, coverage levels, risk flag, top contributor), and confirms zero token cost, which is not in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, front-loading the main purpose, and every sentence provides essential information without redundancy. It is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional parameter, good annotations, no output schema), the description fully covers return fields, usage intent, and expected behavior. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides a full description of the coverage_filter parameter, including default and error handling. The tool description repeats this information without adding new semantics, so it meets the baseline but does not exceed it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists taxonomy domains with counts and coverage levels. It differentiates from sibling tools by explicitly mentioning using the slug for follow-up query_knowledge calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly guides the agent to use this tool before spending a Knowledge Token, states zero token consumption, and explains how to use the coverage_filter parameter. Also advises using the domain slug in a follow-up query_knowledge call.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_knowledge: A
Read-only · Idempotent

Search the company's connected knowledge across every source — Drive, SharePoint, Confluence, Slack, Notion — with cited answers, lifecycle awareness, and refusal-on-weak-context. Returns ranked chunks with source attribution, authority scores, and coverage level. Use mode=synthesis_lite (Qwen3.5 Flash) or mode=synthesis_pro (Qwen3 Max) for a written answer with [n] citations; use the default standard for a structured chunk list. quick is faster + cheaper, deep is slower + thorough. Synthesis modes consume more Knowledge Tokens than structured modes — pick the cheapest mode that answers the question. Responses are capped at 25,000 tokens per Claude Connectors policy; if the response is truncated, structured metadata carries truncated: true and query_id so the agent can call get_source_detail for full provenance.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| mode | No | `quick` \| `standard` \| `deep` \| `synthesis_lite` \| `synthesis_pro`. Synthesis modes return a written answer with citations; structured modes return chunks only. | `standard` |
| query | Yes | Natural-language query (1–2000 characters). Be specific; results are ranked by authority + relevance, not keyword overlap. | (none) |
| domain | No | Optional taxonomy domain filter (e.g. `engineering.platform`). Use `list_domains` to discover valid values for the tenant. | (none) |
| max_sources | No | Number of source chunks to return (1–20). The 25K token cap may force fewer results regardless of this value. | 5 |
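
The sketch below exercises the query-and-verify workflow, again assuming a `session` from the connection example. The mode and domain values are illustrative, and the truncation-metadata parsing is an assumption about the payload shape, so adapt it to the real response.

```python
from mcp import ClientSession

async def ask(session: ClientSession, question: str):
    """Query company knowledge and follow up on truncated responses."""
    result = await session.call_tool(
        "query_knowledge",
        {
            "query": question,
            "mode": "synthesis_lite",          # cheapest mode with a written answer
            "domain": "engineering.platform",  # hypothetical slug from list_domains
            "max_sources": 5,
        },
    )
    # Per the description, truncated responses carry `truncated: true` and a
    # query_id in structured metadata; the field names here are assumptions.
    meta = getattr(result, "structuredContent", None) or {}
    if meta.get("truncated"):
        # Zero-token follow-up for full provenance of the truncated answer.
        return await session.call_tool(
            "get_source_detail", {"query_id": meta["query_id"]}
        )
    return result
```
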
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, open-world, non-destructive behavior. Description adds transparency about response ranking by authority+relevance, 25K token cap, truncated flag with query_id, and refusal on weak context, without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is information-dense but not overly verbose. Each sentence adds value: purpose, mode options, cost, token cap, and sibling references. Slight room for trimming but well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers all necessary context: multi-source search, mode selection, cost, token limits, return values (chunks, citations, truncated flag), and lifecycle awareness. Despite no output schema, description adequately explains expected response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds meaning by explaining mode purposes (written answer vs chunks), cost differences, and token cap interaction with max_sources, enriching parameter semantics beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches company knowledge across multiple sources with cited answers. It distinguishes itself from siblings 'get_source_detail' and 'list_domains' by mentioning they provide full provenance or domain listing, respectively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use each mode (synthesis vs standard, quick vs deep), cost considerations, and token cap behavior. Mentions fallback to 'get_source_detail' for truncated responses, aiding correct invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
