
Server Details

SpecProof: Search standards specs with MCP-ready precision.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
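The relay above boils down to the client sending a JSON-RPC 2.0 "tools/call" request, which the gateway logs and forwards. A minimal sketch of that message shape (the request id and tool choice here are illustrative, not taken from this page):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request, the shape MCP clients send."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: ask the SpecProof server for database statistics (no arguments).
request = make_tool_call(1, "get_database_stats", {})
print(json.dumps(request))
```

Because every call passes through the gateway as one of these self-describing messages, per-tool access control and full input/output logging fall out naturally.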

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is notable overlap between search_specifications and semantic_search, as both perform search on specification documents with different methodologies. Similarly, search_contributions and semantic_search_contributions target the same document type with different search techniques, which could cause confusion in tool selection.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, with clear and descriptive naming (e.g., get_contribution_content, list_documents, search_specifications). This consistency makes the tool set predictable and easy to navigate.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a server focused on accessing and searching specification and contribution documents. Each tool appears to serve a specific function without redundancy, fitting the domain appropriately.

Completeness: 4/5

The tool set provides strong coverage for retrieving and searching documents, including both specifications and contributions, with various search methods. A minor gap is the lack of update or delete operations, but given the domain likely involves read-only access to published documents, this is reasonable and agents can work around it.

Available Tools

8 tools
get_contribution_content (Grade: C)

Get the full content of a 3GPP contribution document by its document number.

Parameters (JSON Schema):
- doc_number (required): Contribution document number (e.g., 'S4-251419')
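Given the single required parameter, a call needs only a document number. The validation below is a sketch; the regex is an assumption inferred from the one documented example ('S4-251419'), and real 3GPP tdoc numbering may be broader:

```python
import re

# Assumed format from the documented example: working-group prefix,
# hyphen, digits. Real 3GPP tdoc numbers may vary beyond this.
DOC_NUMBER_RE = re.compile(r"^[A-Z]+\d*-\d+$")

def build_get_contribution_args(doc_number: str) -> dict:
    """Validate and wrap the one required parameter for get_contribution_content."""
    if not DOC_NUMBER_RE.match(doc_number):
        raise ValueError(f"unexpected document number format: {doc_number!r}")
    return {"doc_number": doc_number}

print(build_get_contribution_args("S4-251419"))  # {'doc_number': 'S4-251419'}
```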
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It states the tool retrieves 'full content' but lacks details on permissions, rate limits, error handling, or output format. This is a significant gap for a tool that presumably accesses document data, making it minimally transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and efficient, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'full content' entails (e.g., text, metadata, format) or address potential complexities like authentication or errors. For a document retrieval tool, this leaves critical gaps in understanding its behavior and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents the 'doc_number' parameter. The description adds no additional semantic context beyond implying the parameter identifies a contribution document, which is already covered. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get the full content') and resource ('a 3GPP contribution document'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'get_document_content' or 'search_contributions', which might handle similar content retrieval, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't clarify if this is for retrieving raw content versus metadata, or how it differs from 'get_document_content' or 'search_contributions', leaving the agent to infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_database_stats (Grade: B)

Get comprehensive database and system statistics including document counts, search capabilities, and performance metrics.

Parameters (JSON Schema):
- No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'comprehensive' statistics but doesn't specify what that entails (e.g., real-time vs. cached data, permissions required, rate limits, or potential system impact). This leaves gaps in understanding how the tool behaves in practice, which is critical for a stats-retrieval operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action ('Get comprehensive database and system statistics') and lists specific examples without redundancy. Every word earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving system stats) and lack of annotations or output schema, the description is moderately complete. It outlines what statistics are included but doesn't cover behavioral aspects like data freshness, format, or error handling. For a stats tool with no structured output info, more detail would help the agent use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the output scope. This meets the baseline for tools with no parameters, as it doesn't add unnecessary details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('database and system statistics'), listing specific categories like document counts, search capabilities, and performance metrics. It distinguishes itself from siblings like get_document_content or list_documents by focusing on system-level statistics rather than content retrieval, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage based on the purpose alone. For example, it doesn't clarify if this is for monitoring versus operational tasks or how it differs from search-related siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document_content (Grade: C)

Retrieve specific content from a specification document by filename, with optional page range and section filtering.

Parameters (JSON Schema):
- section (optional): Section title filter
- filename (required): Document filename (e.g., 'TS_23.501_Rel17.pdf')
- page_range (optional): Page range like '10-15' or single page '20'
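The page_range syntax ('10-15' or a single page like '20') is easy to normalize on the client side before calling. This parser is an illustrative sketch, not part of the server:

```python
def parse_page_range(page_range: str) -> tuple[int, int]:
    """Turn '10-15' into (10, 15); a single page '20' becomes (20, 20)."""
    first, sep, last = page_range.partition("-")
    start = int(first)
    end = int(last) if sep else start
    if start < 1 or end < start:
        raise ValueError(f"invalid page range: {page_range!r}")
    return (start, end)

print(parse_page_range("10-15"))  # (10, 15)
print(parse_page_range("20"))     # (20, 20)
```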
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the action ('retrieve') but lacks details on permissions, rate limits, error handling, or output format. For a read operation with no structured safety hints, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy and wastes no words, though it could be slightly more structured by explicitly separating purpose from parameter hints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a tool with three parameters. It fails to explain what is returned (e.g., text content, metadata) or any behavioral constraints, leaving the agent with insufficient context to use the tool effectively beyond basic parameter passing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal value by mentioning 'optional page range and section filtering', which aligns with the schema but does not provide additional semantic context beyond what is already in the structured fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'retrieve' and resource 'specific content from a specification document', making the purpose evident. It distinguishes from siblings like 'list_documents' (which lists documents) and 'search_specifications' (which searches across specifications), but could be more explicit about how it differs from 'get_contribution_content' or 'semantic_search'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_specifications' or 'semantic_search'. It mentions optional parameters but does not specify scenarios where this tool is preferred over others, leaving the agent to infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_documents (Grade: C)

List available specification documents with filtering options.

Parameters (JSON Schema):
- limit (optional): Maximum number of results
- doc_type (optional): Filter by document type
- search_pattern (optional): Search pattern for spec number or title
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool lists documents with filtering, but doesn't reveal critical behaviors such as whether it's read-only, pagination details, rate limits, authentication needs, or what the output format looks like. This is inadequate for a tool with filtering parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose and key feature (filtering). It's front-loaded with no wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has three parameters, no annotations, and no output schema, the description is insufficiently complete. It lacks details on behavioral traits, output structure, error handling, and differentiation from siblings. For a filtering tool in a context with multiple search-related siblings, more guidance is needed to ensure correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (limit, doc_type, search_pattern). The description adds marginal value by implying filtering capabilities but doesn't provide additional semantic context beyond what's in the schema, such as how search_pattern operates or examples of usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and resource 'available specification documents', making the purpose evident. It also mentions 'filtering options' which adds specificity about functionality. However, it doesn't explicitly differentiate this tool from sibling tools like 'search_specifications' or 'semantic_search', which appear to have overlapping domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_specifications' or 'semantic_search'. It mentions filtering options but doesn't specify scenarios or prerequisites for usage, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_contributions (Grade: C)

Search 3GPP contribution documents by various criteria including document number, working group, meeting, work item, and agenda item.

Parameters (JSON Schema):
- query (optional): Free text search in title and content
- work_item (optional): Work item identifier (e.g., '5GMS_Pro_Ph2')
- doc_number (optional): Contribution document number (e.g., 'S4-251419')
- agenda_item (optional): Agenda item number
- max_results (optional): Maximum results to return
- working_group (optional): Working group (e.g., 'S4', 'SA4')
- meeting_number (optional): Meeting number (e.g., '133')
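Since every parameter is optional, a caller typically sends only the criteria it has. A minimal sketch of building that sparse arguments dict (how the server combines multiple criteria, e.g. AND vs. OR, is left unstated by the description):

```python
def build_search_contribution_args(**criteria) -> dict:
    """Keep only the search criteria the caller actually supplied (drop Nones)."""
    allowed = {"query", "work_item", "doc_number", "agenda_item",
               "max_results", "working_group", "meeting_number"}
    unknown = set(criteria) - allowed
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {k: v for k, v in criteria.items() if v is not None}

args = build_search_contribution_args(working_group="S4",
                                      meeting_number="133",
                                      query=None)
print(args)  # {'working_group': 'S4', 'meeting_number': '133'}
```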
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states what the tool searches for but doesn't describe important behaviors like pagination, rate limits, authentication requirements, error conditions, or what the output format looks like (especially critical since there's no output schema).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core functionality. It's appropriately sized and front-loaded with the main purpose. No wasted words, though it could potentially benefit from a second sentence about output or usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 7 parameters and no output schema, the description is incomplete. It doesn't explain what results look like, how they're ordered, whether there's pagination, or what happens when no results are found. With no annotations and no output schema, users need more behavioral context than provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists several search criteria that map to parameters, but with 100% schema description coverage, the schema already documents all 7 parameters thoroughly. The description adds minimal value beyond what's in the schema: it mentions 'various criteria' but doesn't provide additional context about parameter relationships or search logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search') and resource ('3GPP contribution documents'), and lists specific search criteria (document number, working group, etc.). However, it doesn't explicitly differentiate from sibling tools like 'semantic_search_contributions' or 'search_specifications' which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'semantic_search_contributions' or 'list_documents'. It mentions search criteria but doesn't indicate typical use cases, prerequisites, or when other tools might be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_specifications (Grade: A)

Search across 3GPP and IETF specification documents using full-text search. Returns ranked results with content previews.

Parameters (JSON Schema):
- query (required): Search query terms
- doc_type (optional): Filter by document type
- max_results (optional): Maximum number of results
- spec_number (optional): Filter by specification number
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'full-text search' and 'ranked results with content previews', which adds some context about search behavior and output format. However, it lacks critical details like rate limits, authentication requirements, pagination behavior, or whether this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence establishes purpose and scope, the second describes the return format. No wasted words, well-structured, and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 4 parameters and 100% schema coverage but no annotations or output schema, the description provides adequate basic context about what the tool does. However, it lacks information about output structure, error conditions, or behavioral constraints that would be important for an AI agent to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 4 parameters thoroughly. The description adds minimal value beyond the schema: it mentions '3GPP and IETF specification documents', which relates to the doc_type parameter, but doesn't provide additional semantic context about parameter usage or interactions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search across 3GPP and IETF specification documents'), the method ('full-text search'), and the outcome ('Returns ranked results with content previews'). It distinguishes this from siblings like 'semantic_search' by specifying the search type and document scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching specification documents, but provides no explicit guidance on when to use this tool versus alternatives like 'semantic_search' or 'list_documents'. It mentions document types (3GPP/IETF) which helps scope usage, but lacks clear when/when-not rules or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

semantic_search_contributions (Grade: C)

Perform semantic search on 3GPP contribution documents using FAISS-accelerated vector similarity search.

Parameters (JSON Schema):
- query (required): Natural language search query
- index_type (optional): FAISS index type
- max_results (optional): Maximum results to return
- similarity_threshold (optional): Minimum similarity threshold
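What similarity_threshold means in practice is undocumented. Assuming scores lie in [0, 1] with higher meaning more similar (an assumption, as is the result shape below), an agent could also re-filter results on its own side:

```python
def filter_by_similarity(results, threshold):
    """Keep results whose 'score' meets the threshold. Assumes each result
    dict carries a 'score' in [0, 1], higher = more similar (hypothetical shape)."""
    return [r for r in results if r.get("score", 0.0) >= threshold]

hits = [{"doc_number": "S4-251419", "score": 0.82},
        {"doc_number": "S4-250001", "score": 0.41}]
print(filter_by_similarity(hits, 0.5))  # keeps only the 0.82 hit
```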
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the search method but doesn't cover critical aspects like performance characteristics (e.g., speed of FAISS acceleration), error handling, rate limits, authentication requirements, or what happens when no results match. This is inadequate for a tool with 4 parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core functionality without unnecessary words. It's appropriately sized and front-loaded with the essential information about what the tool does.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a semantic search tool with 4 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what results look like, how relevance is determined, whether results are ranked, or provide any context about the 3GPP contribution document corpus. The agent would struggle to use this effectively without additional information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any meaningful parameter semantics beyond what's in the schema: it doesn't explain how 'index_type' affects results, what the similarity threshold means in practice, or give examples of effective queries. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Perform semantic search') and the target resource ('3GPP contribution documents'), with specific technical details about the method ('FAISS-accelerated vector similarity search'). However, it doesn't explicitly differentiate from sibling tools like 'search_contributions' or 'semantic_search', which appear to offer similar functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_contributions' or 'semantic_search'. It lacks context about appropriate use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
