Northwestern University Libraries Digital Collections API
Server Details
Agent integration with the Northwestern University Libraries Digital Collections API
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | nulib/dc-api-v2 |
| GitHub Stars | 15 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.6/5 across all 8 tools.
Most tools have distinct purposes, but there is some overlap between 'search' and 'list-collections', as both can retrieve collection data, and the four 'view-*' tools (for collections, search results, similar works, and individual works) might be confused since they all serve viewing functions. However, the descriptions clarify their specific contexts, reducing ambiguity.
All tool names follow a consistent hyphenated verb-noun pattern, such as 'get-work', 'list-collections', 'search', 'similarity-search', and the 'view-*' variants. This uniformity makes the tool set predictable and easy to navigate.
With 8 tools, the count is well-scoped for a digital collections API, covering core operations like retrieval, listing, searching, and viewing without being overwhelming. Each tool appears to serve a specific, necessary function in the domain.
The tool set provides good coverage for accessing and exploring digital collections, including metadata retrieval, collection listing, search, and similarity-based discovery. A minor gap is the lack of update or delete operations, which might be intentional for a read-only API, but it covers the essential read and view workflows effectively.
Available Tools
8 tools

get-work (Get Work)
Grade: B | Read-only | Idempotent
Retrieve the full metadata for a work from the NUL Digital Collections.
| Name | Required | Description | Default |
|---|---|---|---|
| work_id | Yes | The ID of the work to retrieve | |
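A minimal call sketch, assuming the connected `client` from the transport example above; the work ID argument is hypothetical and would normally come from a prior search or listing.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// workId is hypothetical; real IDs come from search or list results.
export async function getWork(client: Client, workId: string) {
  return client.callTool({
    name: "get-work",
    arguments: { work_id: workId },
  });
}
```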
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key traits (read-only, non-destructive, idempotent, closed-world), so the description's burden is lower. It adds value by specifying 'full metadata' retrieval, but doesn't disclose additional behavioral aspects like response format, error handling, or rate limits, which could be useful context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and appropriately sized, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no output schema) and rich annotations, the description is minimally adequate. However, it lacks details on output format or error cases, which could be helpful for an agent despite the annotations covering safety and idempotency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'work_id' fully documented in the schema. The description doesn't add any meaning beyond this, such as format examples or constraints, so it meets the baseline for high schema coverage without extra param info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and resource ('full metadata for a work from the NUL Digital Collections'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'view-work' or 'view-search-results', which might have overlapping functionality, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'view-work' and 'search' available, there's no indication of context, prerequisites, or exclusions for selecting this specific retrieval method.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list-collections (List Collections)
Grade: B | Read-only | Idempotent
List collection records from the NUL Digital Collections.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page of search results to return | |
| max_results | No | The maximum number of search results to return per page | |
| public_only | No | Only include publicly available works in search results | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The search results returned from the Digital Collections API |
| info | Yes | |
| explain | No | The explain output from Elasticsearch for the search query. Only included if the MCP is running in debug mode. |
| pagination | Yes | |
| aggregations | No | |
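A paging sketch under the same client assumption; the page size and `public_only` flag are illustrative choices. The comment about `structuredContent` reflects how MCP clients generally surface results from tools that declare an output schema, not a behavior this listing documents.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch one page of collection records; 25 per page is an arbitrary choice.
export async function listCollections(client: Client, page = 1) {
  const result = await client.callTool({
    name: "list-collections",
    arguments: { page, max_results: 25, public_only: true },
  });
  // Tools with a declared output schema typically return results in
  // structuredContent (data, info, pagination per the table above).
  return result.structuredContent;
}
```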
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds no additional behavioral context, such as rate limits, authentication needs, or pagination details beyond what the schema provides. It doesn't contradict annotations, but it doesn't enhance understanding either.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (a simple list operation), rich annotations (covering safety and idempotency), and the presence of an output schema (which handles return values), the description is reasonably complete. It could be improved by adding usage guidelines, but it adequately conveys the core purpose in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation for all three parameters (page, max_results, public_only). The description adds no parameter-specific information beyond what the schema already states, so it meets the baseline for high schema coverage without compensating with extra insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('collection records from the NUL Digital Collections'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'search' or 'view-collection', which might also retrieve collection data in different ways, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search' or 'view-collection'. It lacks any context about prerequisites, typical use cases, or exclusions, leaving the agent to infer usage from the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Search)
Grade: A | Read-only | Idempotent
Search for works in the Digital Collections using field-based and/or natural language queries. If both a natural language query and specific field values are provided, the natural language query will take priority, using the specified field values as additional constraints. The result will also include a list of aggregations that show how many results match different values for certain fields. For example, you could see how many results match each collection, work type, or visibility and use that information to refine your search. Perform an empty search to retrieve all works and their aggregations. NOTE: Structured field values enclosed in double quotes will be treated as exact, case-sensitive matches, while unquoted values will be treated as full-text searches.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page of search results to return | |
| query | No | A natural language query. Best for exploratory / conceptual searches. | |
| fields | No | Structured field search. Best when searching for specific known items or values, or for narrowing a search by specifying particular fields to search within. | |
| max_results | No | The maximum number of search results to return per page | |
| public_only | No | Only include publicly available works in search results | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The search results returned from the Digital Collections API |
| info | Yes | |
| explain | No | The explain output from Elasticsearch for the search query. Only included if the MCP is running in debug mode. |
| pagination | Yes | |
| aggregations | No | |
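The quoting rule in the description is the part most worth a concrete example. Here is a sketch assuming the connected `client`; the field name `work_type` and its value are hypothetical, since the schema does not enumerate the searchable fields.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

export async function searchWorks(client: Client) {
  return client.callTool({
    name: "search",
    arguments: {
      // The natural language query takes priority; fields act as constraints.
      query: "photographs of campus life in winter",
      // Double-quoted value: exact, case-sensitive match.
      // An unquoted value (e.g. image) would be a full-text search instead.
      fields: { work_type: '"Image"' },
      max_results: 10,
      public_only: true,
    },
  });
}
```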
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. Annotations indicate read-only, non-destructive, and idempotent operations, but the description details how queries are prioritized (natural language takes precedence over field values), includes aggregations in results, and explains exact vs. full-text matching for structured values. This enhances the agent's understanding of the tool's behavior without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the main purpose. It efficiently covers key points like query priority, aggregations, and search behavior in a few sentences. However, the note about structured field values is slightly dense and could be more concise, but overall, it avoids unnecessary repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, nested objects, and rich input schema), the description is complete. It explains the search functionality, result inclusions (aggregations), and behavioral nuances. With annotations covering safety and an output schema likely detailing results, no critical information is missing for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents all parameters thoroughly. The description adds some semantic context, such as the priority between 'query' and 'fields' and the behavior of empty searches, but does not provide additional meaning for individual parameters beyond what the schema offers. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for works in the Digital Collections using field-based and/or natural language queries.' It specifies the resource ('works in the Digital Collections') and the action ('search'), and distinguishes it from siblings like 'get-work' (which retrieves a specific work) and 'similarity-search' (which likely finds similar items).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for searching works with natural language or field-based queries. It mentions performing an empty search to retrieve all works, which is a specific use case. However, it does not explicitly state when to use alternatives like 'similarity-search' or 'list-collections', though the distinction is implied by the search functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
similarity-search (Search for Similar Works)
Grade: B | Read-only | Idempotent
Find works that are similar to a given work. Uses semantic similarity based on work embeddings.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page of search results to return | |
| work_id | Yes | ID of a work to find similar items for. | |
| max_results | No | The maximum number of search results to return per page | |
| public_only | No | Only include publicly available works in search results | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | The search results returned from the Digital Collections API |
| info | Yes | |
| explain | No | The explain output from Elasticsearch for the search query. Only included if the MCP is running in debug mode. |
| pagination | Yes | |
| aggregations | No | |
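A sketch of embedding-based discovery, again assuming the connected `client` and a hypothetical work ID; the result limit is an arbitrary choice.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Find works semantically similar to a known work.
export async function findSimilarWorks(client: Client, workId: string) {
  return client.callTool({
    name: "similarity-search",
    arguments: { work_id: workId, max_results: 5, public_only: true },
  });
}
```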
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds useful context about using 'semantic similarity based on work embeddings' which explains the matching methodology, but doesn't disclose behavioral traits like rate limits, authentication needs, or what 'public_only' filtering entails beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that directly state the tool's purpose and methodology. Every word earns its place with zero wasted text, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has comprehensive annotations (read-only, idempotent, closed-world), 100% schema coverage, and an output schema exists, the description provides adequate context. It explains the semantic similarity approach which adds value beyond structured fields, though usage guidance relative to siblings is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 4 parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find works that are similar to a given work' (specific verb+resource). It distinguishes from siblings like 'get-work' or 'search' by specifying semantic similarity based on embeddings, but doesn't explicitly contrast with 'view-similar-works' which appears to be a related sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'search' and 'view-similar-works' available, there's no indication of when this similarity search is preferred over general search or viewing existing similar works results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view-collection (View Collection)
Grade: A | Read-only | Idempotent
View a collection from the NUL Digital Collections in an interactive viewer.
| Name | Required | Description | Default |
|---|---|---|---|
| collection_id | Yes | The ID of the collection to view | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description's bar is lower. It adds valuable context by specifying 'in an interactive viewer', which suggests a rich UI experience beyond simple data retrieval—information not captured in annotations. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose ('View a collection') and efficiently adds context ('from the NUL Digital Collections in an interactive viewer'). Every word earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, rich annotations, no output schema), the description is reasonably complete. It clarifies the interactive nature of viewing, which annotations don't cover. However, it lacks details on output format or error handling, leaving minor gaps for a tool without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents the single required parameter 'collection_id'. The description adds no additional parameter details beyond what's in the schema, so it meets the baseline of 3 without compensating further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('view') and resource ('a collection from the NUL Digital Collections') with specific context ('in an interactive viewer'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'view-work' or 'view-search-results', which likely serve similar viewing functions for different resource types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when wanting to view a collection interactively, but provides no explicit guidance on when to choose this over alternatives like 'list-collections' or 'view-work'. There's no mention of prerequisites (e.g., needing a collection ID) or exclusions, leaving usage context somewhat vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view-search-results (View Search Results)
Grade: B | Read-only | Idempotent
View results from the search-works tool in an interactive viewer.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page of search results to return | |
| query | No | A natural language query. Best for exploratory / conceptual searches. | |
| fields | No | Structured field search. Best when searching for specific known items or values, or for narrowing a search by specifying particular fields to search within. | |
| max_results | No | The maximum number of search results to return per page | |
| public_only | No | Only include publicly available works in search results | |
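Because this tool mirrors the `search` parameters, a successful search can be replayed in the viewer unchanged. A sketch under the same client assumption; the exact shape of `fields` is not published, so the type here is deliberately loose.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Replay search arguments through the interactive viewer.
export async function viewSearchResults(
  client: Client,
  args: { query?: string; fields?: Record<string, string>; page?: number }
) {
  return client.callTool({ name: "view-search-results", arguments: args });
}
```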
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about the 'interactive viewer', which suggests a user interface component, but does not disclose additional behavioral traits like rate limits, authentication needs, or what 'interactive' entails beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is appropriately sized and front-loaded, with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, nested objects) and rich schema annotations, the description is minimal. It lacks details on output format, interactive viewer behavior, and usage context, but annotations cover safety and idempotency. Without an output schema, more completeness would be beneficial, but it meets a basic threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed descriptions for all parameters, including 'query' and 'fields' with usage notes. The description does not add meaning beyond the schema, as it mentions no parameters. Baseline 3 is appropriate since the schema fully documents parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'View results from the search-works tool in an interactive viewer.' It specifies the verb ('view') and resource ('results from the search-works tool'), but does not distinguish it from sibling tools like 'search' or 'view-work' beyond mentioning the interactive viewer aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance, only stating that it views results from 'search-works' in an interactive viewer. It does not explain when to use this tool versus alternatives like 'search', 'view-work', or 'view-similar-works', nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view-similar-works (View Similar Works)
Grade: A | Read-only | Idempotent
View results from the similarity-search tool in an interactive viewer.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page of search results to return | |
| work_id | Yes | ID of a work to find similar items for. | |
| max_results | No | The maximum number of search results to return per page | |
| public_only | No | Only include publicly available works in search results | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, idempotent, and closed-world behavior. The description adds value by specifying the 'interactive viewer' aspect, which suggests a user-facing display rather than raw data output, and implies pagination and filtering through parameters like 'page' and 'public_only.' It doesn't contradict annotations and provides useful context beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that efficiently conveys the tool's purpose without unnecessary words. It's front-loaded with the core action and context, making it easy to understand at a glance. Every part of the sentence earns its place by specifying key elements like the source tool and viewer type.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, 1 required), rich annotations covering safety and behavior, and 100% schema coverage, the description is reasonably complete. It explains the interactive viewer aspect, which adds context not in structured fields. However, without an output schema, it could benefit from hinting at return types, but annotations help mitigate this gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any specific parameter details beyond what's in the schema, such as explaining 'work_id' usage or 'public_only' implications. Baseline score of 3 is appropriate as the schema carries the burden, but no extra semantic value is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('View results') and source ('from the similarity-search tool') with the destination ('in an interactive viewer'). It distinguishes from basic search tools by specifying the interactive viewer aspect, though it doesn't explicitly differentiate from sibling tools like 'view-search-results' or 'similarity-search' beyond mentioning the source tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by referencing 'similarity-search tool,' suggesting this is for viewing results from that specific operation. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'view-search-results' or 'similarity-search' directly, nor does it mention prerequisites or exclusions beyond the implied context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view-work (View Work)
Grade: A | Read-only | Idempotent
View a work from the NUL Digital Collections in an interactive viewer.
| Name | Required | Description | Default |
|---|---|---|---|
| work_id | Yes | The ID of the work to view | |
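The get-work / view-work split discussed in the assessment below (metadata retrieval versus interactive display) can be paired in one flow. A sketch, with the same client assumption and a hypothetical work ID:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch metadata for the agent, then open the same work for a human.
export async function inspectWork(client: Client, workId: string) {
  const metadata = await client.callTool({
    name: "get-work",
    arguments: { work_id: workId },
  });
  await client.callTool({ name: "view-work", arguments: { work_id: workId } });
  return metadata;
}
```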
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds value by specifying the interactive viewer context, which isn't covered by annotations. However, it doesn't disclose additional traits like potential rate limits, authentication needs, or what the interactive viewer entails (e.g., browser-based, requires specific permissions). No contradiction with annotations is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose without any wasted words. It's front-loaded with the core action ('view a work') and includes essential context ('from the NUL Digital Collections in an interactive viewer'). Every part of the sentence earns its place by adding clarity, making it highly concise and effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema) and rich annotations covering safety and behavior, the description is mostly complete. It specifies the interactive viewer aspect, which adds useful context beyond annotations. However, it could be more complete by briefly mentioning what the viewer does or linking to sibling tools for alternatives, but the annotations and schema provide sufficient coverage for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'work_id' fully documented in the schema as 'The ID of the work to view.' The description doesn't add any extra meaning beyond this, such as format examples or constraints. Since the schema handles the parameter documentation adequately, the baseline score of 3 is appropriate, reflecting that the description doesn't compensate but also doesn't need to given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('view') and resource ('a work from the NUL Digital Collections') with the specific context of 'in an interactive viewer.' It distinguishes from siblings like 'get-work' by emphasizing the interactive viewing experience rather than just data retrieval. However, it doesn't explicitly contrast with 'view-collection' or 'view-search-results' in terms of scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you want to interactively view a specific work, but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get-work' (which might return metadata without an interactive viewer) or 'view-search-results' (which handles multiple works). There's no mention of prerequisites or when-not-to-use scenarios, leaving usage context somewhat implied rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
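Any static file served at that path works. As one sketch, an Express route could serve it; Express is an assumption here, not a Glama requirement.

```typescript
import express from "express";

const app = express();

// Serve the verification file at the well-known path on the server's domain.
app.get("/.well-known/glama.json", (_req, res) => {
  res.json({
    $schema: "https://glama.ai/mcp/schemas/connector.json",
    maintainers: [{ email: "your-email@example.com" }],
  });
});

app.listen(8080);
```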
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.