Northwestern University Libraries Digital Collections API

Server Details

Agent integration with the Northwestern University Libraries Digital Collections API

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: nulib/dc-api-v2
GitHub Stars: 15
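
Since the server advertises a Streamable HTTP transport, any MCP client that speaks it can connect directly. Below is a minimal connection sketch using the official TypeScript SDK; the endpoint URL is a placeholder, since this listing does not display it.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint -- substitute the actual URL for this server.
const SERVER_URL = new URL("https://example.org/mcp");

const client = new Client({ name: "dc-api-example", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(SERVER_URL));

// Enumerate the 8 tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```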

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.6/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between 'search' and 'list-collections' as both can retrieve collections, and the three 'view-*' tools for search results, similar works, and works might be confused since they all serve viewing functions. However, descriptions clarify their specific contexts, reducing ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent hyphenated verb-noun pattern, such as 'get-work', 'list-collections', 'search', 'similarity-search', and the 'view-*' variants. This uniformity makes the tool set predictable and easy to navigate.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a digital collections API, covering core operations like retrieval, listing, searching, and viewing without being overwhelming. Each tool appears to serve a specific, necessary function in the domain.

Completeness: 4/5

The tool set provides good coverage for accessing and exploring digital collections, including metadata retrieval, collection listing, search, and similarity-based discovery. A minor gap is the lack of update or delete operations, which might be intentional for a read-only API, but it covers the essential read and view workflows effectively.

Available Tools

8 tools
get-work (Get Work): B
Read-only · Idempotent

Retrieve the full metadata for a work from the NUL Digital Collections.

Parameters (JSON Schema)
  work_id (required): The ID of the work to retrieve
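
To make the call shape concrete, here is a sketch of invoking get-work through the TypeScript SDK. The helper is hypothetical, and `client` is assumed to be connected as in the sketch near the top of this page; a real work ID would come from a prior search.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch full metadata for one work. `work_id` is the only argument;
// an agent would typically obtain it from an earlier search call.
async function getWork(client: Client, workId: string) {
  return client.callTool({
    name: "get-work",
    arguments: { work_id: workId },
  });
}
```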
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (read-only, non-destructive, idempotent, closed-world), so the description's burden is lower. It adds value by specifying 'full metadata' retrieval, but doesn't disclose additional behavioral aspects like response format, error handling, or rate limits, which could be useful context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and appropriately sized, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema) and rich annotations, the description is minimally adequate. However, it lacks details on output format or error cases, which could be helpful for an agent despite the annotations covering safety and idempotency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'work_id' fully documented in the schema. The description doesn't add any meaning beyond this, such as format examples or constraints, so it meets the baseline for high schema coverage without extra param info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Retrieve') and resource ('full metadata for a work from the NUL Digital Collections'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'view-work' or 'view-search-results', which might have overlapping functionality, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'view-work' and 'search' available, there's no indication of context, prerequisites, or exclusions for selecting this specific retrieval method.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list-collections (List Collections): B
Read-only · Idempotent

List collection records from the NUL Digital Collections.

Parameters (JSON Schema)
  page (optional): The page of search results to return
  max_results (optional): The maximum number of search results to return per page
  public_only (optional): Only include publicly available works in search results

Output Schema (JSON Schema)
  data (required): The search results returned from the Digital Collections API
  info (required)
  explain (optional): The explain output from Elasticsearch for the search query. Only included if the MCP is running in debug mode.
  pagination (required)
  aggregations (optional)
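
A sketch of a paginated call, again assuming a connected `client`; the default values shown are illustrative, not the server's documented defaults.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Page through collection records. All three arguments are optional;
// the defaults here are illustrative only.
async function listCollections(client: Client, page = 1, maxResults = 10) {
  const result = await client.callTool({
    name: "list-collections",
    arguments: { page, max_results: maxResults, public_only: true },
  });
  // Per the output schema: `data` holds the records, `info` and
  // `pagination` are always present, and `explain` appears only
  // when the MCP is running in debug mode.
  return result;
}
```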
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds no additional behavioral context, such as rate limits, authentication needs, or pagination details beyond what the schema provides. It doesn't contradict annotations, but it doesn't enhance understanding either.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (a simple list operation), rich annotations (covering safety and idempotency), and the presence of an output schema (which handles return values), the description is reasonably complete. It could be improved by adding usage guidelines, but it adequately conveys the core purpose in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for all three parameters (page, max_results, public_only). The description adds no parameter-specific information beyond what the schema already states, so it meets the baseline for high schema coverage without compensating with extra insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('collection records from the NUL Digital Collections'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'search' or 'view-collection', which might also retrieve collection data in different ways, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search' or 'view-collection'. It lacks any context about prerequisites, typical use cases, or exclusions, leaving the agent to infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view-collection (View Collection): A
Read-only · Idempotent

View a collection from the NUL Digital Collections in an interactive viewer.

Parameters (JSON Schema)
  collection_id (required): The ID of the collection to view
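
A corresponding sketch, assuming a connected `client`; a real collection ID would come from list-collections or a search result.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Open one collection in the interactive viewer. A real collection_id
// would come from list-collections or a search result.
async function viewCollection(client: Client, collectionId: string) {
  return client.callTool({
    name: "view-collection",
    arguments: { collection_id: collectionId },
  });
}
```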
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description's bar is lower. It adds valuable context by specifying 'in an interactive viewer', which suggests a rich UI experience beyond simple data retrieval—information not captured in annotations. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose ('View a collection') and efficiently adds context ('from the NUL Digital Collections in an interactive viewer'). Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, rich annotations, no output schema), the description is reasonably complete. It clarifies the interactive nature of viewing, which annotations don't cover. However, it lacks details on output format or error handling, leaving minor gaps for a tool without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the single required parameter 'collection_id'. The description adds no additional parameter details beyond what's in the schema, so it meets the baseline of 3 without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('view') and resource ('a collection from the NUL Digital Collections') with specific context ('in an interactive viewer'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'view-work' or 'view-search-results', which likely serve similar viewing functions for different resource types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when wanting to view a collection interactively, but provides no explicit guidance on when to choose this over alternatives like 'list-collections' or 'view-work'. There's no mention of prerequisites (e.g., needing a collection ID) or exclusions, leaving usage context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view-search-results (View Search Results): B
Read-only · Idempotent

View results from the search-works tool in an interactive viewer.

Parameters (JSON Schema)
  page (optional): The page of search results to return
  query (optional): A natural language query. Best for exploratory / conceptual searches.
  fields (optional): Structured field search. Best when searching for specific known items or values, or for narrowing a search by specifying particular fields to search within.
  max_results (optional): The maximum number of search results to return per page
  public_only (optional): Only include publicly available works in search results
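
The query/fields split suggests two calling styles, sketched below with a connected `client`. Both argument values are made-up examples, and the exact shape the server expects for `fields` is not shown in this listing, so the object used here is an assumption to illustrate the idea.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Two calling styles, mirroring the parameter docs: a natural-language
// `query` for exploratory searches, or structured `fields` for known
// items. The `fields` shape below is an assumption -- consult the
// tool's JSON Schema for the structure it actually expects.
async function exploreAndNarrow(client: Client) {
  // Exploratory, conceptual search:
  await client.callTool({
    name: "view-search-results",
    arguments: { query: "railroad photographs of Chicago", max_results: 25 },
  });
  // Narrow, field-scoped search (hypothetical field name):
  await client.callTool({
    name: "view-search-results",
    arguments: { fields: { title: "Pullman Strike" }, public_only: true },
  });
}
```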
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about the 'interactive viewer', which suggests a user interface component, but does not disclose additional behavioral traits like rate limits, authentication needs, or what 'interactive' entails beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is appropriately sized and front-loaded, with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, nested objects) and rich schema annotations, the description is minimal. It lacks details on output format, interactive viewer behavior, and usage context, but annotations cover safety and idempotency. Without an output schema, more completeness would be beneficial, but it meets a basic threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with detailed descriptions for all parameters, including 'query' and 'fields' with usage notes. The description does not add meaning beyond the schema, as it mentions no parameters. Baseline 3 is appropriate since the schema fully documents parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'View results from the search-works tool in an interactive viewer.' It specifies the verb ('view') and resource ('results from the search-works tool'), but does not distinguish it from sibling tools like 'search' or 'view-work' beyond mentioning the interactive viewer aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance, only stating that it views results from 'search-works' in an interactive viewer. It does not explain when to use this tool versus alternatives like 'search', 'view-work', or 'view-similar-works', nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view-similar-works (View Similar Works): A
Read-only · Idempotent

View results from the similarity-search tool in an interactive viewer.

Parameters (JSON Schema)
  page (optional): The page of search results to return
  work_id (required): ID of a work to find similar items for.
  max_results (optional): The maximum number of search results to return per page
  public_only (optional): Only include publicly available works in search results
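
A sketch of the typical flow, assuming a connected `client`: take a work_id from an earlier result and page through its neighbors.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Display works similar to a known work. Only `work_id` is required;
// the pagination values here are illustrative.
async function viewSimilarWorks(client: Client, workId: string, page = 1) {
  return client.callTool({
    name: "view-similar-works",
    arguments: { work_id: workId, page, max_results: 10 },
  });
}
```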
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, idempotent, and closed-world behavior. The description adds value by specifying the 'interactive viewer' aspect, which suggests a user-facing display rather than raw data output, and implies pagination and filtering through parameters like 'page' and 'public_only.' It doesn't contradict annotations and provides useful context beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently conveys the tool's purpose without unnecessary words. It's front-loaded with the core action and context, making it easy to understand at a glance. Every part of the sentence earns its place by specifying key elements like the source tool and viewer type.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, 1 required), rich annotations covering safety and behavior, and 100% schema coverage, the description is reasonably complete. It explains the interactive viewer aspect, which adds context not in structured fields. However, without an output schema, it could benefit from hinting at return types, but annotations help mitigate this gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any specific parameter details beyond what's in the schema, such as explaining 'work_id' usage or 'public_only' implications. Baseline score of 3 is appropriate as the schema carries the burden, but no extra semantic value is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('View results') and source ('from the similarity-search tool') with the destination ('in an interactive viewer'). It distinguishes from basic search tools by specifying the interactive viewer aspect, though it doesn't explicitly differentiate from sibling tools like 'view-search-results' or 'similarity-search' beyond mentioning the source tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by referencing 'similarity-search tool,' suggesting this is for viewing results from that specific operation. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'view-search-results' or 'similarity-search' directly, nor does it mention prerequisites or exclusions beyond the implied context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view-work (View Work): A
Read-only · Idempotent

View a work from the NUL Digital Collections in an interactive viewer.

Parameters (JSON Schema)
  work_id (required): The ID of the work to view
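
For contrast with get-work, a sketch of the viewer call with a connected `client`: the same single argument, but a different presentation of the result.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Open a single work in the interactive viewer. Contrast with
// `get-work`, which returns the raw metadata instead of a viewer.
async function viewWork(client: Client, workId: string) {
  return client.callTool({
    name: "view-work",
    arguments: { work_id: workId },
  });
}
```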
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds value by specifying the interactive viewer context, which isn't covered by annotations. However, it doesn't disclose additional traits like potential rate limits, authentication needs, or what the interactive viewer entails (e.g., browser-based, requires specific permissions). No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose without any wasted words. It's front-loaded with the core action ('view a work') and includes essential context ('from the NUL Digital Collections in an interactive viewer'). Every part of the sentence earns its place by adding clarity, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema) and rich annotations covering safety and behavior, the description is mostly complete. It specifies the interactive viewer aspect, which adds useful context beyond annotations. However, it could be more complete by briefly mentioning what the viewer does or linking to sibling tools for alternatives, but the annotations and schema provide sufficient coverage for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'work_id' fully documented in the schema as 'The ID of the work to view.' The description doesn't add any extra meaning beyond this, such as format examples or constraints. Since the schema handles the parameter documentation adequately, the baseline score of 3 is appropriate, reflecting that the description doesn't compensate but also doesn't need to given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('view') and resource ('a work from the NUL Digital Collections') with the specific context of 'in an interactive viewer.' It distinguishes from siblings like 'get-work' by emphasizing the interactive viewing experience rather than just data retrieval. However, it doesn't explicitly contrast with 'view-collection' or 'view-search-results' in terms of scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you want to interactively view a specific work, but it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get-work' (which might return metadata without an interactive viewer) or 'view-search-results' (which handles multiple works). There's no mention of prerequisites or when-not-to-use scenarios, leaving usage context somewhat implied rather than clearly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
