openalex
Server Details
OpenAlex MCP — wraps the OpenAlex API (scholarly works, free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-openalex
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 9 of 9 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes: ask_pipeworx is a general query tool, discover_tools is for tool discovery, the memory tools (remember, recall, forget) handle session data, and the search tools target specific academic entities. However, ask_pipeworx may overlap with the search tools on some queries, since it might use them internally, which could cause minor confusion when selecting a tool.
Naming is mixed: search_authors, search_institutions, and search_works follow a consistent verb_noun pattern, while ask_pipeworx, discover_tools, forget, get_concept, recall, and remember use varied styles (an ask_ prefix, bare verbs, verb_noun). The lack of a uniform convention reduces predictability, though the names remain readable.
With 9 tools, the count is well-scoped for the server's purpose of academic data access and session management. Each tool serves a clear role, from querying and searching to memory handling, without being excessive or insufficient.
The tool set covers core academic search functions (concepts, authors, institutions, works) and includes memory management and a general query tool. Minor gaps exist, such as no direct update or delete operations for academic data, but agents can work around this given the search-focused domain.
Available Tools
9 tools
ask_pipeworx (grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
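As a rough orientation, an MCP tools/call request for this tool might look like the sketch below; the question text is taken from the examples above, and the request id is arbitrary.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```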
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: the tool selects the best data source automatically, fills arguments internally, and returns results. However, it lacks details on limitations (e.g., rate limits, error handling, or data freshness). The description doesn't contradict any annotations, as none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core functionality, followed by clarifying details and examples. Every sentence earns its place by explaining the tool's value proposition, usage context, and practical applications without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language processing to select data sources) and lack of annotations/output schema, the description is mostly complete. It covers purpose, usage, and behavior well, but could improve by mentioning potential limitations or response formats. The absence of an output schema means the description should ideally hint at return types, though it partially compensates with examples.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'question' well-documented in the schema. The description adds minimal value beyond the schema by emphasizing 'plain English' and 'natural language,' but doesn't provide additional syntax or format details. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language input versus structured queries in other tools like search_authors or search_works.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives by contrasting with sibling tools that require specific parameters or schemas. The examples further illustrate appropriate use cases, such as factual queries or data lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
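A hedged sketch of a call that sets both parameters; the query string is one of the schema's own examples, and the limit value is illustrative but stays within the documented 1-50 range.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 10
    }
  }
}
```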
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and output format, but lacks details on limitations (e.g., search accuracy, performance), authentication requirements, or error handling. The mention of '500+ tools' provides some context, but more operational details would be helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and output, the second provides crucial usage guidance. Every phrase adds value without redundancy, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and 100% schema coverage but no output schema or annotations, the description is mostly complete. It covers purpose, usage context, and output format, though additional behavioral details (like search constraints or result ordering) would make it fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain query formatting best practices or limit implications). Baseline 3 is appropriate when the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and distinguishes it from siblings by specifying it's for discovering tools rather than concepts, authors, institutions, or works. The phrase 'Returns the most relevant tools with names and descriptions' further clarifies the output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This specifies both when to use it (large catalog scenarios) and its primary role in the workflow, distinguishing it from potential alternatives like browsing or manual selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
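A minimal sketch of a delete call, reusing one of the example keys from the remember tool's schema; note that the description does not say whether deletion is reversible.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "subject_property"
    }
  }
}
```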
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. 'Delete' implies a destructive mutation, but it doesn't disclose whether deletion is permanent, reversible, requires specific permissions, or has side effects. The description is minimal and lacks behavioral context beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it immediately clear without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is inadequate. It doesn't explain what constitutes a 'stored memory', how deletion affects the system, what the response looks like, or error conditions. Given the complexity of a delete operation, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the 'key' parameter is fully documented in the schema), so the baseline is 3. The description adds no additional parameter semantics beyond what the schema already states ('Memory key to delete'), providing no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' strongly implies a destructive operation distinct from retrieval or creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'recall' (likely for retrieving memories) and 'remember' (likely for storing memories), there's no indication of prerequisites, when deletion is appropriate, or what happens if the key doesn't exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_concept (grade B)
Look up research fields or topics by name. Returns concept description, publication count, related concepts, and parent concepts in the academic hierarchy.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Concept name to look up (e.g., "deep learning") | |
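A minimal lookup sketch using the example concept name from the schema; the request id is arbitrary.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_concept",
    "arguments": {
      "query": "deep learning"
    }
  }
}
```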
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states what the tool returns without disclosing behavioral traits like error handling, rate limits, authentication needs, or whether it's read-only. It mentions the return structure but doesn't explain format, pagination, or potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that efficiently convey purpose and return values. It's front-loaded with the core function, though the second sentence could be slightly more concise. Every sentence earns its place by adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with 1 parameter and no output schema, the description adequately covers the basic purpose and return structure. However, without annotations or output schema, it should ideally provide more behavioral context about what 'look up' entails operationally and the format of returned data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'query' well-documented in the schema. The description adds no additional parameter semantics beyond what's in the schema, but doesn't need to compensate for gaps. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('look up'), resource ('academic concept or field of study'), and scope ('by name'). It distinguishes from sibling tools like search_authors, search_institutions, and search_works by specifying it operates on concepts rather than authors, institutions, or works.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing concept information by name, but provides no explicit guidance on when to use this versus alternatives or any exclusions. It doesn't mention prerequisites, limitations, or comparison with other concept-related tools that might exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
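A sketch of a keyed retrieval, reusing an example key from the remember tool's schema. Per the description, omitting key (an empty arguments object) should instead list all stored keys.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {
      "key": "subject_property"
    }
  }
}
```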
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the dual behavior (retrieve by key or list all) and persistence across sessions, which is valuable. However, it doesn't disclose error handling, performance characteristics, or what happens when a non-existent key is provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence explains the core functionality, and the second provides usage context. No wasted words, and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no annotations and no output schema, the description provides adequate context about what the tool does and how to use it. The main gap is the lack of information about return format or error conditions, but given the tool's simplicity, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys'), which clarifies the tool's conditional behavior beyond what the schema alone provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from sibling tools like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('to retrieve context you saved earlier') and when to use alternatives (implied by distinguishing from other memory tools). It also specifies the conditional logic: 'omit key' to list all memories versus providing a key to retrieve specific ones.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
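A sketch of storing a value, using an example key from the schema; the stored value text is purely illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
```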
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool performs a write operation ('Store'), specifies persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. However, it does not cover potential errors, rate limits, or exact data formats beyond 'any text'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and behavioral details in a logical flow. Both sentences earn their place by adding distinct value—no redundancy or waste. It is appropriately sized for a simple tool with two parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 simple parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, and key behavioral traits like persistence rules. However, it lacks details on return values or error handling, which could be useful despite the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both required parameters ('key' and 'value') with examples. The description adds minimal value beyond the schema, only reinforcing the general purpose without providing additional syntax, constraints, or usage details for the parameters. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'forget' (remove) and 'recall' (retrieve). It provides concrete examples of what to store ('intermediate findings, user preferences, or context across tool calls'), making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly mention when not to use it or name alternatives. It implies usage for persistence needs but lacks direct comparison with siblings like 'recall' for retrieval or 'forget' for deletion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_authors (grade A)
Find researchers by name or institution affiliation. Returns author name, ORCID, institution, publication count, and total citations.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-25, default 10) | 10 |
| query | Yes | Author name to search for (e.g., "Yoshua Bengio") | |
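An illustrative author search using the schema's example name; the limit value is optional and shown here only to demonstrate the 1-25 range.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_authors",
    "arguments": {
      "query": "Yoshua Bengio",
      "limit": 5
    }
  }
}
```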
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the search behavior and return fields (display name, ORCID, institution, works count, citation count), which is valuable. However, it doesn't mention rate limits, authentication requirements, pagination, or error conditions that would be important for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes essential return information. Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no annotations and no output schema, the description provides basic purpose and return fields but lacks important context like result format, error handling, or performance characteristics. It's minimally adequate but has clear gaps in behavioral transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. Baseline 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search researchers and authors by name'), resource ('in OpenAlex'), and distinguishes from siblings by focusing on authors rather than concepts, institutions, or works. It provides a precise verb+resource combination with clear scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'by name in OpenAlex' and listing returned fields, but doesn't explicitly state when to use this tool versus alternatives like search_institutions or search_works. No explicit guidance on when-not-to-use or named alternatives is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_institutions (grade A)
Find academic institutions by name or location (e.g., country code 'US', 'GB'). Returns institution name, country, type, publication count, and research areas.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-25, default 10) | 10 |
| query | Yes | Institution name to search for (e.g., "MIT") | |
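An illustrative institution search using the schema's example query; omitting limit would fall back to the documented default of 10.

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "search_institutions",
    "arguments": {
      "query": "MIT",
      "limit": 5
    }
  }
}
```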
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return fields but does not describe key behavioral traits such as pagination, rate limits, authentication needs, error handling, or whether the search is case-sensitive. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, search criteria, and return fields without any wasted words. It is front-loaded with essential information and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and return fields, but lacks details on behavioral aspects (e.g., pagination, errors) and does not fully compensate for the absence of annotations and output schema, leaving some contextual gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (query and limit). The description adds minimal value beyond the schema by specifying the resource ('academic institutions') and example ('e.g., "MIT"'), but it does not provide additional semantic context like search algorithm details or result ordering. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search academic institutions'), resource ('in OpenAlex'), and scope ('by name'), distinguishing it from sibling tools like get_concept, search_authors, and search_works. It explicitly mentions what fields are returned (name, country, type, works count, top concepts), making the purpose unambiguous and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching institutions by name in OpenAlex, but it does not provide explicit guidance on when to use this tool versus alternatives (e.g., get_concept for concepts, search_authors for authors, search_works for works). No exclusions or prerequisites are mentioned, leaving the context somewhat open-ended without clear differentiation from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_works (grade B)
Search scholarly articles by title, authors, or keywords. Returns title, authors, journal, publication year, citation count, and abstract.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-25, default 10) | 10 |
| query | Yes | Search query (e.g., "transformer neural networks") | |
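An illustrative works search using the schema's example query; both argument values are illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "search_works",
    "arguments": {
      "query": "transformer neural networks",
      "limit": 10
    }
  }
}
```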
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return fields (title, authors, etc.) but lacks critical details such as pagination behavior, rate limits, authentication requirements, or error handling, which are essential for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and return values without any wasted words. It is front-loaded with the core action and resource, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with two parameters), no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and return fields but lacks details on behavioral traits and usage guidelines, leaving gaps in completeness for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (query and limit) adequately. The description does not add any additional meaning or context beyond what the schema provides, such as query syntax examples or limit implications, resulting in a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), resource ('scholarly works (papers, books, datasets)'), and scope ('in the OpenAlex index'), distinguishing it from sibling tools like get_concept, search_authors, and search_institutions by focusing on works rather than concepts, authors, or institutions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or comparisons with sibling tools, leaving the agent to infer usage based solely on the tool name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.