
openalex

Server Details

OpenAlex MCP — wraps the OpenAlex API (scholarly works, free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-openalex
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different academic entities: concepts, authors, institutions, and works. The descriptions specify unique resources and return fields, eliminating any overlap or confusion between tools.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with 'get_' or 'search_' prefixes, using snake_case uniformly. This predictable naming makes it easy for agents to understand and select the appropriate tool.

Tool Count: 4/5

Four tools is a reasonable count for an academic search server, covering the key entities. The set lacks update/delete operations, which makes it feel slightly thin, but that is appropriate for a search-focused domain where such actions are rarely needed.

Completeness: 4/5

The tool set provides good coverage for searching academic entities, with no obvious gaps for the stated purpose. Minor gaps include the lack of advanced filtering options or CRUD operations, but agents can work effectively with the provided search and get functionalities.

Available Tools

4 tools
get_concept: B

Look up an academic concept or field of study by name. Returns description, works count, related concepts, and ancestor concepts in the hierarchy.

Parameters (JSON Schema):
  query (required): Concept name to look up (e.g., "deep learning")
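The tool takes a single free-text query, which presumably maps onto OpenAlex's public /concepts search endpoint. As a minimal sketch (the endpoint and the `search`/`per-page` query parameters are part of OpenAlex's public API; the helper name is illustrative, not the server's actual code):

```python
import urllib.parse

OPENALEX_BASE = "https://api.openalex.org"

def concept_lookup_url(query: str) -> str:
    """Build a request URL for OpenAlex's /concepts search endpoint."""
    # OpenAlex exposes full-text search via the `search` query parameter;
    # per-page=1 keeps only the best match, mirroring a single-concept lookup.
    params = urllib.parse.urlencode({"search": query, "per-page": 1})
    return f"{OPENALEX_BASE}/concepts?{params}"
```

Fetching that URL returns JSON whose `results` entries carry fields like `description`, `works_count`, `related_concepts`, and `ancestors`, matching the return values the description lists.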
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states what the tool returns without disclosing behavioral traits like error handling, rate limits, authentication needs, or whether it's read-only. It mentions the return structure but doesn't explain format, pagination, or potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey purpose and return values. It's front-loaded with the core function, though the second sentence could be slightly more concise. Every sentence earns its place by adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with 1 parameter and no output schema, the description adequately covers the basic purpose and return structure. However, without annotations or output schema, it should ideally provide more behavioral context about what 'look up' entails operationally and the format of returned data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the single parameter 'query' well-documented in the schema. The description adds no additional parameter semantics beyond what's in the schema, but doesn't need to compensate for gaps. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('look up'), resource ('academic concept or field of study'), and scope ('by name'). It distinguishes from sibling tools like search_authors, search_institutions, and search_works by specifying it operates on concepts rather than authors, institutions, or works.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing concept information by name, but provides no explicit guidance on when to use this versus alternatives or any exclusions. It doesn't mention prerequisites, limitations, or comparison with other concept-related tools that might exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_authors: A

Search researchers and authors by name in OpenAlex. Returns display name, ORCID, institution, works count, and citation count.

Parameters (JSON Schema):
  query (required): Author name to search for (e.g., "Yoshua Bengio")
  limit (optional, default 10): Number of results to return (1-25)
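The schema's limit bound (1-25) suggests the server clamps or rejects out-of-range values before querying OpenAlex's /authors endpoint. A hedged sketch of such a wrapper, assuming clamping (the `search` and `per-page` parameters are OpenAlex's public API; the function name and clamping behavior are assumptions, not confirmed server behavior):

```python
import urllib.parse

def author_search_url(query: str, limit: int = 10) -> str:
    """Build an OpenAlex /authors search URL, clamping limit to the schema's 1-25 range."""
    limit = max(1, min(limit, 25))  # schema: 1-25, default 10
    params = urllib.parse.urlencode({"search": query, "per-page": limit})
    return f"https://api.openalex.org/authors?{params}"
```

Each author record in the JSON response carries fields such as `display_name`, `orcid`, `works_count`, and `cited_by_count`, lining up with the return values the description advertises.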
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the search behavior and return fields (display name, ORCID, institution, works count, citation count), which is valuable. However, it doesn't mention rate limits, authentication requirements, pagination, or error conditions that would be important for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes essential return information. Every word earns its place with zero waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description provides basic purpose and return fields but lacks important context like result format, error handling, or performance characteristics. It's minimally adequate but has clear gaps in behavioral transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. Baseline 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search researchers and authors by name'), resource ('in OpenAlex'), and distinguishes from siblings by focusing on authors rather than concepts, institutions, or works. It provides a precise verb+resource combination with clear scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'by name in OpenAlex' and listing returned fields, but doesn't explicitly state when to use this tool versus alternatives like search_institutions or search_works. No explicit guidance on when-not-to-use or named alternatives is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_institutions: A

Search academic institutions (universities, research labs) by name in OpenAlex. Returns name, country, type, works count, and top concepts.

Parameters (JSON Schema):
  query (required): Institution name to search for (e.g., "MIT")
  limit (optional, default 10): Number of results to return (1-25)
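The advertised return fields (name, country, type, works count) correspond to fields on OpenAlex's public institution objects. A sketch of the projection the server plausibly performs on each raw record (the OpenAlex field names `display_name`, `country_code`, `type`, and `works_count` are from the public API; the helper itself is illustrative):

```python
def summarize_institution(inst: dict) -> dict:
    """Project a raw OpenAlex institution record down to the tool's advertised fields."""
    # .get() tolerates records with missing fields rather than raising KeyError.
    return {
        "name": inst.get("display_name"),
        "country": inst.get("country_code"),
        "type": inst.get("type"),
        "works_count": inst.get("works_count"),
    }
```

Trimming records this way keeps tool output small, which matters when results are fed back into an agent's context window.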
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return fields but does not describe key behavioral traits such as pagination, rate limits, authentication needs, error handling, or whether the search is case-sensitive. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, search criteria, and return fields without any wasted words. It is front-loaded with essential information and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and return fields, but lacks details on behavioral aspects (e.g., pagination, errors) and does not fully compensate for the absence of annotations and output schema, leaving some contextual gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters (query and limit). The description adds minimal value beyond the schema by specifying the resource ('academic institutions') and example ('e.g., "MIT"'), but it does not provide additional semantic context like search algorithm details or result ordering. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search academic institutions'), resource ('in OpenAlex'), and scope ('by name'), distinguishing it from sibling tools like get_concept, search_authors, and search_works. It explicitly mentions what fields are returned (name, country, type, works count, top concepts), making the purpose unambiguous and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching institutions by name in OpenAlex, but it does not provide explicit guidance on when to use this tool versus alternatives (e.g., get_concept for concepts, search_authors for authors, search_works for works). No exclusions or prerequisites are mentioned, leaving the context somewhat open-ended without clear differentiation from siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_works: B

Search scholarly works (papers, books, datasets) in the OpenAlex index. Returns title, authors, journal, year, citation count, and abstract.

Parameters (JSON Schema):
  query (required): Search query (e.g., "transformer neural networks")
  limit (optional, default 10): Number of results to return (1-25)
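One non-obvious detail behind the "abstract" return field: OpenAlex does not ship plain-text abstracts. Its works endpoint returns an `abstract_inverted_index` (a map from each word to its positions), so a wrapper that returns an abstract presumably reconstructs the text. A sketch of that reconstruction (the field name is OpenAlex's public API; the function name is illustrative):

```python
def abstract_from_inverted_index(inverted: dict) -> str:
    """Rebuild a plain-text abstract from OpenAlex's abstract_inverted_index."""
    # The index maps each word to the list of positions where it occurs;
    # flattening to (position, word) pairs and sorting restores word order.
    positions = [(pos, word) for word, plist in inverted.items() for pos in plist]
    return " ".join(word for _, word in sorted(positions))
```

Works without an indexed abstract carry `"abstract_inverted_index": null`, so a caller should handle a missing or empty index gracefully.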
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return fields (title, authors, etc.) but lacks critical details such as pagination behavior, rate limits, authentication requirements, or error handling, which are essential for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and return values without any wasted words. It is front-loaded with the core action and resource, making it easy to understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with two parameters), no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and return fields but lacks details on behavioral traits and usage guidelines, leaving gaps in completeness for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (query and limit) adequately. The description does not add any additional meaning or context beyond what the schema provides, such as query syntax examples or limit implications, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search'), resource ('scholarly works (papers, books, datasets)'), and scope ('in the OpenAlex index'), distinguishing it from sibling tools like get_concept, search_authors, and search_institutions by focusing on works rather than concepts, authors, or institutions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or comparisons with sibling tools, leaving the agent to infer usage based solely on the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
