
Server Details

Superhero MCP — wraps akabab.github.io/superhero-api (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-superhero
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.4/5 across 4 of 4 tools scored.

Server Coherence: C
Disambiguation: 2/5

There is significant overlap between get_biography, get_hero, and get_powerstats, as all three retrieve data for a superhero by ID, with get_hero appearing to encompass the others. This creates ambiguity about which tool to use for specific data needs, as agents might struggle to choose between them for partial vs. full information.
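The overlap is easy to see in the record shape: the full record that get_hero returns appears to contain the same fragments the narrower tools return. A minimal sketch with hypothetical sample data (field names follow the upstream superhero-api layout; the exact response shape and values here are assumptions, not captured output):

```python
# Hypothetical full record as get_hero might return it (shape assumed
# from the upstream akabab superhero-api; values are illustrative).
full_record = {
    "id": 1,
    "name": "A-Bomb",
    "slug": "1-a-bomb",
    "powerstats": {"intelligence": 38, "strength": 100, "speed": 17,
                   "durability": 80, "power": 24, "combat": 64},
    "biography": {"fullName": "Richard Milhouse Jones",
                  "publisher": "Marvel Comics", "alignment": "good"},
}

# What get_biography / get_powerstats return looks like a slice of the
# full record -- hence the disambiguation concern: three tools, one record.
biography_only = full_record["biography"]
powerstats_only = full_record["powerstats"]

print(biography_only["alignment"])  # -> good
```

An agent that already called get_hero gains nothing from a follow-up get_biography call on the same ID.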

Naming Consistency: 4/5

The tool names follow a consistent verb_noun pattern (get_biography, get_hero, get_powerstats, list_all) with clear verbs and nouns, making them readable and predictable. The only deviation is minor: get_hero uses 'hero' while the other tools use 'superhero'. Overall, the naming is coherent.

Tool Count: 3/5

With 4 tools, the count is borderline for a superhero database server. The set feels thin: it lacks essential operations such as search, filter, or write actions (create, update, delete), which limits functionality. While manageable, the scope seems incomplete for typical database interactions.

Completeness: 2/5

The tool set has significant gaps for a superhero domain: it only provides read operations (get and list) with no ability to search by name, filter by attributes (e.g., alignment, publisher), or perform any create, update, or delete actions. This will cause agent failures when trying to perform common tasks beyond basic retrieval.
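Until a search tool exists, an agent can approximate name search by filtering the list_all output client-side. A sketch, assuming list_all returns records with id, name, and slug fields (the sample records below are hypothetical):

```python
def search_by_name(heroes, query):
    """Case-insensitive substring match over names from a list_all-style result."""
    q = query.lower()
    return [h for h in heroes if q in h["name"].lower()]

# Illustrative list_all-style output (ids and names are example data).
heroes = [
    {"id": 1, "name": "A-Bomb", "slug": "1-a-bomb"},
    {"id": 70, "name": "Batman", "slug": "70-batman"},
    {"id": 71, "name": "Batman II", "slug": "71-batman-ii"},
]

matches = search_by_name(heroes, "bat")
print([h["id"] for h in matches])  # -> [70, 71]
```

This works, but it forces the full index into the agent's context on every search, which is exactly the cost a server-side search tool would avoid.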

Available Tools

4 tools
get_biography: B

Get biography details (full name, aliases, publisher, first appearance, alignment) for a superhero by ID.

Parameters (JSON Schema)
id (required): Numeric superhero ID (1–731)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it's a read operation ('Get'), but doesn't cover aspects like error handling (e.g., what happens if ID is out of range 1–731), rate limits, authentication needs, or response format. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key details (data fields and ID requirement). Every word earns its place with no redundancy or unnecessary information, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single required parameter, no nested objects) and no output schema, the description is minimally adequate. It covers what data is retrieved but lacks details on behavioral aspects like error handling or response structure. With no annotations to fill gaps, it should do more to be fully complete, but it meets the basic threshold for a simple read tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the input schema fully documenting the 'id' parameter as a numeric superhero ID in range 1–731. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or edge cases. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't detract either.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('biography details for a superhero by ID'), specifying the exact data fields retrieved (full name, aliases, publisher, first appearance, alignment). It distinguishes from siblings like 'get_hero' and 'get_powerstats' by focusing on biography details rather than general hero info or power stats. However, it doesn't explicitly contrast with 'list_all', which might return broader lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description implies usage for retrieving biography details by ID, but it doesn't mention when to choose this over 'get_hero' (which might include biography) or 'list_all' (which might list heroes without details). There are no explicit when/when-not statements or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hero: B

Get full data for a superhero by their numeric ID, including powerstats, biography, appearance, and images.

Parameters (JSON Schema)
id (required): Numeric superhero ID (1–731)
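If the server proxies the upstream API directly, a get_hero call reduces to one GET of a static JSON file. A sketch of the URL construction and the documented ID-range check (the endpoint path follows the public akabab superhero-api layout, which is an assumption about this server's internals):

```python
BASE = "https://akabab.github.io/superhero-api/api"

def hero_url(hero_id: int) -> str:
    """Build the full-record URL, enforcing the documented 1-731 ID range."""
    if not 1 <= hero_id <= 731:
        raise ValueError(f"superhero ID must be in 1-731, got {hero_id}")
    return f"{BASE}/id/{hero_id}.json"

print(hero_url(70))  # -> https://akabab.github.io/superhero-api/api/id/70.json
```

Validating the range before the request is cheap and gives the agent a clear error instead of an opaque 404 for out-of-range IDs.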
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool as a read operation ('Get full data'), which implies it's non-destructive, but doesn't explicitly state safety aspects like read-only nature, authentication needs, rate limits, or error handling. The description adds minimal behavioral context beyond the basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Get full data') and includes all necessary details without redundancy. Every word earns its place by specifying the resource, parameter, and data inclusions clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and high schema coverage, the description is adequate for basic understanding. However, with no annotations and no output schema, it lacks details on behavioral traits (e.g., error handling) and return values, which could be important for an AI agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'id' parameter documented as 'Numeric superhero ID (1–731)'. The description adds value by reinforcing the parameter's purpose ('by their numeric ID') and implying the data returned, but doesn't provide additional syntax, format details, or constraints beyond what the schema already covers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get full data') and resource ('a superhero by their numeric ID'), and specifies what data is included ('powerstats, biography, appearance, and images'). It distinguishes from siblings like get_biography and get_powerstats by indicating it returns comprehensive data rather than specific subsets. However, it doesn't explicitly contrast with list_all, which might retrieve multiple heroes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating 'by their numeric ID', suggesting this tool is for retrieving detailed information about a specific hero when the ID is known. However, it provides no explicit guidance on when to use this versus alternatives like get_biography for partial data or list_all for multiple heroes, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_powerstats: B

Get power statistics (intelligence, strength, speed, durability, power, combat) for a superhero by ID.

Parameters (JSON Schema)
id (required): Numeric superhero ID (1–731)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states it's a read operation ('Get'), implying non-destructive, but doesn't disclose behavioral traits like error handling (e.g., for invalid IDs), rate limits, authentication needs, or response format. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It front-loads the purpose and includes all necessary details (statistics listed, resource specified). Every word earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema, no annotations), the description is adequate but has clear gaps. It covers the purpose and parameters indirectly via schema, but lacks output details, error handling, and usage guidelines. For a simple read tool, it meets minimum viability but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'id' fully documented in the schema (numeric, required, range 1–731). The description adds no additional parameter semantics beyond what the schema provides, such as format details or examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('power statistics for a superhero by ID'), specifying the six specific statistics (intelligence, strength, speed, durability, power, combat). It distinguishes from siblings like 'get_biography' (biographical data) and 'list_all' (listing heroes), but doesn't explicitly contrast with 'get_hero' (which might return broader hero data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, when not to use it, or how it differs from 'get_hero' (which could potentially include power stats). Usage is implied by the purpose but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_all: A

List all superheroes in the database with their IDs, names, and slugs.

Parameters (JSON Schema)
No parameters
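A common agent pattern with this tool set: call list_all once, build a name-to-ID lookup, then feed the ID to the get_* tools. A sketch over illustrative list_all-style output (the records below are hypothetical examples, not real API data):

```python
# Illustrative list_all-style records (ids/names are example data).
index = [
    {"id": 1, "name": "A-Bomb", "slug": "1-a-bomb"},
    {"id": 346, "name": "Iron Man", "slug": "346-iron-man"},
]

# Map lowercased names to IDs so later get_hero / get_powerstats calls
# can be made by numeric ID, which is all those tools accept.
id_by_name = {h["name"].lower(): h["id"] for h in index}

print(id_by_name["iron man"])  # -> 346
```

Because list_all takes no parameters and the data set is fixed (731 entries), the lookup can be built once and reused across the session.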

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states the tool lists data but does not disclose behavioral traits like whether it's read-only, pagination behavior, rate limits, authentication needs, or error handling. This leaves significant gaps for an agent to understand operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose ('List all superheroes') and specifies the data returned. Every word adds value without redundancy, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate for a basic list operation. However, it lacks details on output format (e.g., structure of the list) and behavioral context (e.g., performance or limitations), which could be helpful for an agent despite the low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately does not include parameter details, earning a baseline score of 4 for not adding unnecessary information beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all superheroes in the database'), specifying the exact data returned ('IDs, names, and slugs'). It distinguishes from siblings like 'get_biography' (specific biography), 'get_hero' (single hero), and 'get_powerstats' (power statistics) by emphasizing comprehensive listing without filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving a complete list of superheroes, which contrasts with siblings that fetch specific data (e.g., 'get_hero' for a single hero). However, it lacks explicit guidance on when not to use it or alternatives, such as advising against it for filtered searches or when only partial data is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
