HefestoAI
Server Details
Pre-commit code quality guardian. Detects semantic drift in AI-generated code.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: artvepa80/Agents-Hefesto
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
4 tools

analyze (Grade: A)
Analyze a code snippet for quality issues and semantic drift
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Code snippet to analyze | |
| language | No | Programming language (python, javascript, etc.) | |
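Assuming the server follows the standard MCP JSON-RPC shape, a call to `analyze` might look like the following sketch (the snippet value is purely illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze",
    "arguments": {
      "code": "def add(a, b):\n    return a - b",
      "language": "python"
    }
  }
}
```

Since the tool publishes no output schema, the shape of the result content cannot be assumed from the definition alone.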
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It specifies analysis targets (quality issues, semantic drift) but omits critical behavioral traits: whether this is read-only (implied but not stated), whether it requires external API calls, rate limits, or what the analysis output format looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
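For reference, the MCP specification lets tool authors declare behavioral traits via structured annotations. A hedged sketch of how `analyze` could disclose its (implied) read-only nature — the hint values below are assumptions, not the server's actual metadata:

```json
{
  "name": "analyze",
  "description": "Analyze a code snippet for quality issues and semantic drift",
  "annotations": {
    "readOnlyHint": true,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```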
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 9 words, zero waste. Front-loaded with verb 'Analyze'. Technical terms 'quality issues' and 'semantic drift' precisely constrain scope without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple 2-parameter tool with no nested objects, but lacks output schema. Description should ideally explain what analysis results are returned (structured report, list of issues, scores) since no output schema exists to document return values. Adequate but incomplete for invocation confidence.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both 'code' and 'language' fully documented. Description mentions 'code snippet' which aligns with the 'code' parameter, but adds no syntax details, validation rules, or behavior when 'language' is omitted (auto-detection vs failure). Baseline 3 appropriate when schema does heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Analyze' with clear resource 'code snippet' and distinct scope 'quality issues and semantic drift'. Distinct from siblings: 'compare' implies differential analysis, 'install' implies package management, and 'pricing' implies cost retrieval, while this targets static code quality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. While the scope is clear from the functionality described, there are no 'when to use' or 'when not to use' indicators, nor prerequisites mentioned (e.g., code size limits, supported languages).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
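To illustrate the gaps flagged above, here is one possible rewrite of the description with usage guidance added. The auto-detection behavior and output format named here are placeholders the maintainer would need to confirm, not documented behavior:

```json
{
  "name": "analyze",
  "description": "Analyze a code snippet for quality issues and semantic drift. Read-only; returns a structured list of findings. Use for static review of a single snippet; use 'compare' to benchmark HefestoAI against other tools. If 'language' is omitted, the language is auto-detected."
}
```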
compare (Grade: B)
Compare HefestoAI with other code quality tools
| Name | Required | Description | Default |
|---|---|---|---|
| tool | No | Tool to compare with (sonarqube, snyk, github-advanced-security, semgrep, claude-code-security) | |
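Assuming the standard MCP request shape, a `compare` call with the optional parameter supplied might look like this sketch:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "compare",
    "arguments": { "tool": "semgrep" }
  }
}
```

Passing `"arguments": {}` is equally schema-valid, but as noted below, the definition does not say what the tool does when `tool` is omitted.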
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure, but only establishes the subject (HefestoAI). It fails to disclose return format, whether this calls external APIs, or what happens when the optional 'tool' parameter is omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient: a single seven-word sentence, front-loaded with the action and subject, containing no redundant or filler content. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with complete schema documentation, the description adequately identifies the comparison domain. However, lacking both output schema and behavioral details on optional parameter usage, it stops short of being fully self-sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions 'other code quality tools' which aligns with the schema's enum-like list, but adds no additional semantic context about parameter syntax, optional behavior, or valid values beyond the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific action (compare) and clear scope (HefestoAI vs other tools), distinguishing it from siblings like 'analyze' or 'pricing'. However, it lacks specificity about what dimensions are compared (features, pricing, security) and omits output format details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to prefer this over 'analyze' or 'pricing', or when to omit the optional 'tool' parameter (per context signals). No mention of prerequisites or comparison methodology.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
install (Grade: A)
Get installation instructions for HefestoAI
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full behavioral burden but fails to disclose: read-only nature (critical given name 'install' implies mutation), return format, platform support, or auth requirements. Only states what is retrieved, not behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 6-word sentence is front-loaded with action verb. No redundancy or waste. Appropriate for zero-parameter tool complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for simple retrieval tool but omits return value specification (crucial given no output schema). Doesn't clarify instruction format, platform-specific handling, or prerequisites.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing baseline of 4 per rubric. Description implicitly confirms target scope (HefestoAI) which schema lacks, earning baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with clear resource 'installation instructions' and scope 'for HefestoAI'. Distinct from siblings (analyze/compare/pricing) which handle different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus sibling tools or prerequisites. While the name makes purpose obvious, it lacks 'when-not' exclusions or context triggers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pricing (Grade: B)
Get HefestoAI pricing information
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description lacks critical details such as whether the pricing data is static or dynamic, if authentication is required, rate limits, or what happens if HefestoAI services are unavailable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely compact at only four words, with zero redundant content. Every word earns its place, and the structure is appropriately front-loaded for a simple retrieval tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool, the description minimally suffices, but given the lack of output schema and annotations, it should ideally mention what specific pricing information is returned (e.g., tiers, per-unit costs, subscription plans) or the expected response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, establishing a baseline score of 4 per evaluation rules. No additional parameter context is needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('Get') and resource ('HefestoAI pricing information'), making the basic purpose understandable. However, it fails to differentiate from sibling tools like 'analyze' or 'compare' which could potentially also handle pricing data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings (e.g., whether to use 'pricing' for simple lookups versus 'analyze' for detailed cost breakdowns), nor any prerequisites or authentication requirements mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
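Before publishing, it can help to sanity-check the file's structure locally. A minimal sketch, assuming only the two fields shown above are required (the actual schema at glama.ai may enforce more):

```python
import json


def validate_glama_json(raw: str) -> bool:
    """Minimal structural check for a /.well-known/glama.json payload."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError:
        return False
    maintainers = doc.get("maintainers")
    return (
        isinstance(doc.get("$schema"), str)
        and isinstance(maintainers, list)
        and all(isinstance(m, dict) and "email" in m for m in maintainers)
    )


sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```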
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.