Glama

Server Details

Pre-commit code quality guardian. Detects semantic drift in AI-generated code.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: artvepa80/Agents-Hefesto
GitHub Stars: 1

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
analyze (Grade A)

Analyze a code snippet for quality issues and semantic drift

Parameters (JSON Schema):
- code (required): Code snippet to analyze
- language (optional): Programming language (python, javascript, etc)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It specifies analysis targets (quality issues, semantic drift) but omits critical behavioral traits: whether this is read-only (implied but not stated), whether it requires external API calls, rate limits, or what the analysis output format looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single 10-word sentence with zero waste, front-loaded with the verb 'Analyze'. The technical terms 'quality issues' and 'semantic drift' precisely constrain scope without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple 2-parameter tool with no nested objects, but lacks output schema. Description should ideally explain what analysis results are returned (structured report, list of issues, scores) since no output schema exists to document return values. Adequate but incomplete for invocation confidence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both 'code' and 'language' fully documented. Description mentions 'code snippet' which aligns with the 'code' parameter, but adds no syntax details, validation rules, or behavior when 'language' is omitted (auto-detection vs failure). Baseline 3 appropriate when schema does heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Analyze' with clear resource 'code snippet' and distinct scope 'quality issues and semantic drift'. Distinct from siblings: 'compare' implies differential analysis, 'install' implies package management, and 'pricing' implies cost retrieval, while this targets static code quality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While the scope is clear from the functionality described, there are no 'when to use' or 'when not to use' indicators, nor prerequisites mentioned (e.g., code size limits, supported languages).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
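The critiques above (undisclosed read-only behavior, missing output format, unspecified handling of an omitted 'language') can all be addressed in the tool definition itself. A minimal sketch of an improved definition for the analyze tool, where the expanded description text and the claimed behaviors (read-only, auto-detection) are illustrative assumptions, not the server's actual implementation:

```python
# Hypothetical improved definition for the "analyze" tool. The parameter names and
# original description come from the listing above; everything added to the
# description (read-only claim, output format, auto-detection) is an assumption
# shown only to illustrate the disclosure the rubric asks for.
improved_tool = {
    "name": "analyze",
    "description": (
        "Analyze a code snippet for quality issues and semantic drift. "
        "Read-only: modifies nothing and calls no external services. "
        "Returns a JSON report listing issues with severity and line numbers. "
        "If 'language' is omitted, the language is auto-detected."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "code": {
                "type": "string",
                "description": "Code snippet to analyze",
            },
            "language": {
                "type": "string",
                "description": "Programming language (python, javascript, etc)",
            },
        },
        "required": ["code"],
    },
}
```

Each sentence maps to a rubric dimension: the second addresses Behavior, the third Completeness, and the fourth the omitted-parameter ambiguity flagged under Parameters.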

compare (Grade B)

Compare HefestoAI with other code quality tools

Parameters (JSON Schema):
- tool (optional): Tool to compare with (sonarqube, snyk, github-advanced-security, semgrep, claude-code-security)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure, but only establishes the subject (HefestoAI). It fails to disclose return format, whether this calls external APIs, or what happens when the optional 'tool' parameter is omitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: a single 7-word sentence, front-loaded with the action and subject and containing no redundant or filler content. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with complete schema documentation, the description adequately identifies the comparison domain. However, lacking both output schema and behavioral details on optional parameter usage, it stops short of being fully self-sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'other code quality tools' which aligns with the schema's enum-like list, but adds no additional semantic context about parameter syntax, optional behavior, or valid values beyond the structured schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific action (compare) and clear scope (HefestoAI vs other tools), distinguishing it from siblings like 'analyze' or 'pricing'. However, it lacks specificity about what dimensions are compared (features, pricing, security) and omits output format details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to prefer this over 'analyze' or 'pricing', or on when to omit the optional 'tool' parameter. No mention of prerequisites or comparison methodology.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
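The Parameters critique above notes that the valid values for 'tool' live only in the free-text description ("enum-like list"). Declaring them as an actual JSON Schema enum makes them machine-checkable. A sketch under that assumption; the values mirror the listing, while the omitted-parameter behavior stated in the description is illustrative:

```python
# Hypothetical tightened schema for the "compare" tool. The five tool names come
# from the listing above; the "omit to compare against all" behavior is an
# assumption added to show how omitted-parameter semantics could be disclosed.
SUPPORTED_TOOLS = [
    "sonarqube",
    "snyk",
    "github-advanced-security",
    "semgrep",
    "claude-code-security",
]

compare_schema = {
    "type": "object",
    "properties": {
        "tool": {
            "type": "string",
            "enum": SUPPORTED_TOOLS,
            "description": "Tool to compare with; omit to compare against all supported tools.",
        }
    },
    "required": [],
}

def validate_tool_choice(tool):
    """Accept an omitted parameter or any enum member; reject anything else."""
    return tool is None or tool in compare_schema["properties"]["tool"]["enum"]
```

With an enum in place, a client can reject an unsupported value (say, 'eslint') before the call is ever made, instead of discovering the constraint from a runtime error.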

install (Grade A)

Get installation instructions for HefestoAI

Parameters (JSON Schema): No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full behavioral burden but fails to disclose: read-only nature (critical given name 'install' implies mutation), return format, platform support, or auth requirements. Only states what is retrieved, not behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single short sentence, front-loaded with an action verb. No redundancy or waste. Appropriate for a zero-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for simple retrieval tool but omits return value specification (crucial given no output schema). Doesn't clarify instruction format, platform-specific handling, or prerequisites.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, establishing a baseline of 4 per the rubric. The description implicitly confirms the target scope (HefestoAI), which the schema lacks, earning the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Get' with clear resource 'installation instructions' and scope 'for HefestoAI'. Distinct from siblings (analyze/compare/pricing) which handle different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus sibling tools or prerequisites. While the name makes purpose obvious, it lacks 'when-not' exclusions or context triggers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
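The Behavior critique above flags that the name 'install' implies mutation even though the tool only retrieves instructions. The MCP specification's tool annotations (readOnlyHint, destructiveHint, openWorldHint) exist for exactly this disclosure. A minimal sketch; the annotation field names follow the spec, while the expanded description text and the hint values are illustrative assumptions about this server:

```python
# Sketch of the "install" tool declaring its read-only nature via MCP tool
# annotations. The hint names (readOnlyHint, destructiveHint, openWorldHint) are
# from the MCP spec; the values and the added description text are assumptions
# shown to illustrate the disclosure the rubric asks for.
install_tool = {
    "name": "install",
    "description": (
        "Get installation instructions for HefestoAI. Read-only: returns "
        "instruction text and installs nothing on the caller's machine."
    ),
    "annotations": {
        "readOnlyHint": True,      # no state is mutated despite the name
        "destructiveHint": False,  # nothing is deleted or overwritten
        "openWorldHint": False,    # no external services are contacted
    },
}
```

Annotations are hints, not guarantees, which is why the rubric still wants the description itself to state the read-only behavior in plain language.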

pricing (Grade B)

Get HefestoAI pricing information

Parameters (JSON Schema): No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description lacks critical details such as whether the pricing data is static or dynamic, if authentication is required, rate limits, or what happens if HefestoAI services are unavailable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely compact at only four words, with zero redundant content. Every word earns its place, and the structure is appropriately front-loaded for a simple retrieval tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool, the description minimally suffices, but given the lack of output schema and annotations, it should ideally mention what specific pricing information is returned (e.g., tiers, per-unit costs, subscription plans) or the expected response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, establishing a baseline score of 4 per evaluation rules. No additional parameter context is needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('Get') and resource ('HefestoAI pricing information'), making the basic purpose understandable. However, it fails to differentiate from sibling tools like 'analyze' or 'compare' which could potentially also handle pricing data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings (e.g., whether to use 'pricing' for simple lookups versus 'analyze' for detailed cost breakdowns), nor any prerequisites or authentication requirements mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
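The Usage Guidelines critique above asks for an explicit "use X, not Y, when Z" routing statement. A sketch of what such a description could look like for the pricing tool; the wording, the claimed content, and the no-auth claim are all illustrative assumptions rather than the server's actual text:

```python
# Illustrative rewrite of the "pricing" description with the explicit usage
# guidance the rubric asks for. Every specific (static data, no auth, routing
# advice) is an assumption used only to demonstrate the pattern.
pricing_description = (
    "Get HefestoAI pricing information as static text. "
    "Use this tool for cost and plan questions only; use 'compare' for feature "
    "comparisons with other tools and 'analyze' for code review. "
    "No authentication required."
)
```

A single routing sentence like the second one is usually enough to lift this dimension: it tells the agent both when to call the tool and which sibling handles the adjacent cases.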
