code-pattern-risk-scanner

Server Details

Cloudflare Workers MCP server: code-pattern-risk-scanner

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: lazymac2x/code-pattern-risk-scanner-api
GitHub Stars: 0
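
Because the server is exposed over Streamable HTTP, any MCP client that supports that transport can talk to it directly. Below is a minimal connection sketch using the official TypeScript MCP SDK (@modelcontextprotocol/sdk); the endpoint URL is a placeholder, since the page does not display the actual URL, and the client name is arbitrary. The per-tool examples further down reuse this `client`.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server's real Streamable HTTP URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://code-pattern-risk-scanner.example.workers.dev/mcp"),
);

const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Enumerate the five tools the server advertises.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```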

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (C)

Average 2.7/5 across 5 of 5 tools scored.

Server Coherence (A)
Disambiguation: 4/5

Tools have distinct purposes, but detect_antipatterns and scan_patterns both involve scanning for patterns, which could cause confusion. The descriptions help differentiate them (antipatterns vs. specific pattern categories), but some overlap remains.

Naming Consistency: 4/5

All names use lowercase with underscores, but the pattern varies: most are verb_noun (detect_antipatterns, scan_patterns, suggest_refactor), while risk_score and security_check are not verb-initial, breaking full consistency.

Tool Count: 5/5

Five tools is well-scoped for a code pattern risk scanner, covering detection, risk calculation, specific scanning, security check, and refactoring suggestions without unnecessary bloat.

Completeness: 4/5

The tool surface covers key operations for scanning and risk assessment. Minor gaps exist, such as no tool for pattern discovery or configuration management, but core workflows are supported.

Available Tools

5 tools
detect_antipatterns (C)

Detect antipatterns with optional severity filter

Parameters (JSON Schema):
- code (required)
- language (optional)
- severity (optional)
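
Neither the description nor the schema documents the accepted severity values, so the call below is a plausible-usage sketch: the sample code string is arbitrary, the "high" severity value is an assumption, and the `client` comes from the connection sketch above.

```typescript
// Assumed severity value; accepted values are undocumented.
const result = await client.callTool({
  name: "detect_antipatterns",
  arguments: {
    code: "for (var i = 0; i < items.length; i++) { document.write(items[i]); }",
    language: "javascript",
    severity: "high",
  },
});
console.log(result.content); // Output format is undocumented; inspect the content blocks.
```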
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. The term 'detect' suggests a read-only analysis, but no explicit statement confirms safety, side effects, or return behavior. The description is too vague to fully inform an agent about the tool's operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no superfluous words, but it is overly terse. It could be expanded to provide more value without significant bloat, so it strikes only a moderate balance between conciseness and completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters, 0% schema coverage, no output schema, and no annotations, the description is severely lacking. It fails to explain what antipatterns are detected, which languages are supported, the output format, or any other operational details, leaving the agent with insufficient information to correctly invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to add meaning. The description only mentions 'optional severity filter', which adds minimal context for the severity parameter. The 'code' and 'language' parameters receive no additional explanation, failing to compensate for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool detects antipatterns and mentions an optional severity filter. It uses a specific verb ('detect') and resource ('antipatterns'), and the sibling tool names (risk_score, scan_patterns, security_check, suggest_refactor) are distinct enough to imply different purposes, though not explicitly differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus its siblings. The description only states what it does without contextual recommendations or exclusions, leaving the agent to infer usage without support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

risk_score (C)

Calculate 0-100 risk score with severity breakdown

Parameters (JSON Schema):
- code (required)
- language (optional)
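
A hedged call sketch, reusing the `client` from the connection sketch above; the sample snippet is arbitrary, and the shape of the returned score and severity breakdown is undocumented.

```typescript
const result = await client.callTool({
  name: "risk_score",
  arguments: {
    code: "eval(userInput);", // deliberately risky sample input
    language: "javascript",
  },
});
// Expect a 0-100 score with a severity breakdown; the exact format is undocumented.
console.log(result.content);
```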
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full responsibility for behavioral disclosure. It only states it 'calculates' a score, missing information on idempotency, side effects, or required permissions. The minimal text adds little transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that is front-loaded with the core action. However, it may be too brief, lacking necessary detail for effective use.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations, output schema, and parameter descriptions, the description is insufficient. It fails to specify input constraints, output details beyond a severity breakdown, or any usage context, making it incomplete for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and the description does not explain any parameter. 'code' and 'language' are left entirely to the agent's inference, providing no added meaning beyond the schema's type definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates a risk score from 0-100 with a severity breakdown, which distinguishes it from sibling tools like detect_antipatterns or security_check. However, it does not specify what input is required beyond the schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like scan_patterns or security_check. The description implies usage for risk scoring but offers no context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_patterns (B)

Scan JS/TS code for 26 security and performance patterns

Parameters (JSON Schema):
- code (required): JS/TS code snippet to scan (max 50,000 chars)
- language (optional): Language hint: javascript, typescript, jsx, tsx, etc.
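
Since this tool documents its parameters, less guesswork is needed. The sketch below reuses the `client` from the connection sketch above, with an arbitrary sample snippet kept well under the documented 50,000-character limit.

```typescript
const result = await client.callTool({
  name: "scan_patterns",
  arguments: {
    code: "const handler = (req: Request) => new Response(req.url);",
    language: "typescript", // documented language hint
  },
});
console.log(result.content); // Matched against the 26 security and performance patterns.
```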
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description fails to disclose any behavioral traits beyond the scanning action (e.g., whether it's read-only, any side effects, auth requirements, or output format). This is inadequate for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence of 9 words with zero waste. It efficiently conveys the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two parameters and no output schema, the description is somewhat complete but does not specify the return format or any additional behavior. Given the absence of output schema, this is a moderate gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters having descriptions. The tool description adds minimal extra meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (scan) and the target (JS/TS code for 26 security and performance patterns). It distinguishes from sibling tools like detect_antipatterns and security_check by specifying the count and types of patterns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like detect_antipatterns, risk_score, or security_check. The description lacks context about appropriate scenarios or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

security_check (C)

Security-focused scan returning vulnerable | at-risk | secure status

Parameters (JSON Schema):
- code (required)
- language (optional)
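
A call sketch reusing the `client` from the connection sketch above; the SQL-injection-style sample is arbitrary, chosen only to plausibly trigger a non-secure status.

```typescript
const result = await client.callTool({
  name: "security_check",
  arguments: {
    code: "db.query(`SELECT * FROM users WHERE id = ${id}`);",
    language: "javascript",
  },
});
// Expect one of the documented statuses: vulnerable, at-risk, or secure.
console.log(result.content);
```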
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that it returns a categorical status, but lacks information about whether it modifies data, requires special permissions, or has side effects. With no annotations, this is a gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that quickly conveys the tool's purpose, though it could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description does not fully explain return values beyond the three statuses, and omits context on when to use or prerequisites. Given the tool's simplicity, more completeness is expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not provide any additional meaning about the parameters 'code' and 'language' beyond their names. Schema coverage is 0%, so parameter semantics are entirely missing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states that the tool performs a 'Security-focused scan' returning one of three statuses, clearly indicating its function. However, it does not explicitly differentiate itself from sibling tools like detect_antipatterns or scan_patterns, leaving some ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not indicate when to use security_check versus alternatives such as risk_score or scan_patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_refactor (C)

Get prioritized step-by-step refactoring suggestions

Parameters (JSON Schema):
- code (required)
- language (optional)
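
A call sketch reusing the `client` from the connection sketch above; the deeply nested sample function is arbitrary, and how the returned steps are prioritized is undocumented.

```typescript
const result = await client.callTool({
  name: "suggest_refactor",
  arguments: {
    code: "function f(a, b, c) { if (a) { if (b) { if (c) { return a + b + c; } } } }",
    language: "javascript",
  },
});
// Prioritization criteria and output structure are undocumented; inspect the content.
console.log(result.content);
```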
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior. It mentions 'prioritized step-by-step' but does not detail what that entails (e.g., criteria for prioritization, whether it mutates data, or any side effects). The agent gains little behavioral insight beyond the tool's basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence. However, it sacrifices informative content for brevity. It could be expanded without losing conciseness, e.g., by mentioning the type of code to be refactored.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, annotations, and parameter details, the description is incomplete. It does not explain what the output looks like (list of steps?), how prioritization works, or whether language is required. The tool's functionality is vaguely defined.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain the parameters ('code' and 'language'). It fails to add meaning (e.g., 'code' as the source code to refactor, or 'language' as the programming language). The agent must infer from names alone, which is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('prioritized step-by-step refactoring suggestions'), clearly indicating the tool's purpose. However, it does not explicitly distinguish itself from sibling tools like detect_antipatterns or risk_score, though the focus on refactoring is distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., for refactoring suggestions vs. detecting antipatterns). The description implies usage for code refactoring but offers no context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
