check_injection
Analyze code for injection vulnerabilities to identify security risks before deployment.
Instructions
Check Injection Endpoint
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | | |
| language | No | | |
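Since neither parameter is documented, a call can only be inferred from the schema's field names. A sketch of what an invocation might look like, assuming the standard MCP `tools/call` request shape (the argument values here are purely illustrative):

```json
{
  "method": "tools/call",
  "params": {
    "name": "check_injection",
    "arguments": {
      "code": "query = \"SELECT * FROM users WHERE id = \" + user_id",
      "language": "python"
    }
  }
}
```

Whether `language` accepts free-form strings or a fixed set of values is unknown; nothing in the schema constrains it.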
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosure yet reveals nothing about behavioral traits. It does not state whether the tool executes the provided code (destructive) or performs static analysis (safe), what the return format is, or whether there are rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
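One way to disclose behavioral traits in MCP is through tool annotations. A minimal sketch, assuming the tool performs static analysis only and never executes the submitted code (an assumption this server does not confirm):

```json
{
  "name": "check_injection",
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```

Annotations are hints, not guarantees, so the prose description should still state explicitly that the code is analyzed rather than run.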
Is the description appropriately sized, front-loaded, and free of redundancy?
At three words, the description is under-specified rather than efficiently concise: it conveys no meaningful information beyond the tool name itself, so no sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Completely inadequate for a 2-parameter security analysis tool with 0% schema coverage and no output schema. The description leaves critical gaps regarding input formats, execution safety, and return values that an agent needs to invoke this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%: both the 'code' and 'language' parameters lack descriptions. The tool description does not compensate; it never explains whether 'code' is the source to analyze or the payload to inject, nor which 'language' values are accepted.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
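A sketch of what the input schema could look like with descriptions filled in. The wording, the execution-safety claim, and the `enum` values are assumptions for illustration, since the server documents none of this:

```json
{
  "type": "object",
  "properties": {
    "code": {
      "type": "string",
      "description": "Source code to analyze for injection vulnerabilities. The code is statically analyzed, never executed."
    },
    "language": {
      "type": "string",
      "description": "Language of the submitted code. When omitted, the language is auto-detected.",
      "enum": ["java", "python", "javascript", "csharp"]
    }
  },
  "required": ["code"]
}
```

Even two short description strings like these would resolve the source-vs-payload ambiguity and tell agents what happens when `language` is left out.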
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Check Injection Endpoint' restates the tool name (tautology) with the ambiguous addition of 'Endpoint.' It fails to specify what type of injection (SQL, command, code) is detected, what 'Endpoint' refers to (the target or the tool itself), or how it differs from sibling security tools like check_dependencies or check_headers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
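For contrast, a hypothetical rewrite of the description that would satisfy this dimension. The behavioral claims (static analysis, return format, relationship to sibling tools) are unverified assumptions about this server, shown only to illustrate the shape of a disclosing description:

```json
{
  "name": "check_injection",
  "description": "Statically analyze source code for injection vulnerabilities (SQL, command, and code injection) before deployment. The submitted code is never executed. Returns a list of findings with vulnerability type, severity, and location. Use check_dependencies for vulnerable-package checks instead."
}
```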
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., check_dependencies vs. check_injection), prerequisites for the code parameter, or safety considerations when analyzing untrusted input.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.