DepShield MCP
Server Quality Checklist
- Disambiguation 4/5
Tools are generally distinct with clear boundaries: audit_project targets manifest files, check_dependency is for pre-install validation, check_npm_health focuses on maintenance metrics, and deep_scan examines transitive dependencies. Minor overlap exists between check_dependency and check_npm_health (both examine single packages), but their purposes (security vs health) are different enough to guide selection.
- Naming Consistency 4/5
Most tools follow a clear verb_noun pattern (audit_project, check_dependency, find_safe_version, get_advisory_detail, suggest_alternative). However, 'deep_scan' breaks the convention by using an adjective_noun structure rather than a verb-led name like 'scan_dependencies' or 'analyze_transitive_deps'. 'check_npm_health' is consistent but domain-specific (npm) while others are generic.
- Tool Count 5/5
Seven tools is an ideal count for this domain. The set covers the full workflow: project scanning (audit_project), package vetting (check_dependency, check_npm_health, deep_scan), remediation (find_safe_version, suggest_alternative), and investigation (get_advisory_detail). No tool feels redundant or filler.
- Completeness 4/5
Strong coverage of the dependency security lifecycle including vulnerability detection, health assessment, transitive dependency analysis, and remediation strategies. Minor gaps include the lack of an 'apply_fix' or 'update_manifest' tool to automatically remediate findings, and no SBOM generation capability, but agents can work around these with the existing tools.
Average 3.4/5 across 7 of 7 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
This server provides 7 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
get_advisory_detail
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to specify what constitutes 'full details,' error handling for invalid IDs, rate limits, or whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is appropriately sized for a simple lookup tool. The action is front-loaded, though 'full details' is vague and could be replaced with specific return value hints given the lack of output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with 100% schema coverage, the description is minimally adequate. However, with no output schema provided, the failure to specify what 'full details' includes (severity, description, affected versions, references) leaves a significant gap in understanding the tool's utility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter already well-documented via examples (GHSA-jf85-cpcp-j695, CVE-2021-23337). The description mentions 'CVE, GHSA, etc' which aligns with the schema but adds no additional semantic context beyond the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('security advisory') with specific examples (CVE, GHSA) that hint at the expected input format. However, it does not explicitly differentiate from sibling tools like check_dependency that might also return vulnerability information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives (e.g., when to use get_advisory_detail vs check_dependency or deep_scan). No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
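For reference, the critiques above concern a single-parameter lookup. A hedged reconstruction of the definition being scored (the parameter name 'id' and the exact wording are assumptions inferred from the review, not taken from the server's source):

```typescript
// Hypothetical reconstruction of get_advisory_detail's definition; only the
// advisory ID examples and the 'CVE, GHSA, etc' phrasing come from the review.
const getAdvisoryDetail = {
  name: "get_advisory_detail",
  description: "Get full details for a security advisory (CVE, GHSA, etc).",
  inputSchema: {
    type: "object",
    properties: {
      id: {
        type: "string", // parameter name "id" is an assumption
        description: "Advisory ID, e.g. GHSA-jf85-cpcp-j695 or CVE-2021-23337",
      },
    },
    required: ["id"],
  },
};
```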
audit_project
- Behavior 2/5
With no annotations provided, the description carries the full burden of behavioral disclosure, yet only minimally discloses behavior. It mentions returning a 'full audit report' but does not clarify whether the operation is read-only, whether it makes external network calls to vulnerability databases, what rate limits apply, or what format the output takes (JSON vs. a formatted string).
- Conciseness 5/5
Two sentences with zero waste. Front-loaded with the core action ('Scan') and specific file types, followed by return value. Every word earns its place.
- Completeness 3/5
For a simple 2-parameter tool with 100% schema coverage, the description covers the basic operation adequately. However, given the lack of annotations and output schema, it omits important context about whether the tool is read-only, requires authentication, or what structure the 'full audit report' takes.
- Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'package.json or requirements.txt' which mirrors the schema's filePath description, and implies dependency scanning which relates to includeDevDependencies. It adds minimal semantic value beyond what the schema already clearly documents.
- Purpose 4/5
The description clearly states the tool scans package.json or requirements.txt files for dependency vulnerabilities and returns a full audit report. It identifies specific resource types and implies file-level scope, though it doesn't explicitly differentiate from siblings like check_dependency (which suggests single-dependency checking).
- Usage Guidelines 2/5
No guidance provided on when to use this versus sibling tools like check_dependency, deep_scan, or find_safe_version. No mention of prerequisites (e.g., file must exist) or when a user might prefer individual dependency checks over a full file scan.
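The recurring 'no annotations' finding is addressable structurally. A minimal sketch, assuming audit_project really is a read-only scanner that queries external advisory databases (the hint values are assumptions about this server's behavior, not confirmed by its code), using the MCP ToolAnnotations fields:

```typescript
// MCP tool annotations that would disclose behavior without relying on the
// description alone. All values below are assumptions, not verified facts.
const auditProjectAnnotations = {
  readOnlyHint: true,    // assumption: scans the manifest, never modifies it
  destructiveHint: false,
  idempotentHint: true,  // assumption: same manifest + advisory data => same report
  openWorldHint: true,   // assumption: calls external vulnerability databases
};
```

Declaring hints like these would lift the Behavior scores without lengthening any description.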
find_safe_version
- Behavior 2/5
With no annotations provided, the description carries the full burden of behavioral disclosure. While it specifies the selection criteria (newest version with zero vulnerabilities), it fails to disclose error handling (what happens when no safe version exists), return format, or the vulnerability database source.
- Conciseness 5/5
The description is a single, front-loaded sentence of 11 words with zero redundancy. Every word contributes essential information about the tool's function.
- Completeness 3/5
Given the simple two-parameter input schema with complete coverage, the description is minimally adequate. However, lacking both annotations and an output schema, it should ideally disclose error states (e.g., package not found, no safe version available) to be complete.
- Parameters 3/5
Schema coverage is 100% with both parameters fully documented. The description maps the concept of 'package' to the 'name' parameter but does not add semantics beyond the schema, such as explaining the ecosystem default behavior or valid values.
- Purpose 4/5
The description clearly states the specific action (find), resource (package version), and constraint (zero known vulnerabilities). However, it does not explicitly differentiate from siblings like check_dependency or suggest_alternative that also handle security concerns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines2/5Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like check_dependency or suggest_alternative. It omits prerequisites (e.g., exact package name requirements) and does not indicate when no safe version exists.
check_npm_health
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully lists the evaluation criteria (downloads, maintenance, license, deprecation) and output format (0-100 score), compensating for the missing output schema. However, it omits safety indicators (read-only vs destructive), error handling (e.g., non-existent packages), or rate limiting concerns.
- Conciseness 5/5
The description consists of two efficient sentences with zero waste. It front-loads the action ('Assess') and uses a colon-delimited list to specify evaluation factors, ending with the output specification ('Scored 0-100'). Every word earns its place.
- Completeness 4/5
For a single-parameter tool without output schema or annotations, the description is reasonably complete. It discloses the scoring range and evaluation dimensions, providing sufficient context for invocation. It could be improved by mentioning error cases (e.g., package not found) or confirming the read-only nature of the operation.
- Parameters 3/5
The input schema has 100% description coverage for the single 'name' parameter ('npm package name'). The description adds no additional parameter context, examples, or format constraints beyond what the schema already provides, meeting the baseline expectation for high schema coverage.
- Purpose 4/5
The description clearly states the tool assesses package health and trustworthiness using specific metrics (downloads, maintenance, license, deprecation) and produces a numeric score (0-100). However, it does not explicitly differentiate from sibling tool 'check_dependency' which may have overlapping functionality.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives like 'check_dependency', 'deep_scan', or 'audit_project'. There are no stated prerequisites, exclusions, or conditions for use.
suggest_alternative
- Behavior 2/5
No annotations are provided, yet the description offers no behavioral context beyond the basic operation. It omits how alternatives are ranked/selected, what criteria are used for recommendations, whether the operation is read-only, or what the response structure looks like.
- Conciseness 5/5
Single sentence with zero waste. The phrase efficiently packs the action, target, and trigger conditions into a compact front-loaded structure where every word earns its place.
- Completeness 3/5
Adequate for a simple 2-parameter tool, but lacks description of return values or output format given the absence of an output schema. For a recommendation tool, omitting what kind of alternatives data is returned (names only? scores? compatibility info?) leaves a meaningful gap.
- Parameters 3/5
With 100% schema description coverage, the baseline is 3. The description mirrors the schema's examples for the 'reason' parameter ('vulnerable, deprecated, unmaintained') but adds no additional semantic value regarding parameter formats, validation rules, or the optional nature of the reason field.
- Purpose 5/5
The description provides a specific verb ('Find'), clear resource ('alternative packages'), and precise trigger conditions ('vulnerable, deprecated, or unmaintained'). It implicitly distinguishes from sibling 'find_safe_version' by focusing on alternative packages rather than alternative versions of the same package.
- Usage Guidelines 3/5
The description specifies when to use the tool (when packages are vulnerable, deprecated, or unmaintained), but fails to explicitly contrast with 'find_safe_version'—a critical sibling that suggests upgrading the same package rather than replacing it. No guidance on when to prefer one approach over the other.
deep_scan
- Behavior 3/5
With no annotations provided, the description carries full disclosure burden. It adds valuable specificity about detection patterns (typosquats, newly added deps, low-download packages) beyond generic 'security scan,' but lacks operational details like execution time expectations, API call behavior, or failure modes for invalid packages.
- Conciseness 5/5
Single sentence structure with parenthetical elaboration maximizes information density. Every word earns its place: 'transitive' establishes scope, parenthetical examples clarify 'suspicious patterns' without disrupting flow, and no filler words are present.
- Completeness 3/5
Adequate for a 3-parameter tool with no output schema, covering the core security scanning purpose well. However, given the lack of output schema and annotations, the description should ideally disclose return value structure (vulnerability report format) or execution characteristics (async vs sync, typical duration) to be fully complete.
- Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'transitive dependency tree' which conceptually maps to the depth parameter, and 'package' maps to name, but adds no explicit parameter guidance, syntax details, or version selection logic beyond what the schema already provides.
- Purpose 5/5
The description uses specific verb 'Scan' with clear resource 'package's transitive dependency tree' and enumerates specific detection targets (vulnerabilities, typosquats, low-download packages). The terms 'deep' and 'transitive' effectively distinguish this from siblings like check_dependency or audit_project.
- Usage Guidelines 3/5
The description implies usage through terminology like 'deep scan' and 'transitive dependency tree' (suggesting comprehensive analysis vs. shallow checks), but provides no explicit guidance on when to choose this over check_dependency or audit_project, nor mentions prerequisites like package availability.
check_dependency
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions vulnerability checking and existence verification, but omits critical details such as whether it returns a report or a boolean, what happens when vulnerabilities are found (blocking vs. warning), how non-existent packages are handled, and which data sources are used.
- Conciseness 5/5
Two sentences, zero waste. The first sentence front-loads the core action (vulnerability checking + existence verification), while the second provides the critical workflow constraint (MUST call before install). Every word earns its place.
- Completeness 3/5
Given the lack of output schema and annotations, the description should ideally characterize the return value (e.g., vulnerability report, risk score, or boolean) to help the agent interpret results. The input side is complete due to schema coverage, but the output side leaves agents uncertain about what data structure to expect.
- Parameters 3/5
The schema has 100% description coverage for all 3 parameters (name, version, ecosystem), establishing a baseline of 3. The description mentions 'package' generally but adds no additional semantic context—such as version syntax requirements or ecosystem-specific behaviors—beyond what the schema already provides.
- Purpose 5/5
The description clearly states the tool checks for 'known vulnerabilities' and verifies registry existence, using specific verbs. It distinguishes from siblings (like audit_project or deep_scan) by focusing on single-package pre-installation verification rather than project-wide auditing.
- Usage Guidelines 4/5
The 'MUST be called before installing any dependency' directive provides explicit workflow guidance. However, it lacks explicit comparison to siblings like 'audit_project' (for existing dependencies) or 'deep_scan' (for comprehensive analysis), which would help agents choose between similar security tools.
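To illustrate the 'use X instead of Y when Z' pattern this dimension rewards, here is one possible rewrite of check_dependency's description, drawing only on sibling tools named in this review (illustrative, not the server's actual text):

```typescript
// Hypothetical description showing explicit sibling contrasts.
const improvedDescription =
  "Check a single package for known vulnerabilities and verify it exists " +
  "in the registry. MUST be called before installing any dependency. " +
  "Use audit_project to scan a manifest that is already in place, and " +
  "deep_scan when the transitive dependency tree also needs review.";
```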
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Two badge variants are available, a Card Badge and a Score Badge; copy the corresponding embed snippet into your README.md.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
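A minimal sketch of that formula in TypeScript (identifier names are illustrative; this is not Glama's actual implementation):

```typescript
// Weighted 1-5 Tool Definition Quality Score (TDQS) for a single tool.
const DIMENSION_WEIGHTS: Record<string, number> = {
  purpose: 0.25,
  usageGuidelines: 0.2,
  behavior: 0.2,
  parameters: 0.15,
  conciseness: 0.1,
  completeness: 0.1,
};

function toolTdqs(scores: Record<string, number>): number {
  return Object.entries(DIMENSION_WEIGHTS).reduce(
    (sum, [dim, weight]) => sum + weight * scores[dim],
    0,
  );
}

// Overall score: 70% definition quality, 30% coherence, where definition
// quality blends the mean with the minimum so one weak tool drags it down.
function overallScore(perTool: Record<string, number>[], coherence: number): number {
  const tdqs = perTool.map(toolTdqs);
  const mean = tdqs.reduce((a, b) => a + b, 0) / tdqs.length;
  const definitionQuality = 0.6 * mean + 0.4 * Math.min(...tdqs);
  return 0.7 * definitionQuality + 0.3 * coherence;
}

function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}
```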
MCP directory API
We provide all the information about MCP servers via our MCP API.
```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/devanshkaria88/depshield-mcp'
```
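Or, a minimal TypeScript equivalent (assumes a runtime with a global fetch and top-level await, such as Node 18+ with ES modules; no response fields are assumed):

```typescript
// Fetch this server's directory entry and print the raw JSON as-is.
const url = "https://glama.ai/api/mcp/v1/servers/devanshkaria88/depshield-mcp";
const response = await fetch(url);
console.log(JSON.stringify(await response.json(), null, 2));
```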
If you have feedback or need assistance with the MCP directory API, please join our Discord server.