Server Details

npm MCP — wraps the npm Registry API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-npm
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.2/5 across 3 of 3 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_downloads retrieves download statistics, get_package fetches detailed package metadata, and search_packages performs registry searches. There is no overlap in functionality, making tool selection unambiguous for an agent.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern (get_downloads, get_package, search_packages) using snake_case. The verbs 'get' and 'search' are appropriately descriptive and applied consistently across the tool set.

Tool Count: 4/5

With only 3 tools, the server is minimal but well-scoped for basic npm registry interactions. It covers key operations (search, metadata retrieval, download stats), though it might feel slightly thin for broader npm management tasks like publishing or version management.

Completeness: 3/5

The tools provide good read-only coverage for exploring npm packages (search, metadata, downloads), but there are notable gaps for common npm operations like publishing packages, managing versions, or accessing user-specific data. This limits the server to querying rather than full package lifecycle management.

Available Tools

3 tools
get_downloads: A

Get the download count for an npm package over a given period (e.g., last-week, last-month, last-day).

Parameters (JSON Schema):
- name (required): Package name
- period (optional): Download period: last-day, last-week (default), last-month, or a date range like 2024-01-01:2024-06-30

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool's function but lacks details on authentication needs, rate limits, error conditions, and response format. While it mentions the period parameter, it doesn't explain behavioral aspects such as what happens with invalid periods or how results are structured.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that clearly communicates the tool's purpose with zero wasted words. It's appropriately sized for a simple read operation and front-loads the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with 2 parameters and 100% schema coverage, the description is adequate but has gaps. Without annotations or output schema, it should ideally mention response format or data structure. The description covers basic functionality but lacks completeness for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema, mentioning only period examples, and doesn't provide additional semantic context about parameter interactions or constraints. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the download count') and resource ('for an npm package') with scope ('over a given period'). It distinguishes from sibling tools like 'get_package' (likely general package info) and 'search_packages' (search functionality) by focusing specifically on download metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving download statistics with time-based filtering, but provides no explicit guidance on when to use this tool versus alternatives like 'get_package' (which might include download data) or 'search_packages'. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
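For context, a tool with these parameters maps directly onto npm's public download-counts API at api.npmjs.org, which needs no authentication. A minimal sketch of what such a tool likely wraps — the Python helper names here are illustrative, not the server's actual code:

```python
import json
import urllib.request

NPM_DOWNLOADS_API = "https://api.npmjs.org/downloads/point"

def build_downloads_url(name: str, period: str = "last-week") -> str:
    """Build the npm downloads "point" endpoint URL.

    `period` mirrors the tool's parameter: last-day, last-week,
    last-month, or a date range like 2024-01-01:2024-06-30.
    """
    return f"{NPM_DOWNLOADS_API}/{period}/{name}"

def get_downloads(name: str, period: str = "last-week") -> dict:
    # No auth required; the API returns JSON such as
    # {"downloads": ..., "start": "...", "end": "...", "package": "express"}
    with urllib.request.urlopen(build_downloads_url(name, period)) as resp:
        return json.load(resp)
```

For example, `get_downloads("express", "last-month")` would return last month's download count for express.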

get_package: B

Get detailed metadata for a specific npm package: version, description, dependencies, homepage, and repository.

Parameters (JSON Schema):
- name (required): Exact package name (e.g., "express", "lodash")

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation ('Get'), but doesn't mention permissions, rate limits, error conditions, or what happens if the package doesn't exist. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes specific examples of metadata fields. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema, no annotations), the description is minimally adequate. It explains what metadata is returned but doesn't cover behavioral aspects like error handling or performance characteristics. The absence of an output schema means the description should ideally mention return format, but it doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'name' fully documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema, so it meets the baseline of 3 for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('detailed metadata for a specific npm package'), and lists specific metadata fields (version, description, dependencies, homepage, repository). However, it doesn't explicitly distinguish this from sibling tools like 'get_downloads' or 'search_packages', which likely serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_packages' or 'get_downloads'. It doesn't mention prerequisites, exclusions, or comparative use cases, leaving the agent to infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
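For reference, package metadata like this comes from the registry's packument endpoint, `https://registry.npmjs.org/<name>`. A hedged sketch of the equivalent call (helper names are hypothetical; encoding scoped names is the one subtlety):

```python
import json
import urllib.request
from urllib.parse import quote

REGISTRY = "https://registry.npmjs.org"

def build_package_url(name: str) -> str:
    # Scoped names like "@types/node" keep the "@" but must
    # encode the "/" as %2F for the registry URL
    return f"{REGISTRY}/{quote(name, safe='@')}"

def get_package(name: str) -> dict:
    # Returns the registry's packument: versions, dist-tags,
    # description, repository, homepage, and more
    with urllib.request.urlopen(build_package_url(name)) as resp:
        return json.load(resp)
```

A 404 from this endpoint means the package does not exist — exactly the error case the review above notes is undocumented.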

search_packages: B

Search the npm registry for packages. Returns name, description, latest version, publish date, and publisher.

Parameters (JSON Schema):
- query (required): Search query string
- limit (optional): Maximum number of results to return (default 10, max 50)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields (name, description, and so on), which is helpful, but doesn't cover important aspects such as rate limits, authentication needs, error handling, or pagination behavior for a search operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes key return information. Every word earns its place with zero waste, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description provides basic purpose and return fields but lacks details on behavioral traits, usage context, and error handling. It's minimally adequate but has clear gaps given the tool's complexity and lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond the schema, such as search syntax examples or how the query interacts with the npm registry. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search the npm registry for packages') and resource ('packages'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'get_package' which might retrieve specific package details rather than search across packages.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives like 'get_package' or 'get_downloads'. The description lacks context about use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
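The search behavior described above matches the registry's public `/-/v1/search` endpoint. A hypothetical sketch of the equivalent call (function names are illustrative, and the 50-result cap mirrors the tool's documented max, not the API's):

```python
import json
import urllib.request
from urllib.parse import urlencode

SEARCH_ENDPOINT = "https://registry.npmjs.org/-/v1/search"

def build_search_url(query: str, limit: int = 10) -> str:
    # `text` is the free-form query; `size` bounds the result count
    return SEARCH_ENDPOINT + "?" + urlencode({"text": query, "size": min(limit, 50)})

def search_packages(query: str, limit: int = 10) -> list[dict]:
    with urllib.request.urlopen(build_search_url(query, limit)) as resp:
        data = json.load(resp)
    # Each entry under "objects" carries a "package" dict with the
    # name, description, version, date, and publisher fields the
    # tool description promises
    return [obj["package"] for obj in data["objects"]]
```

The endpoint also accepts a `from` parameter for pagination — one of the behaviors the review above flags as undisclosed.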
