Glama
piiiico

proof-of-commitment

lookup_npm_package

Retrieve a behavioral commitment profile for any npm package. Analyze signals like age, download trends, publisher depth, and GitHub activity to vet dependencies and detect potential supply chain risks.

Instructions

Get a behavioral commitment profile for any npm package. Returns real signals: package age, download volume and trend (growing/stable/declining), release consistency, npm publisher count, GitHub contributor count, and linked GitHub activity.

Supply chain attacks target packages with low publisher depth (few people with npm publish access). Behavioral signals reveal what download counts hide.

Useful for: vetting dependencies, identifying abandonware, due diligence on open-source packages. Examples: "langchain", "@anthropic-ai/sdk", "express", "litellm"
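The signals described above lend themselves to simple triage logic on the agent side. A minimal sketch of one such heuristic (the field names and thresholds here are illustrative assumptions, not the tool's actual scoring):

```python
def flag_supply_chain_risk(profile: dict) -> list[str]:
    """Illustrative triage of a package's behavioral signals.

    `profile` keys mirror the signals described above (age, download
    trend, publisher count, contributor count); the exact key names
    and cutoffs are assumptions for this sketch.
    """
    flags = []
    if profile.get("npm_publisher_count", 0) <= 1:
        flags.append("low publisher depth")
    if profile.get("download_trend") == "declining":
        flags.append("declining downloads")
    if profile.get("package_age_days", 0) < 90:
        flags.append("young package")
    if profile.get("github_contributor_count", 0) < 3:
        flags.append("shallow contributor base")
    return flags

# A popular, mature package with a single npm publisher still gets
# flagged, which is exactly what raw download counts would hide.
print(flag_supply_chain_risk({
    "npm_publisher_count": 1,
    "download_trend": "growing",
    "package_age_days": 2000,
    "github_contributor_count": 40,
}))
```

The point of the sketch is that publisher depth is evaluated independently of popularity, matching the note above about what download counts hide.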

Input Schema

Name: package (required)
Description: npm package name. Examples: "langchain", "@anthropic-ai/sdk", "express". Scoped packages need the @ prefix.
Default: none
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It explains that the tool returns behavioral signals and mentions the supply chain attack context. However, it does not explicitly state that the operation is read-only, nor does it disclose side effects, authorization requirements, or rate limits. While informative, it lacks explicit safety disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loaded with the action, and contains only relevant information. It uses clear language and provides concrete examples (e.g., 'langchain', 'express'). Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Although no output schema is provided, the description thoroughly explains the returned signals (package age, download trend, release consistency, etc.) and the tool's purpose. It covers use cases and includes examples. For a simple one-parameter tool, this is complete and sufficient for an agent to understand what to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There is only one parameter ('package'), and the schema describes it fully, including examples and instructions for scoped packages. The tool description does not add parameter-level meaning beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a behavioral commitment profile for any npm package.' It lists specific return signals (package age, download volume, trend, etc.) and distinguishes from sibling tools by focusing on npm packages, making it unique among tools like lookup_github_repo and lookup_pypi_package.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases: 'vetting dependencies, identifying abandonware, due diligence on open-source packages.' While it does not mention when not to use or alternative tools, the sibling list and context make the appropriate usage clear. A score of 4 reflects good guidance without explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/piiiico/proof-of-commitment'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.