Developer Tools MCP Server
Server Details
MCP server providing developer tools data for AI agents, including npm packages, Stack Overflow Q&A, GitHub trending repos, and code snippets.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
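Since the listing's URL is not shown above, the connection sketch below uses a placeholder endpoint. It assumes the official TypeScript SDK (@modelcontextprotocol/sdk) and its Streamable HTTP client transport; the client name is also illustrative:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute this server's URL or your Glama MCP
// Gateway URL before running.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);
const client = new Client({ name: "dev-tools-demo", version: "1.0.0" });

await client.connect(transport);

// List the seven tools this server advertises.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The tool-specific sketches further down reuse this client object.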
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.7/5 across 7 of 7 tools scored.
Each tool targets a distinct data source and operation type. 'get' tools retrieve specific resources (repos, packages) while 'search' tools query broader indexes (GitHub, arXiv, Scholar, StackOverflow). No functional overlap exists between tools.
Strict adherence to snake_case and verb_noun convention throughout. 'get' prefix consistently used for specific resource retrieval; 'search' prefix consistently used for keyword-based queries. Resource names are specific and unambiguous.
Seven tools is an ideal count for a focused developer information lookup server. The selection covers the essential domains (code repositories, package registries, academic sources, Q&A) without bloat or redundancy.
Covers the core read-only workflows for developer research effectively. Minor gap: asymmetric coverage where GitHub/npm/PyPI have specific 'get' operations while arXiv, Scholar, and StackOverflow lack equivalent 'get by ID' tools, requiring agents to search even when they know the specific identifier.
Available Tools
7 tools

get_github_repo (Grade B, Read-only)
Fetch detailed statistics and metadata for a GitHub repository. Returns star count, fork count, open issue count, primary programming language, project description, last updated timestamp, and contributor count. Use for evaluating open-source projects, competitive analysis, or monitoring project health.
| Name | Required | Description | Default |
|---|---|---|---|
| repo | Yes | Repository in format 'owner/repo' (e.g. 'facebook/react', 'kubernetes/kubernetes') | |
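A hedged invocation sketch, reusing the client from the connection example under Server Details; the repository value is the example from the parameter table:

```typescript
// 'repo' must be in 'owner/repo' form, e.g. 'facebook/react'.
const repoStats = await client.callTool({
  name: "get_github_repo",
  arguments: { repo: "facebook/react" },
});
console.log(repoStats.content); // stars, forks, issues, language, etc.
```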
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Lists returned data fields (stars, forks, issues, language, description) compensating for missing output schema, but lacks operational details like authentication requirements or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tight, front-loaded phrasing; the return values are enumerated efficiently without fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for simple tool but incomplete due to missing parameter format specification; appropriately covers return values given lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Fails to compensate for 0% schema description coverage; does not specify expected 'repo' parameter format (e.g., 'owner/repo' vs full URL).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb and resource ('Fetch detailed statistics and metadata for a GitHub repository'); implicitly distinguishes itself from search_github by focusing on retrieval of a specific repository's data rather than search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus search_github or other alternatives, nor when-not-to-use conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_npm_package (Grade C, Read-only)
Look up Node.js package information from NPM registry. Returns latest version, download statistics (weekly/monthly), dependency list, package description, license, and GitHub link. Use for evaluating JavaScript libraries, checking maintenance status, or reviewing package popularity.
| Name | Required | Description | Default |
|---|---|---|---|
| package_name | Yes | NPM package name exactly as published (e.g. 'express', 'react', 'lodash', '@babel/core') | |
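Assuming the same client as above, a sketch showing that scoped npm package names keep their '@scope/' prefix, per the table's example values:

```typescript
// Scoped packages are passed exactly as published, prefix included.
const npmInfo = await client.callTool({
  name: "get_npm_package",
  arguments: { package_name: "@babel/core" },
});
```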
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description only lists return fields but omits critical behavioral traits: error handling for missing packages, rate limits, authentication requirements, or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise and front-loaded with action verb, though arguably too minimal given the information gaps in other dimensions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for simple tool with no output schema—compensates by listing return fields—but incomplete due to missing parameter semantics and behavioral details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage for package_name parameter, yet description fails to compensate by explaining expected format (e.g., scoped vs unscoped packages, case sensitivity).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific action (look up) and resource (npm package information), listing exact data fields returned (version, downloads, dependencies, description), though it could better distinguish itself from its get_github_repo sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use versus alternatives (e.g., get_github_repo for repository metadata) or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pypi_package (Grade C, Read-only)
Retrieve Python package information from PyPI (Python Package Index). Returns current version, download counts, dependencies, release history, package homepage, and PyPI page URL. Use for Python library evaluation, dependency analysis, or checking package quality metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| package_name | Yes | PyPI package name as listed in registry (e.g. 'numpy', 'django', 'flask', 'pandas') | |
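A matching sketch for PyPI lookups, again assuming the client from the connection example; the package name is one of the table's examples:

```typescript
const pypiInfo = await client.callTool({
  name: "get_pypi_package",
  arguments: { package_name: "numpy" },
});
```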
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Identifies the external data source (PyPI) but omits error handling, rate limits, return structure, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief and front-loaded, but arguably too sparse given the lack of schema documentation that needs compensation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for a tool with zero schema descriptions and no output schema; fails to explain what 'details' are returned or parameter expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage and the description fails to compensate, offering no guidance on package_name format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the core action (retrieve) and domain (PyPI/Python), distinguishing it from the npm/GitHub siblings, though the scope of the returned information is only loosely outlined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lacks explicit when-to-use guidance; only implicit differentiation via 'PyPI' mention versus sibling search/get tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_arxiv (Grade C, Read-only)
Search arXiv for academic papers in computer science, machine learning, AI, physics, and mathematics. Returns paper titles, authors, abstracts, submission dates, and direct PDF download links. Use for researching algorithms, ML techniques, or emerging CS topics.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Research topic in CS/ML/physics (e.g. 'transformer architectures', 'distributed systems', 'quantum algorithms') | |
| max_results | No | Papers to return (suitable for focused research) | 10 |
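A sketch of a bounded arXiv search, assuming the same client; max_results is optional and defaults to 10 per the table:

```typescript
// Cap the result set below the default of 10 for a focused query.
const arxivHits = await client.callTool({
  name: "search_arxiv",
  arguments: { query: "transformer architectures", max_results: 5 },
});
```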
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fails to disclose rate limits, authentication requirements, result format, or error behaviors beyond the basic search scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no filler; every word contributes to understanding the tool's scope and domain.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple two-parameter search tool, though lacking return value description (compounded by absence of output schema) and parameter details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage and the description adds no context about what constitutes a valid query string or the constraints/behavior of max_results.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action (search) and resource (arXiv papers), with domain specificity (CS, ML, AI, physics, math) that implicitly distinguishes it from sibling search tools like Google Scholar or GitHub.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists relevant subject domains but provides no explicit guidance on when to choose this over search_google_scholar or other academic sources, and no exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_github (Grade C, Read-only)
Search GitHub repositories by keyword to discover code, projects, and libraries. Returns matching repositories with star count, description, language, and URL. Use for finding libraries, examples, or competitive projects in specific domains.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search keywords or project name (e.g. 'web framework', 'authentication library', 'data visualization') | |
| max_results | No | Number of repository results to return (up to 100 for broad searches) | 10 |
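A sketch of a broader GitHub search, raising max_results above the default (the table allows up to 100); query text is one of the table's examples:

```typescript
const githubHits = await client.callTool({
  name: "search_github",
  arguments: { query: "data visualization", max_results: 25 },
});
```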
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided and description adds no behavioral context (auth, rate limits, result format, empty result handling).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with no fluff, though arguably too minimal given missing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for the complexity; lacks parameter details, output description, and differentiation from sibling tools despite 0% schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage; description mentions 'keyword' hinting at query param but ignores max_results entirely.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States basic function (search repos by keyword) but fails to distinguish from sibling get_github_repo (search vs. fetch specific).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus alternatives like get_github_repo or other search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_google_scholar (Grade C, Read-only)
Search Google Scholar for computer science research papers, citations, and academic publications. Returns paper title, authors, publication details, citation count, and link to paper. Use for finding research on CS topics, reviewing state-of-the-art, or citation tracking.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Computer science research topic (e.g. 'natural language processing', 'distributed consensus algorithms') | |
| max_results | No | Maximum papers to return | 10 |
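A sketch relying on the default result count, assuming the same client:

```typescript
// Omitting max_results falls back to the default of 10.
const scholarHits = await client.callTool({
  name: "search_google_scholar",
  arguments: { query: "distributed consensus algorithms" },
});
```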
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden but reveals only the basic action, omitting rate limits, result format, and pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is appropriately terse and front-loaded, though excessive brevity given missing structured metadata.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient for the complexity gap: lacks output schema, parameter descriptions, and annotations, yet description provides no details on return values or search capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% yet description fails to compensate by explaining query syntax or max_results constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Search) and target resource (Google Scholar for academic papers), but fails to differentiate from sibling search_arxiv.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus search_arxiv or other search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stackoverflow (Grade C, Read-only)
Search Stack Overflow Q&A platform for programming questions, solutions, and code examples. Returns matching questions, answer count, view count, accepted answer snippet, tags, and link to full discussion. Use for troubleshooting, code examples, or finding solutions to common problems.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Programming problem or question (e.g. 'how to merge arrays in javascript', 'python asyncio example') | |
| max_results | No | Number of Q&A results to retrieve (higher for comprehensive answers) | 10 |
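A final sketch for Stack Overflow lookups, assuming the same client; the query is one of the table's examples:

```typescript
const soHits = await client.callTool({
  name: "search_stackoverflow",
  arguments: { query: "python asyncio example", max_results: 3 },
});
```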
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden but only states basic search capability without disclosing return format, rate limits, or result ranking.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise and front-loaded, though arguably too minimal given the lack of schema documentation and presence of similar sibling tools.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Insufficient given no output schema exists and sibling tools overlap significantly; misses opportunity to clarify unique value of StackOverflow community answers versus code repositories.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage and description fails to compensate—no mention of max_results default (10) or query formatting expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (search) and resource (StackOverflow Q&A), but fails to differentiate from sibling search_github for programming queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus search_github or other programming-related siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once verified, you can:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.