developer-tools-mcp-server
Server Details
Search GitHub, npm, PyPI, Stack Overflow, and arXiv from one MCP server, built for coding agents.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 7 of 7 tools scored.
Tools are generally distinct with clear get vs search categories. However, search_arxiv and search_google_scholar both target academic papers, which may cause confusion for agents needing to choose between them.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., get_github_repo, search_arxiv), making the set predictable and easy to navigate.
Seven tools cover a well-scoped set of developer needs—package info, repo data, and searches across code, Q&A, and papers—without being too many or too few.
The set covers common developer lookup operations; the lack of npm/PyPI package search and GitHub code search leaves minor gaps that agents can work around.
Available Tools
7 tools

get_github_repo (A, Read-only)
Fetch detailed statistics and metadata for a GitHub repository. Returns star count, fork count, open issue count, primary programming language, project description, last updated timestamp, and contributor count. Use for evaluating open-source projects, competitive analysis, or monitoring project health.
| Name | Required | Description | Default |
|---|---|---|---|
| repo | Yes | Repository in format 'owner/repo' (e.g. 'facebook/react', 'kubernetes/kubernetes') | |
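For orientation, here is a minimal sketch of the JSON-RPC tools/call request an MCP client would send to invoke this tool, reusing the 'facebook/react' example from the parameter table; the id is arbitrary and the exact transport framing depends on your client.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_github_repo",
    "arguments": { "repo": "facebook/react" }
  }
}
```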
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, and the description's claim of fetching data is consistent with that. However, it does not disclose rate limits, authentication needs, or error behaviors beyond what the annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, front-loading the primary function and listing key outputs and use cases without extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With only one parameter and no output schema, the description adequately covers the tool's purpose and return information. It lacks error handling details but is sufficient for a simple fetch tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides a detailed description for the 'repo' parameter (owner/repo format). The description adds no additional semantic meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches detailed statistics and metadata for a GitHub repository, listing specific data points and use cases. It distinguishes from siblings like get_npm_package by focusing on GitHub repos.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for evaluating projects but does not explicitly contrast with siblings like search_github, which is for searching rather than fetching details. No when-to-use or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_npm_package (A, Read-only)
Look up Node.js package information from NPM registry. Returns latest version, download statistics (weekly/monthly), dependency list, package description, license, and GitHub link. Use for evaluating JavaScript libraries, checking maintenance status, or reviewing package popularity.
| Name | Required | Description | Default |
|---|---|---|---|
| package_name | Yes | NPM package name exactly as published (e.g. 'express', 'react', 'lodash', '@babel/core') | |
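As a sketch of how the single parameter is passed, an example tools/call request using the scoped-package example '@babel/core' from the table; only the tool name and arguments change relative to the get_github_repo request shown above.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_npm_package",
    "arguments": { "package_name": "@babel/core" }
  }
}
```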
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. Description adds detail about output (version, stats, deps, license, link) without contradicting annotations. Adds behavioral context beyond structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose. Every sentence adds value: the first states the action, the second lists the data fields, and the third provides use cases. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description adequately explains return fields. For a simple lookup tool with annotations covering safety and open-world hints, this is complete enough for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single parameter. Description adds concrete examples ('express', 'react', '@babel/core') clarifying usage beyond schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Look up Node.js package information from NPM registry' and enumerates specific data fields (latest version, download stats, dependencies, etc.). It distinguishes from siblings like get_pypi_package and get_github_repo by focusing on NPM.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases: 'evaluating JavaScript libraries, checking maintenance status, or reviewing package popularity.' Does not explicitly mention when not to use it, but sibling context implies alternatives for other ecosystems.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pypi_package (A, Read-only)
Retrieve Python package information from PyPI (Python Package Index). Returns current version, download counts, dependencies, release history, package homepage, and PyPI page URL. Use for Python library evaluation, dependency analysis, or checking package quality metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| package_name | Yes | PyPI package name as listed in registry (e.g. 'numpy', 'django', 'flask', 'pandas') | |
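A hedged example call, using the 'numpy' package name from the table; the shape follows the standard MCP tools/call envelope.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_pypi_package",
    "arguments": { "package_name": "numpy" }
  }
}
```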
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint, indicating a safe, read-only operation without side effects. The description does not add further behavioral context (e.g., rate limits, data freshness, or pagination), so it adds no extra value beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences, front-loading the primary purpose, then return data, then use cases. Every sentence adds value with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter and no output schema, the description adequately covers the return data and use context. It lacks details like whether the data is live or cached, but overall completeness is high for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter, with a description including examples. The description adds context about the return data but does not enhance understanding of the parameter itself beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves Python package information from PyPI, listing specific returned data (version, download counts, dependencies, etc.). It distinguishes from siblings like get_github_repo and get_npm_package by specifying the registry and data type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases ('Python library evaluation, dependency analysis, or checking package quality metrics'), which guides when to use it. However, it does not explicitly state when not to use it or mention alternatives, though sibling tool names imply other registries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_arxiv (A, Read-only)
Search arXiv for academic papers in computer science, machine learning, AI, physics, and mathematics. Returns paper titles, authors, abstracts, submission dates, and direct PDF download links. Use for researching algorithms, ML techniques, or emerging CS topics.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Research topic in CS/ML/physics (e.g. 'transformer architectures', 'distributed systems', 'quantum algorithms') | |
| max_results | No | Papers to return (suitable for focused research) | 10 |
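An illustrative tools/call request combining the required query with the optional max_results; the query string comes from the table's examples, and the max_results value of 5 is arbitrary (the tool defaults to 10 when it is omitted).

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "search_arxiv",
    "arguments": {
      "query": "transformer architectures",
      "max_results": 5
    }
  }
}
```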
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses returned fields (titles, authors, abstracts, dates, PDF links) and scope (arXiv papers). With readOnlyHint true, safety is clear. No contradictions. Could mention rate limits or authentication, but not critical for this tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences front-loaded with action and scope, followed by return fields and usage guidance. No unnecessary words; every phrase adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (2 params, no output schema, clear annotations), the description covers purpose, returns, and usage context completely. No missing essential information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters fully with descriptions (100% coverage). The description adds no new meaning beyond the schema, meeting the baseline but not exceeding it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly specifies 'Search arXiv' for academic papers in relevant fields, distinguishing it from siblings like search_google_scholar which covers broader sources. The verb-resource combination is unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Use for researching algorithms, ML techniques, or emerging CS topics', providing clear context. However, does not explicitly contrast with sibling tools or specify when not to use, which would have elevated the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_github (A, Read-only)
Search GitHub repositories by keyword to discover code, projects, and libraries. Returns matching repositories with star count, description, language, and URL. Use for finding libraries, examples, or competitive projects in specific domains.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search keywords or project name (e.g. 'web framework', 'authentication library', 'data visualization') | |
| max_results | No | Number of repository results to return (up to 100 for broad searches) | 10 |
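A sketch of a minimal request that relies on the default of 10 results by omitting max_results; 'web framework' is one of the example queries from the table.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "search_github",
    "arguments": { "query": "web framework" }
  }
}
```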
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and openWorldHint, reducing burden. Description adds return fields (star count, description, language, URL) but omits potential behaviors like ordering, pagination, or rate limits. Adds marginal value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with action and result summary. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains return structure. Covers purpose, inputs, and outputs. Could mention result ordering or default max_results limits, but still fairly complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so description adds minimal extra meaning. The description's 'by keyword' aligns with the query parameter, but no further semantic detail beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Search GitHub repositories by keyword', specifying the action and resource. It distinguishes from sibling tools like get_github_repo (single repo) and other platform searches, though not explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit usage suggestion: 'Use for finding libraries, examples, or competitive projects in specific domains.' Implicitly sets context for discovery vs. known item retrieval, though lacks explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_google_scholar (A, Read-only)
Search Google Scholar for computer science research papers, citations, and academic publications. Returns paper title, authors, publication details, citation count, and link to paper. Use for finding research on CS topics, reviewing state-of-the-art, or citation tracking.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Computer science research topic (e.g. 'natural language processing', 'distributed consensus algorithms') | |
| max_results | No | Maximum papers to return | 10 |
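For comparison with search_arxiv, a sketch of an equivalent request against this tool; the query is taken from the table's examples and the envelope is the same MCP tools/call shape.

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "search_google_scholar",
    "arguments": { "query": "distributed consensus algorithms" }
  }
}
```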
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already label the tool as read-only (readOnlyHint=true). The description adds value by specifying returned fields (title, authors, etc.) and that it searches Google Scholar, which is non-destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences front-loaded with the core purpose, followed by return details and usage guidance. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description adequately explains return fields. It lacks pagination or error details, but for a simple search tool with two parameters, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline 3 is appropriate. The description does not add extra parameter details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies 'Search Google Scholar for computer science research papers' and lists returned fields, distinguishing it from sibling search tools like search_arxiv by mentioning the specific repository and domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases: 'finding research on CS topics, reviewing state-of-the-art, or citation tracking'. It does not mention when not to use or alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stackoverflow (A, Read-only)
Search Stack Overflow Q&A platform for programming questions, solutions, and code examples. Returns matching questions, answer count, view count, accepted answer snippet, tags, and link to full discussion. Use for troubleshooting, code examples, or finding solutions to common problems.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Programming problem or question (e.g. 'how to merge arrays in javascript', 'python asyncio example') | |
| max_results | No | Number of Q&A results to retrieve (higher for comprehensive answers) | 10 |
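A final sketch showing a broader pull of results; the query comes from the table's examples, and the max_results value of 20 is an arbitrary illustration of asking for more than the default 10.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_stackoverflow",
    "arguments": {
      "query": "python asyncio example",
      "max_results": 20
    }
  }
}
```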
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds value beyond annotations by listing specific return fields (answer count, view count, accepted answer snippet, tags, link). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: the first identifies the action and resource, the second details return fields, and the third lists use cases. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description lists key return fields (questions, answer count, etc.), which is sufficient for a simple tool. Lacks mention of ordering or pagination, but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; both parameters have descriptions. Description does not add new meaning beyond the schema, so baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Search' and the resource 'Stack Overflow Q&A platform'. Distinguishes from sibling tools like search_github and search_arxiv by specifying programming Q&A.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear use cases like troubleshooting and code examples, but lacks explicit exclusion scenarios or comparisons with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.