
developer-tools-mcp-server

Server Details

Search GitHub, npm, PyPI, StackOverflow, ArXiv from one MCP — built for coding agents.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 4/5

Tools are generally distinct with clear get vs search categories. However, search_arxiv and search_google_scholar both target academic papers, which may cause confusion for agents needing to choose between them.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case (e.g., get_github_repo, search_arxiv), making the set predictable and easy to navigate.

Tool Count: 5/5

Seven tools cover a well-scoped set of developer needs—package info, repo data, and searches across code, Q&A, and papers—without being too many or too few.

Completeness: 4/5

The set covers common developer lookup operations, but the absence of npm/PyPI package search and GitHub code search leaves minor gaps that agents can work around.

Available Tools

7 tools
get_github_repo: A
Read-only

Fetch detailed statistics and metadata for a GitHub repository. Returns star count, fork count, open issue count, primary programming language, project description, last updated timestamp, and contributor count. Use for evaluating open-source projects, competitive analysis, or monitoring project health.

Parameters (JSON Schema):
- repo (required): Repository in format 'owner/repo' (e.g. 'facebook/react', 'kubernetes/kubernetes')
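The 'owner/repo' format is the tool's only input, so a client can validate it before issuing a call. A minimal sketch, assuming the standard MCP JSON-RPC `tools/call` request shape; the `build_get_repo_call` helper and the regex are our own illustration, not part of the server:

```python
import re

# Rough 'owner/repo' shape check; real GitHub naming rules are stricter.
REPO_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def build_get_repo_call(repo: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for get_github_repo.

    Raises ValueError if `repo` is not in 'owner/repo' form.
    """
    if not REPO_RE.match(repo):
        raise ValueError(f"repo must be in 'owner/repo' form, got {repo!r}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_github_repo",
            "arguments": {"repo": repo},
        },
    }

req = build_get_repo_call("facebook/react")
print(req["params"]["arguments"])  # {'repo': 'facebook/react'}
```

Catching the malformed-input case client-side avoids spending a round trip on a call the server would reject anyway.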
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the description's fetch-only claim is consistent with them. However, it does not disclose rate limits, authentication needs, or error behavior beyond what the annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences, front-loading the primary function and then listing key outputs and use cases without extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With only one parameter and no output schema, the description adequately covers the tool's purpose and return information. It lacks error handling details but is sufficient for a simple fetch tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already provides a detailed description for the 'repo' parameter (owner/repo format). The description adds no additional semantic meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches detailed statistics and metadata for a GitHub repository, listing specific data points and use cases. It distinguishes from siblings like get_npm_package by focusing on GitHub repos.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for evaluating projects but does not explicitly contrast with siblings like search_github, which is for searching rather than fetching details. No when-to-use or when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_npm_package: A
Read-only

Look up Node.js package information from NPM registry. Returns latest version, download statistics (weekly/monthly), dependency list, package description, license, and GitHub link. Use for evaluating JavaScript libraries, checking maintenance status, or reviewing package popularity.

Parameters (JSON Schema):
- package_name (required): NPM package name exactly as published (e.g. 'express', 'react', 'lodash', '@babel/core')
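Scoped names such as '@babel/core' are valid npm identifiers and are easy to reject by accident. A hedged sketch of a pre-call check; the regex is a simplification of npm's naming rules and the `npm_call_args` helper is our own illustration:

```python
import re

# Simplified approximation of npm package-name rules:
# an optional @scope/ prefix, then lowercase name characters.
NPM_NAME_RE = re.compile(r"^(@[a-z0-9][a-z0-9._-]*/)?[a-z0-9][a-z0-9._-]*$")

def npm_call_args(package_name: str) -> dict:
    """Return tools/call params for get_npm_package, validating the name first."""
    if not NPM_NAME_RE.match(package_name):
        raise ValueError(f"not a valid npm package name: {package_name!r}")
    return {"name": "get_npm_package", "arguments": {"package_name": package_name}}

for name in ("express", "@babel/core"):
    print(npm_call_args(name)["arguments"]["package_name"])
```

The check mirrors the parameter table's examples: bare names and scoped names both pass, while names with spaces or uppercase letters are rejected before a request is sent.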
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. Description adds detail about output (version, stats, deps, license, link) without contradicting annotations. Adds behavioral context beyond structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose. Each adds value: the first states the action, the second the returned data fields, the third the use cases. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description adequately explains return fields. For a simple lookup tool with annotations covering safety and open-world hints, this is complete enough for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single parameter. Description adds concrete examples ('express', 'react', '@babel/core') clarifying usage beyond schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Look up Node.js package information from NPM registry' and enumerates specific data fields (latest version, download stats, dependencies, etc.). It distinguishes from siblings like get_pypi_package and get_github_repo by focusing on NPM.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: 'evaluating JavaScript libraries, checking maintenance status, or reviewing package popularity.' Does not explicitly mention when not to use it, but sibling context implies alternatives for other ecosystems.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pypi_package: A
Read-only

Retrieve Python package information from PyPI (Python Package Index). Returns current version, download counts, dependencies, release history, package homepage, and PyPI page URL. Use for Python library evaluation, dependency analysis, or checking package quality metrics.

Parameters (JSON Schema):
- package_name (required): PyPI package name as listed in the registry (e.g. 'numpy', 'django', 'flask', 'pandas')
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint, indicating a safe, read-only operation without side effects. The description does not add further behavioral context (e.g., rate limits, data freshness, or pagination), so it adds no extra value beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences, front-loading the primary purpose and return data, followed by use cases. Every sentence adds value with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and no output schema, the description adequately covers the return data and use context. It lacks details like whether the data is live or cached, but overall completeness is high for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for the single parameter, with a description including examples. The description adds context about the return data but does not enhance understanding of the parameter itself beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves Python package information from PyPI, listing specific returned data (version, download counts, dependencies, etc.). It distinguishes from siblings like get_github_repo and get_npm_package by specifying the registry and data type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases ('Python library evaluation, dependency analysis, or checking package quality metrics'), which guides when to use it. However, it does not explicitly state when not to use it or mention alternatives, though sibling tool names imply other registries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_arxiv: A
Read-only

Search arXiv for academic papers in computer science, machine learning, AI, physics, and mathematics. Returns paper titles, authors, abstracts, submission dates, and direct PDF download links. Use for researching algorithms, ML techniques, or emerging CS topics.

Parameters (JSON Schema):
- query (required): Research topic in CS/ML/physics (e.g. 'transformer architectures', 'distributed systems', 'quantum algorithms')
- max_results (optional): Papers to return (default 10, suitable for focused research)
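Since max_results is optional, a caller can omit it and rely on the documented default of 10 rather than always sending it. A sketch of argument construction under that assumption; `arxiv_search_args` is an illustrative helper, not part of the server:

```python
def arxiv_search_args(query, max_results=None):
    """Build tools/call arguments for search_arxiv.

    Omits max_results when the caller does not override the
    server-side default of 10 (per the parameter table).
    """
    if not query.strip():
        raise ValueError("query must be non-empty")
    args = {"query": query}
    if max_results is not None:
        if max_results < 1:
            raise ValueError("max_results must be >= 1")
        args["max_results"] = max_results
    return args

print(arxiv_search_args("transformer architectures"))
# {'query': 'transformer architectures'}
print(arxiv_search_args("quantum algorithms", max_results=25))
# {'query': 'quantum algorithms', 'max_results': 25}
```

Sending only overridden optionals keeps the payload minimal and lets the server's own defaults apply.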
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses returned fields (titles, authors, abstracts, dates, PDF links) and scope (arXiv papers). With readOnlyHint true, safety is clear. No contradictions. Could mention rate limits or authentication, but not critical for this tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-loaded with action and scope, followed by returned fields and usage guidance. No unnecessary words; every phrase adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (2 params, no output schema, clear annotations), the description covers purpose, returns, and usage context completely. No missing essential information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters fully with descriptions (100% coverage). The description adds no new meaning beyond the schema, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly specifies 'Search arXiv' for academic papers in relevant fields, distinguishing it from siblings like search_google_scholar which covers broader sources. The verb-resource combination is unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States 'Use for researching algorithms, ML techniques, or emerging CS topics', providing clear context. However, does not explicitly contrast with sibling tools or specify when not to use, which would have elevated the score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_github: A
Read-only

Search GitHub repositories by keyword to discover code, projects, and libraries. Returns matching repositories with star count, description, language, and URL. Use for finding libraries, examples, or competitive projects in specific domains.

Parameters (JSON Schema):
- query (required): Search keywords or project name (e.g. 'web framework', 'authentication library', 'data visualization')
- max_results (optional): Number of repository results to return (default 10, up to 100 for broad searches)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint and openWorldHint, reducing burden. Description adds return fields (star count, description, language, URL) but omits potential behaviors like ordering, pagination, or rate limits. Adds marginal value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with the action and a summary of returned results. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately explains return structure. Covers purpose, inputs, and outputs. Could mention result ordering or default max_results limits, but still fairly complete for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so description adds minimal extra meaning. The description's 'by keyword' aligns with the query parameter, but no further semantic detail beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Search GitHub repositories by keyword', specifying the action and resource. It distinguishes from sibling tools like get_github_repo (single repo) and other platform searches, though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit usage suggestion: 'Use for finding libraries, examples, or competitive projects in specific domains.' Implicitly sets context for discovery vs. known item retrieval, though lacks explicit when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_google_scholar: A
Read-only

Search Google Scholar for computer science research papers, citations, and academic publications. Returns paper title, authors, publication details, citation count, and link to paper. Use for finding research on CS topics, reviewing state-of-the-art, or citation tracking.

Parameters (JSON Schema):
- query (required): Computer science research topic (e.g. 'natural language processing', 'distributed consensus algorithms')
- max_results (optional): Maximum papers to return (default 10)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already label the tool as read-only (readOnlyHint=true). The description adds value by specifying returned fields (title, authors, etc.) and that it searches Google Scholar, which is non-destructive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences front-loaded with the core purpose, followed by return details and usage guidance. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having no output schema, the description adequately explains return fields. It lacks pagination or error details, but for a simple search tool with two parameters, it is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline 3 is appropriate. The description does not add extra parameter details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies 'Search Google Scholar for computer science research papers' and lists returned fields, distinguishing it from sibling search tools like search_arxiv by mentioning the specific repository and domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases: 'finding research on CS topics, reviewing state-of-the-art, or citation tracking'. It does not mention when not to use or alternatives, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_stackoverflow: A
Read-only

Search Stack Overflow Q&A platform for programming questions, solutions, and code examples. Returns matching questions, answer count, view count, accepted answer snippet, tags, and link to full discussion. Use for troubleshooting, code examples, or finding solutions to common problems.

Parameters (JSON Schema):
- query (required): Programming problem or question (e.g. 'how to merge arrays in javascript', 'python asyncio example')
- max_results (optional): Number of Q&A results to retrieve (default 10, higher for comprehensive answers)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds value beyond annotations by listing specific return fields (answer count, view count, accepted answer snippet, tags, link). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences: the first identifies the action and resource, the second the return fields, the third the use cases. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description lists key return fields (questions, answer count, etc.), which is sufficient for a simple tool. Lacks mention of ordering or pagination, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%; both parameters have descriptions. Description does not add new meaning beyond the schema, so baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'Search' and the resource 'Stack Overflow Q&A platform'. Distinguishes from sibling tools like search_github and search_arxiv by specifying programming Q&A.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear use cases like troubleshooting and code examples, but lacks explicit exclusion scenarios or comparisons with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
