Server Details

MCP server providing developer tools data including npm packages, Stack Overflow Q&A, GitHub trending repos, and code snippets for AI agents.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.7/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct data source and operation type. 'get' tools retrieve specific resources (repos, packages) while 'search' tools query broader indexes (GitHub, arXiv, Scholar, StackOverflow). No functional overlap exists between tools.

Naming Consistency: 5/5

Strict adherence to snake_case and verb_noun convention throughout. 'get' prefix consistently used for specific resource retrieval; 'search' prefix consistently used for keyword-based queries. Resource names are specific and unambiguous.

Tool Count: 5/5

Seven tools is an ideal count for a focused developer information lookup server. The selection covers the essential domains (code repositories, package registries, academic sources, Q&A) without bloat or redundancy.

Completeness: 4/5

Covers the core read-only workflows for developer research effectively. Minor gap: asymmetric coverage where GitHub/npm/PyPI have specific 'get' operations while arXiv, Scholar, and StackOverflow lack equivalent 'get by ID' tools, requiring agents to search even when they know the specific identifier.

Available Tools

7 tools
get_github_repo: B
Read-only

Fetch detailed statistics and metadata for a GitHub repository. Returns star count, fork count, open issue count, primary programming language, project description, last updated timestamp, and contributor count. Use for evaluating open-source projects, competitive analysis, or monitoring project health.

Parameters (JSON Schema)
- repo (required): Repository in format 'owner/repo' (e.g. 'facebook/react', 'kubernetes/kubernetes')
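As an illustrative sketch (not taken from this listing), an agent invokes this tool by sending an MCP tools/call JSON-RPC request naming the tool and its arguments; the repository below is only an example value:

```python
import json

# Sketch of an MCP tools/call request for get_github_repo.
# The JSON-RPC envelope follows the MCP specification; transport
# framing (Streamable HTTP here) is handled by the client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_github_repo",
        # 'owner/repo' format, as the schema requires
        "arguments": {"repo": "facebook/react"},
    },
}
print(json.dumps(request))
```

The same envelope shape applies to every tool on this server; only the name and arguments fields change.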
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Lists returned data fields (stars, forks, issues, language, description) compensating for missing output schema, but lacks operational details like authentication requirements or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, tight sentence front-loaded with action; em-dash efficiently enumerates return values without fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for simple tool but incomplete due to missing parameter format specification; appropriately covers return values given lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Fails to compensate for 0% schema description coverage; does not specify expected 'repo' parameter format (e.g., 'owner/repo' vs full URL).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb and resource ('Get GitHub repository stats'), implicitly distinguishes from search_github by focusing on specific repository data retrieval rather than search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use versus search_github or other alternatives, nor when-not-to-use conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_npm_package: C
Read-only

Look up Node.js package information from NPM registry. Returns latest version, download statistics (weekly/monthly), dependency list, package description, license, and GitHub link. Use for evaluating JavaScript libraries, checking maintenance status, or reviewing package popularity.

Parameters (JSON Schema)
- package_name (required): NPM package name exactly as published (e.g. 'express', 'react', 'lodash', '@babel/core')
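Scoped names keep their '@scope/' prefix exactly as published. As a rough sketch of what a valid name looks like (a simplification of the registry's real rules; the helper is ours, not part of this server):

```python
import re

# Simplified npm name pattern: optional '@scope/' prefix, then a
# lowercase URL-safe segment. The registry's actual rules are stricter;
# this is only an illustrative approximation.
NPM_NAME = re.compile(r"^(@[a-z0-9][a-z0-9._-]*/)?[a-z0-9][a-z0-9._-]*$")

def looks_like_npm_name(name):
    """Return True if name roughly matches npm's published-name shape."""
    return bool(NPM_NAME.match(name))
```

Under this approximation, 'express' and '@babel/core' pass while names with uppercase letters or whitespace are rejected.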
Behavior: 2/5

With no annotations provided, description only lists return fields but omits critical behavioral traits: error handling for missing packages, rate limits, authentication requirements, or caching behavior.

Conciseness: 4/5

Extremely concise and front-loaded with action verb, though arguably too minimal given the information gaps in other dimensions.

Completeness: 3/5

Adequate for a simple tool with no output schema (it compensates by listing return fields), but incomplete due to missing parameter semantics and behavioral details.

Parameters: 1/5

Schema has 0% description coverage for package_name parameter, yet description fails to compensate by explaining expected format (e.g., scoped vs unscoped packages, case sensitivity).

Purpose: 4/5

States specific action (Get) and resource (npm package details), listing exact data fields returned (version, downloads, dependencies, description), though could better distinguish from get_github_repo sibling.

Usage Guidelines: 2/5

Provides no guidance on when to use versus alternatives (e.g., get_github_repo for repository metadata) or when-not-to-use scenarios.

get_pypi_package: C
Read-only

Retrieve Python package information from PyPI (Python Package Index). Returns current version, download counts, dependencies, release history, package homepage, and PyPI page URL. Use for Python library evaluation, dependency analysis, or checking package quality metrics.

Parameters (JSON Schema)
- package_name (required): PyPI package name as listed in registry (e.g. 'numpy', 'django', 'flask', 'pandas')
Behavior: 2/5

Identifies the external data source (PyPI) but omits error handling, rate limits, return structure, or authentication requirements.

Conciseness: 4/5

Extremely brief and front-loaded, but arguably too sparse given the lack of schema documentation that needs compensation.

Completeness: 2/5

Insufficient for a tool with zero schema descriptions and no output schema; fails to explain what 'details' are returned or parameter expectations.

Parameters: 1/5

Schema has 0% description coverage and the description fails to compensate, offering no guidance on package_name format or constraints.

Purpose: 4/5

States the core action (get details) and domain (PyPI/Python), distinguishing it from npm/github siblings, though 'details' remains vague.

Usage Guidelines: 2/5

Lacks explicit when-to-use guidance; only implicit differentiation via 'PyPI' mention versus sibling search/get tools.

search_arxiv: C
Read-only

Search arXiv for academic papers in computer science, machine learning, AI, physics, and mathematics. Returns paper titles, authors, abstracts, submission dates, and direct PDF download links. Use for researching algorithms, ML techniques, or emerging CS topics.

Parameters (JSON Schema)
- query (required): Research topic in CS/ML/physics (e.g. 'transformer architectures', 'distributed systems', 'quantum algorithms')
- max_results (optional): Papers to return (default 10, suitable for focused research)
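Since max_results is optional with a stated default of 10, a client can simply omit it. A hypothetical argument builder (the helper name is ours, not part of this server) shows both shapes:

```python
# Hypothetical helper that builds search_arxiv arguments. Omitting
# max_results lets the server apply its documented default of 10.
def arxiv_arguments(query, max_results=None):
    args = {"query": query}
    if max_results is not None:
        args["max_results"] = max_results
    return args

# Server applies the default of 10:
defaulted = arxiv_arguments("transformer architectures")
# Explicit cap of 5 results:
capped = arxiv_arguments("quantum algorithms", max_results=5)
```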
Behavior: 2/5

With no annotations provided, the description fails to disclose rate limits, authentication requirements, result format, or error behaviors beyond the basic search scope.

Conciseness: 5/5

Extremely concise single sentence with no filler; every word contributes to understanding the tool's scope and domain.

Completeness: 3/5

Adequate for a simple two-parameter search tool, though lacking return value description (compounded by absence of output schema) and parameter details.

Parameters: 2/5

Schema has 0% description coverage and the description adds no context about what constitutes a valid query string or the constraints/behavior of max_results.

Purpose: 4/5

Clearly states the action (search) and resource (arXiv papers), with domain specificity (CS, ML, AI, physics, math) that implicitly distinguishes it from sibling search tools like Google Scholar or GitHub.

Usage Guidelines: 2/5

Lists relevant subject domains but provides no explicit guidance on when to choose this over search_google_scholar or other academic sources, and no exclusion criteria.

search_github: C
Read-only

Search GitHub repositories by keyword to discover code, projects, and libraries. Returns matching repositories with star count, description, language, and URL. Use for finding libraries, examples, or competitive projects in specific domains.

Parameters (JSON Schema)
- query (required): Search keywords or project name (e.g. 'web framework', 'authentication library', 'data visualization')
- max_results (optional): Number of repository results to return (default 10, up to 100 for broad searches)
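The schema states a default of 10 and a cap of 100 for max_results. A client-side clamp (a hypothetical helper, assuming exactly the default and cap stated above) might look like:

```python
# Hypothetical clamp for search_github's max_results: default 10,
# documented cap of 100, and never fewer than one result requested.
def clamp_max_results(requested, default=10, cap=100):
    if requested is None:
        return default
    return max(1, min(requested, cap))
```

Clamping before the call avoids relying on the server's (undocumented) handling of out-of-range values.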
Behavior: 1/5

No annotations provided and description adds no behavioral context (auth, rate limits, result format, empty result handling).

Conciseness: 4/5

Extremely concise with no fluff, though arguably too minimal given missing information.

Completeness: 2/5

Insufficient for the complexity; lacks parameter details, output description, and differentiation from sibling tools despite 0% schema coverage.

Parameters: 2/5

Schema has 0% description coverage; description mentions 'keyword' hinting at query param but ignores max_results entirely.

Purpose: 3/5

States basic function (search repos by keyword) but fails to distinguish from sibling get_github_repo (search vs. fetch specific).

Usage Guidelines: 2/5

No guidance on when to use versus alternatives like get_github_repo or other search tools.

search_google_scholar: C
Read-only

Search Google Scholar for computer science research papers, citations, and academic publications. Returns paper title, authors, publication details, citation count, and link to paper. Use for finding research on CS topics, reviewing state-of-the-art, or citation tracking.

Parameters (JSON Schema)
- query (required): Computer science research topic (e.g. 'natural language processing', 'distributed consensus algorithms')
- max_results (optional): Maximum papers to return (default 10)
Behavior: 2/5

With no annotations provided, description carries full burden but reveals only the basic action, omitting rate limits, result format, and pagination behavior.

Conciseness: 4/5

Single sentence is appropriately terse and front-loaded, though excessively brief given missing structured metadata.

Completeness: 2/5

Insufficient for the complexity gap: lacks output schema, parameter descriptions, and annotations, yet description provides no details on return values or search capabilities.

Parameters: 1/5

Schema description coverage is 0% yet description fails to compensate by explaining query syntax or max_results constraints.

Purpose: 4/5

States specific action (Search) and target resource (Google Scholar for academic papers), but fails to differentiate from sibling search_arxiv.

Usage Guidelines: 2/5

Provides no guidance on when to use this tool versus search_arxiv or other search tools.

search_stackoverflow: C
Read-only

Search Stack Overflow Q&A platform for programming questions, solutions, and code examples. Returns matching questions, answer count, view count, accepted answer snippet, tags, and link to full discussion. Use for troubleshooting, code examples, or finding solutions to common problems.

Parameters (JSON Schema)
- query (required): Programming problem or question (e.g. 'how to merge arrays in javascript', 'python asyncio example')
- max_results (optional): Number of Q&A results to retrieve (default 10, higher for comprehensive answers)
Behavior: 2/5

With no annotations provided, description carries full burden but only states basic search capability without disclosing return format, rate limits, or result ranking.

Conciseness: 4/5

Extremely concise and front-loaded, though arguably too minimal given the lack of schema documentation and presence of similar sibling tools.

Completeness: 2/5

Insufficient given no output schema exists and sibling tools overlap significantly; misses opportunity to clarify unique value of StackOverflow community answers versus code repositories.

Parameters: 1/5

Schema has 0% description coverage and description fails to compensate; no mention of max_results default (10) or query formatting expectations.

Purpose: 4/5

States specific action (search) and resource (StackOverflow Q&A), but fails to differentiate from sibling search_github for programming queries.

Usage Guidelines: 2/5

Provides no guidance on when to use this versus search_github or other programming-related siblings.
