Glama

awesome-mcp.tools

Server Details

Hosted MCP server exposing a catalog of 2,000+ MCP servers as searchable tools — search, compare, top, trending, hot. Refreshed every 6h from the open-source ecosystem. Source: github.com/adw0rd/awesome-mcp-tools-mcp

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 8 of 8 tools scored. Lowest: 2.7/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct operation on the catalog: listing categories, languages, tags, hot, trending; searching; getting details; comparing. No overlap in functionality.

Naming Consistency: 5/5

All tool names follow the verb_noun pattern consistently (list_*, get_*, search_*, compare_*), using snake_case throughout.

Tool Count: 5/5

8 tools is a well-scoped size for a server directory, covering browsing, search, and detail retrieval without being overwhelming.

Completeness: 4/5

The tool set covers listing, searching, and comparing servers. It lacks a direct 'list all servers' endpoint, though a search with an empty query likely fills that gap.

Available Tools

8 tools
compare_servers (Grade: C)

Compare two MCP servers side-by-side.

Parameters
  slugA (required): first server slug
  slugB (required): second server slug

Output Schema

  a (required)
  b (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations and no additional behavioral detail, the description does not disclose side effects, permissions, or output format. 'Compare' implies a read operation but is not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise with one sentence, no fluff. However, slightly more detail could improve clarity without significant bloat.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With an output schema present, the description is adequate but does not enrich understanding of the comparison output. For a tool with no annotations, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers both parameters with descriptions ('first server slug', 'second server slug'), and schema description coverage is 100%. The description adds no extra meaning beyond what is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it compares two MCP servers side-by-side, distinguishing it from get_server (single server) and search_servers (searching). However, it does not specify what aspects are compared.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings like get_server or search_servers. The description does not provide context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
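As a concrete illustration of invoking compare_servers, the sketch below builds an MCP tools/call request. The JSON-RPC 2.0 envelope follows the MCP specification; the slugs are made up, and the endpoint URL and transport details (elided in the listing above) are not shown.

```python
import json

def compare_request(slug_a: str, slug_b: str, request_id: int = 1) -> dict:
    # JSON-RPC 2.0 envelope per the MCP spec; slugA and slugB are the only
    # real parameters of compare_servers, the rest is protocol boilerplate.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "compare_servers",
            "arguments": {"slugA": slug_a, "slugB": slug_b},
        },
    }

# Hypothetical slugs for illustration:
body = json.dumps(compare_request("server-one", "server-two"))
```

The same envelope shape applies to every tool on this server; only the name and arguments change.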

get_server (Grade: A)

Get full details (metadata + README) of a single MCP server by its slug.

Parameters
  slug (required): server slug

Output Schema

  url (required)
  name (required)
  slug (required)
  tags (optional)
  stars (required)
  readme (optional)
  license (optional)
  website (optional)
  category (optional)
  language (optional)
  description (optional)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states it returns 'full details (metadata + README)', indicating it is a read operation. However, with no annotations, it lacks details on potential errors, permissions, or side effects. It does not contradict any annotations (none provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence with no unnecessary words. It is front-loaded with the key action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter retrieval tool with an output schema, the description is adequate. It specifies what is returned (metadata + README) and how to identify the server (by slug). Missing behavioral context like error handling, but acceptable given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with one parameter ('slug') clearly described. The description adds no extra meaning beyond 'by its slug', so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the resource ('full details (metadata + README) of a single MCP server'), and the method ('by its slug'). It effectively distinguishes from sibling tools like search_servers (list multiple) and compare_servers (compare).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need full details of a specific server by slug, but it does not explicitly state when to use this tool versus siblings like search_servers or list_hot, nor does it mention any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
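Since get_server's output schema marks only url, name, slug, and stars as required, a caller should read every other field with a default. A minimal sketch, using a hypothetical record:

```python
def summarize(server: dict) -> str:
    # Per the output schema above, only url, name, slug, and stars are
    # required; tags, readme, license, website, category, language, and
    # description may be absent, so read them with defaults.
    name = server["name"]
    stars = server["stars"]
    language = server.get("language", "unknown")
    category = server.get("category", "uncategorized")
    return f"{name} ({language}, {category}): {stars} stars"

# Hypothetical minimal record with all optional fields missing:
record = {"url": "https://example.com", "name": "demo", "slug": "demo", "stars": 42}
summary = summarize(record)  # "demo (unknown, uncategorized): 42 stars"
```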

list_categories (Grade: A)

List all MCP server categories with server counts.

Parameters
  (none)

Output Schema

  categories (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It correctly implies a read-only operation and discloses the output includes server counts. However, it does not mention any potential behavioral traits like authentication or rate limits, though for a simple list tool this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence that is front-loaded and contains no unnecessary words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, no annotations, and an output schema exists, the description sufficiently covers the tool's purpose and output. It does not need to explain return values due to the output schema, and the context of siblings is handled by the clear resource specification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so schema description coverage is 100%. The description adds the 'with server counts' detail, providing extra context beyond the schema. Baseline for zero parameters is 4, and no improvement needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all MCP server categories and includes server counts. It uses a specific verb-resource combination ('list categories') and distinguishes from sibling tools (list_languages, list_tags, etc.) by specifying the resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like list_tags or list_languages. The usage is implied by the tool's name and description, but no explicit when-not or alternative references are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_hot (Grade: C)

Featured/hot MCP servers.

Parameters
  limit (optional): max results, 1-50 (default 20)

Output Schema

  items (required)
  total (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose behavioral traits. It does not mention read-only nature, side effects, auth needs, or rate limits. The brief description gives no insight into behavior beyond listing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short (a phrase), which is concise but not wasteful. It could be more structured or informative, but it avoids unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with an output schema, the description is minimally complete. However, it lacks context about what 'hot' means (e.g., algorithm or curation) and does not explain how results are sorted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the limit parameter has a description). The tool description adds no extra meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Featured/hot MCP servers.' indicates the tool lists servers, but 'hot' is ambiguous and does not differentiate from siblings like list_trending or list_categories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as list_trending or search_servers. No prerequisites or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_languages (Grade: A)

List programming languages of MCP servers with counts.

Parameters
  (none)

Output Schema

  languages (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It indicates a read operation but does not disclose behavioral traits like authentication needs, pagination, or sorting order. It is adequate but lacks detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is one very short sentence with no wasted words, front-loaded with the key action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and an output schema exists, the description sufficiently explains the purpose. It could mention if results are limited or sorted, but for a simple list tool it is mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters (schema coverage 100% trivially), so the baseline is 4. The description adds value by explaining the output includes counts, which goes beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists programming languages with counts, using a specific verb 'list' and resource 'languages'. It naturally distinguishes from siblings like 'list_categories' which list categories, not languages.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when one needs an overview of programming languages used by MCP servers, but it does not explicitly state when to use it instead of alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tags (Grade: A)

List all MCP server tags with counts.

Parameters
  (none)

Output Schema

  tags (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden of behavioral disclosure. It clearly indicates a read-only operation ('List all') with no side effects. Though it could explicitly state 'read-only' or 'non-destructive', the listing verb strongly implies safety, and no destructive hints are given.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that conveys the tool's essence without any extraneous words. Every word serves a purpose, and the structure is optimal for quick scanning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, no side effects, output schema present), the description is largely complete. It specifies the data returned (tags with counts). However, it does not mention ordering, pagination, or any filtering, which might be helpful but are not critical for a list-all operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, and the description coverage is 100% (no parameters to describe). Baseline is 3, as the description provides no additional parameter meaning beyond what the schema already conveys. This is acceptable given the tool has no configurable inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and the resource ('MCP server tags with counts'). It distinguishes from sibling tools like list_categories and list_languages by specifying 'tags' and including 'with counts', making the tool's unique purpose immediately obvious.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context: use this tool when you need to retrieve all tags along with their occurrence counts. While it doesn't explicitly mention when not to use it or provide alternatives, the tool's name and the sibling set make its role clear without confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_servers (Grade: A)

Search MCP servers in the awesome-mcp.tools catalog. Supports full-text query and filters by category, language, license, and tag.

Parameters
  q (optional): free-text query
  tag (optional): tag filter
  sort (optional): sort mode: stars|trending|hot (default stars)
  limit (optional): max results, 1-50 (default 20)
  license (optional): license filter
  category (optional): category filter
  language (optional): programming language filter

Output Schema

  items (required)
  total (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as rate limits, authentication needs, or side effects. It only describes basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the schema coverage and presence of an output schema, the description is adequate for understanding the tool's function. It could mention that all filters are optional or how they combine, but is still fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already provides 100% parameter descriptions. The description adds minimal value by summarizing filters, but does not enrich understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches MCP servers in a specific catalog and lists the supported filters (full-text, category, language, license, tag). It distinguishes from sibling listing tools by implying a query-based search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching with filters but does not explicitly state when to use this tool over siblings like list_categories or list_trending. No when-not or alternative guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
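Because every search_servers parameter is optional and limit is documented as 1-50 (default 20), a client can clamp the limit before sending. A hedged sketch of building the tools/call request; the query, filter values, and envelope conventions beyond the MCP spec are illustrative assumptions:

```python
def search_request(q=None, sort="stars", limit=20, **filters):
    # limit is documented as 1-50 (default 20); clamp client-side so an
    # out-of-range value never reaches the server.
    arguments = {"sort": sort, "limit": max(1, min(50, limit)), **filters}
    if q is not None:
        arguments["q"] = q
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "search_servers", "arguments": arguments},
    }

# Hypothetical query: Python servers matching "database", with an
# oversized limit that gets clamped to 50.
req = search_request(q="database", language="Python", limit=99)
```

Omitting q entirely is the likely way to approximate a "list all servers" call, as noted in the Completeness assessment above.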
