awesome-mcp.tools
Server Details
Hosted MCP server exposing a catalog of 2,000+ MCP servers as searchable tools — search, compare, top, trending, hot. Refreshed every 6h from the open-source ecosystem. Source: github.com/adw0rd/awesome-mcp-tools-mcp
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
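For orientation, this is how a client would typically attach to a Streamable HTTP MCP server like this one. A minimal sketch assuming the official @modelcontextprotocol/sdk TypeScript package; the endpoint URL and client name are placeholders, since the listing above does not show the real URL:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not display the actual URL.
const SERVER_URL = "https://example.com/mcp";

const client = new Client({ name: "catalog-explorer", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(new URL(SERVER_URL)));

// Enumerate the eight catalog tools (list_categories, search_servers, ...).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```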
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 8 of 8 tools scored. Lowest: 2.7/5.
- Each tool targets a distinct operation on the catalog: listing categories, languages, tags, hot, and trending servers; searching; getting details; comparing. No overlap in functionality.
- All tool names follow the verb_noun pattern consistently (list_*, get_*, search_*, compare_*), using snake_case throughout.
- 8 tools is a well-scoped size for a server directory, covering browsing, search, and detail retrieval without being overwhelming.
- The tool set covers listing, searching, and comparing servers. A direct 'list all servers' endpoint is missing, but a search with an empty query likely fills that gap.
Available Tools
8 tools

compare_servers (Grade: C)
Compare two MCP servers side-by-side.
| Name | Required | Description | Default |
|---|---|---|---|
| slugA | Yes | first server slug (required) | |
| slugB | Yes | second server slug (required) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| a | Yes | |
| b | Yes | |
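As a usage sketch, reusing the hypothetical TypeScript client from the connection example above; both slugs are made up for illustration:

```typescript
// Hypothetical slugs: real values come from search_servers or the list_* tools.
const comparison = await client.callTool({
  name: "compare_servers",
  arguments: { slugA: "example-server-one", slugB: "example-server-two" },
});
// Per the output schema, the result should carry an `a` and a `b` entry.
console.log(comparison);
```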
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations and no additional behavioral detail, the description does not disclose side effects, permissions, or output format. 'Compare' implies a read operation but is not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise: a single sentence with no fluff. Slightly more detail could improve clarity without significant bloat.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present, the description is adequate but does not enrich understanding of the comparison output. For a tool with no annotations, more behavioral context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters with descriptions ('first server slug', 'second server slug'), and schema description coverage is 100%. The description adds no extra meaning beyond what is already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it compares two MCP servers side-by-side, distinguishing it from get_server (single server) and search_servers (searching). However, it does not specify what aspects are compared.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus siblings like get_server or search_servers. The description does not provide context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_server (Grade: A)
Get full details (metadata + README) of a single MCP server by its slug.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | server slug (required) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | Yes | |
| name | Yes | |
| slug | Yes | |
| tags | No | |
| stars | Yes | |
| readme | No | |
| license | No | |
| website | No | |
| category | No | |
| language | No | |
| description | No | |
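A corresponding sketch, again assuming the TypeScript client from earlier and a made-up slug:

```typescript
// Retrieve metadata plus README for a single server identified by slug.
const details = await client.callTool({
  name: "get_server",
  arguments: { slug: "example-server" },
});
console.log(details);
```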
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states it returns 'full details (metadata + README)', indicating it is a read operation. However, with no annotations, it lacks details on potential errors, permissions, or side effects. It does not contradict any annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence with no unnecessary words. It is front-loaded with the key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter retrieval tool with an output schema, the description is adequate. It specifies what is returned (metadata + README) and how to identify the server (by slug). Missing behavioral context like error handling, but acceptable given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with one parameter ('slug') clearly described. The description adds no extra meaning beyond 'by its slug', so it meets the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get'), the resource ('full details (metadata + README) of a single MCP server'), and the method ('by its slug'). It effectively distinguishes from sibling tools like search_servers (list multiple) and compare_servers (compare).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need full details of a specific server by slug, but it does not explicitly state when to use this tool versus siblings like search_servers or list_hot, nor does it mention any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories (Grade: A)
List all MCP server categories with server counts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| categories | Yes | |
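The zero-parameter list tools (list_categories, plus the list_languages and list_tags siblings documented below) all share this call shape. A minimal sketch with the assumed TypeScript client:

```typescript
// No arguments are required; the call returns the category breakdown with counts.
const categories = await client.callTool({ name: "list_categories", arguments: {} });
console.log(categories);
```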
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It correctly implies a read-only operation and discloses the output includes server counts. However, it does not mention any potential behavioral traits like authentication or rate limits, though for a simple list tool this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single 7-word sentence that is front-loaded and contains no unnecessary words. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, no annotations, and an output schema exists, the description sufficiently covers the tool's purpose and output. It does not need to explain return values due to the output schema, and the context of siblings is handled by the clear resource specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is 100%. The description adds the 'with server counts' detail, providing extra context beyond the schema. Baseline for zero parameters is 4, and no improvement is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all MCP server categories and includes server counts. It uses a specific verb-resource combination ('list categories') and distinguishes from sibling tools (list_languages, list_tags, etc.) by specifying the resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like list_tags or list_languages. The usage is implied by the tool's name and description, but no explicit when-not or alternative references are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_hot (Grade: C)
Featured/hot MCP servers.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | max results, 1-50 | 20 |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes | |
| total | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description should disclose behavioral traits. It does not mention read-only nature, side effects, auth needs, or rate limits. The brief description gives no insight into behavior beyond listing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (a phrase), which is concise but not wasteful. It could be more structured or informative, but it avoids unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with an output schema, the description is minimally complete. However, it lacks context about what 'hot' means (e.g., algorithm or curation) and does not explain how results are sorted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the limit parameter has a description). The tool description adds no extra meaning beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Featured/hot MCP servers.' indicates the tool lists servers, but 'hot' is ambiguous and does not differentiate from siblings like list_trending or list_categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as list_trending or search_servers. No prerequisites or context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_languages (Grade: A)
List programming languages of MCP servers with counts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| languages | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It indicates a read operation but does not disclose behavioral traits like authentication needs, pagination, or sorting order. It is adequate but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one very short sentence with no wasted words, front-loaded with the key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and an output schema exists, the description sufficiently explains the purpose. It could mention if results are limited or sorted, but for a simple list tool it is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters (schema coverage 100% trivially), so the baseline is 4. The description adds value by explaining the output includes counts, which goes beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists programming languages with counts, using a specific verb 'list' and resource 'languages'. It naturally distinguishes from siblings like 'list_categories' which list categories, not languages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when one needs an overview of programming languages used by MCP servers, but it does not explicitly state when to use it instead of alternatives or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tags (Grade: A)
List all MCP server tags with counts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| tags | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden of behavioral disclosure. It clearly indicates a read-only operation ('List all') with no side effects. Though it could explicitly state 'read-only' or 'non-destructive', the listing verb strongly implies safety, and no destructive hints are given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys the tool's essence without any extraneous words. Every word serves a purpose, and the structure is optimal for quick scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no side effects, output schema present), the description is largely complete. It specifies the data returned (tags with counts). However, it does not mention ordering, pagination, or any filtering, which might be helpful but are not critical for a list-all operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and the description coverage is 100% (no parameters to describe). Baseline is 3, as the description provides no additional parameter meaning beyond what the schema already conveys. This is acceptable given the tool has no configurable inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and the resource ('MCP server tags with counts'). It distinguishes from sibling tools like list_categories and list_languages by specifying 'tags' and including 'with counts', making the tool's unique purpose immediately obvious.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: use this tool when you need to retrieve all tags along with their occurrence counts. While it doesn't explicitly mention when not to use it or provide alternatives, the tool's name and the sibling set make its role clear without confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_trending (Grade: A)
Top MCP servers by 24h star growth.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | max results, 1-50 | 20 |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes | |
| total | Yes | |
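Since list_hot and list_trending share the same optional limit parameter (1-50, default 20), a single sketch covers both; the value 10 is arbitrary:

```typescript
// Top 10 servers by 24h star growth; omit `limit` to get the default of 20.
const trending = await client.callTool({
  name: "list_trending",
  arguments: { limit: 10 },
});
console.log(trending);
```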
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the sorting metric (24h star growth) but omits details like sorting order, pagination, or rate limits. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that conveys the core purpose. No wasted words, but it could benefit from additional context without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has an output schema and is simple, but the description lacks explanation of what 'Top MCP servers' entails or how it differs from list_hot. Sufficient for basic understanding but not fully complete given sibling context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the only parameter (limit). The description adds context about the time window but no parameter meaning beyond the schema. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns top MCP servers by 24-hour star growth, which is specific and distinguishes it from siblings like list_hot (likely based on other metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for trending servers but does not explicitly state when to use versus alternatives like list_hot or search_servers. No when-not conditions provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_servers (Grade: A)
Search MCP servers in the awesome-mcp.tools catalog. Supports full-text query and filters by category, language, license, and tag.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | free-text query | |
| tag | No | tag filter | |
| sort | No | sort mode: one of stars, trending, hot | stars |
| limit | No | max results, 1-50 | 20 |
| license | No | license filter | |
| category | No | category filter | |
| language | No | programming language filter | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes | |
| total | Yes | |
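A sketch combining the free-text query with filters, using the assumed TypeScript client; the filter values are hypothetical, and valid ones would come from the list_* tools:

```typescript
// Full-text search narrowed by filters, sorted by stars (the default).
const results = await client.callTool({
  name: "search_servers",
  arguments: {
    q: "database",           // free-text query
    category: "databases",   // hypothetical category value
    language: "TypeScript",  // hypothetical language value
    sort: "stars",           // one of: stars, trending, hot
    limit: 5,
  },
});
console.log(results);
```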
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits such as rate limits, authentication needs, or side effects. It only describes basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the schema coverage and presence of an output schema, the description is adequate for understanding the tool's function. It could mention that all filters are optional or how they combine, but is still fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides 100% parameter descriptions. The description adds minimal value by summarizing filters, but does not enrich understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches MCP servers in a specific catalog and lists the supported filters (full-text, category, language, license, tag). It distinguishes from sibling listing tools by implying a query-based search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching with filters but does not explicitly state when to use this tool over siblings like list_categories or list_trending. No when-not or alternative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
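Any hosting that can serve a static file at that exact path works. A minimal sketch using Node's built-in http module; the port is a placeholder and the email must be replaced with your own:

```typescript
import { createServer } from "node:http";

// The verification document, matching the structure shown above.
const glamaJson = JSON.stringify({
  $schema: "https://glama.ai/mcp/schemas/connector.json",
  maintainers: [{ email: "your-email@example.com" }], // must match your Glama account
});

createServer((req, res) => {
  if (req.url === "/.well-known/glama.json") {
    res.writeHead(200, { "Content-Type": "application/json" }).end(glamaJson);
  } else {
    res.writeHead(404).end();
  }
}).listen(8080); // placeholder port
```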
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!