Server Details

Provides UX capabilities to enhance the design output and understanding of AI systems.

- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 7 of 7 tools scored.
Most tools have distinct purposes, but there is notable overlap between extract_and_generate_brand_color_scheme, extract_brand_colors_from_image, and generate_brand_color_scheme, which all handle brand colors from images or lists, potentially causing confusion. The other tools (describe_font, search_fonts, icons_instructions, search_icons) are clearly separated by domain.
The naming follows a consistent verb_noun pattern with snake_case throughout, such as describe_font and search_icons. However, there is a minor inconsistency with icons_instructions, which uses a noun_noun format instead of a verb-based action, slightly deviating from the otherwise predictable pattern.
With 7 tools, the count is well-scoped for a design-focused server covering fonts, icons, and color schemes. Each tool appears to serve a specific function without being excessive or too sparse, fitting the domain appropriately.
The server covers key design areas: fonts (describe and search), icons (instructions and search), and color schemes (extraction and generation). Minor gaps exist, such as no tools for modifying or applying these design elements in contexts like layouts or templates, but core retrieval and generation operations are well-covered.
Available Tools
7 tools

describe_font (Grade B) · Read-only · Idempotent
Describes a font family in detail, including its look and feel, supported styles, weights and how to use it.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Required. The platform in which the font family is going to be used. | |
| fontFamily | Yes | Required. The full name of the font family to describe. Example: "Roboto", "Noto Sans". | |
Output Schema
| Name | Required | Description |
|---|---|---|
| features | No | Supported features of the font family such as weight, style and variable axes, if available, in Markdown format. |
| guidance | No | Guidance on how to effectively use the font family, if available. |
| errorHelp | No | Optional. Contextual help text if the font family name was not found or is invalid. |
| languages | No | List of supported languages and scripts in BCP47 format. |
| description | No | Description of the font family, in Markdown format. |
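To make the call shape concrete, here is a minimal sketch of an MCP `tools/call` request for this tool. The JSON-RPC envelope follows standard MCP framing; the `"web"` platform value is an assumption, since the schema does not enumerate valid platforms.

```python
import json

# Hypothetical MCP "tools/call" request for describe_font.
# Argument names come from the parameter table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "describe_font",
        "arguments": {
            "platform": "web",       # assumed value; schema lists no options
            "fontFamily": "Roboto",  # full family name, per the schema
        },
    },
}

payload = json.dumps(request)
```

A client would send this payload over the Streamable HTTP transport and read the documented output fields (`description`, `features`, `guidance`, and so on) from the result.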
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds minimal behavioral context beyond this—it mentions the tool provides 'detailed metadata and usage guidance,' which hints at the response format, but doesn't elaborate on rate limits, authentication needs, or error conditions. With annotations covering the core safety profile, the description adds some value but not rich behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy and wastes no words, though it could be slightly more structured (e.g., by separating metadata from usage guidance). Every part of the sentence contributes to understanding the tool's function, making it appropriately concise for its purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 required parameters), rich annotations (read-only, idempotent, non-destructive), and the presence of an output schema (which means return values are documented elsewhere), the description is reasonably complete. It covers what the tool does and the type of information returned. However, it lacks usage guidelines and deeper behavioral context, which prevents a perfect score despite the supportive structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('fontFamily' and 'platform') fully documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema—it doesn't explain why both parameters are required or provide additional context about their interaction. Given the high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Describes a font family in detail, including its look and feel, supported styles, weights and how to use it.' This specifies the verb ('describes'), resource ('font family'), and scope of information provided. It distinguishes from siblings like 'search_fonts' by focusing on detailed metadata rather than search functionality. However, it doesn't explicitly contrast with other siblings beyond this implicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose 'describe_font' over 'search_fonts' (which likely returns multiple fonts with basic info), nor does it specify prerequisites or contextual constraints. The description simply states what the tool does without addressing usage scenarios or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_and_generate_brand_color_scheme (Grade A) · Read-only · Idempotent
Extracts the background and accent colors from an image, and generates a brand color scheme from them. The input is an image encoded as base64, and the output is a unified color scheme with colors in hex format (e.g., {"primary": "#041E49", "secondary": "#68748B"}).
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Required. The image to extract key colors from. |
Output Schema
| Name | Required | Description |
|---|---|---|
| colorScheme | No | Generated GenUX color scheme. |
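Conceptually, this combined tool behaves like its two single-purpose siblings chained together: extract key colors from the image, then generate a scheme from them. A stubbed sketch of that relationship (the stub bodies are placeholders, not the server's real logic):

```python
# Placeholder stand-ins for the two sibling tools.
def extract_brand_colors(image_b64: str) -> dict:
    # A real call returns key colors pulled from the image.
    return {"primaryKey": "#041E49", "secondaryKey": "#68748B"}

def generate_scheme(keys: dict) -> dict:
    # A real call derives a full scheme from the key colors.
    return {"primary": keys["primaryKey"], "secondary": keys["secondaryKey"]}

# extract_and_generate_brand_color_scheme = extraction piped into generation.
def extract_and_generate(image_b64: str) -> dict:
    return generate_scheme(extract_brand_colors(image_b64))

scheme = extract_and_generate("aGVsbG8=")
```

This framing also clarifies when to prefer each tool: use the combined tool when only the final scheme matters, and the separate tools when the intermediate key colors need inspection or adjustment.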
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, non-mutating operation. The description adds value by specifying the output format ('unified color scheme with colors in hex format') and example, which goes beyond annotations. However, it does not mention potential limitations like image size constraints or processing time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by input and output details. Every sentence adds essential information without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (image processing and color generation), the description is complete: it states the purpose, input format, and output format. With annotations covering safety and idempotency, and an output schema likely detailing the color scheme structure, no critical gaps remain for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'image' with its required sub-fields. The description adds minimal semantics by mentioning 'image encoded as base64', which is already covered in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action: 'Extracts the background and accent colors from an image, and generates a brand color scheme from them.' It distinguishes from sibling tools like 'extract_brand_colors_from_image' (which only extracts) and 'generate_brand_color_scheme' (which likely generates without extraction), making the combined functionality explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when an image is available and a unified color scheme is needed, but it does not explicitly state when to use this tool versus alternatives like 'extract_brand_colors_from_image' or 'generate_brand_color_scheme'. No exclusions or prerequisites are mentioned, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_brand_colors_from_image (Grade A) · Read-only · Idempotent
Extracts the background and accent colors from an image. Use this to create a color palette that matches a given image. The input is an image encoded as base64, and the output is a list of brand colors in hex format (e.g., [#041E49, #68748B, #E1E2E8]).
| Name | Required | Description | Default |
|---|---|---|---|
| image | Yes | Required. The image to use for extracting brand colors. |
Output Schema
| Name | Required | Description |
|---|---|---|
| primaryKey | No | The primary key color extracted. Returned in 6-character hex format (e.g., "#00FF00"). If no primary key color can be extracted, this will be empty. |
| tertiaryKey | No | The tertiary key color extracted. Returned in 6-character hex format (e.g., "#00FF00"). |
| secondaryKey | No | The secondary key color extracted. Returned in 6-character hex format (e.g., "#00FF00"). |
| backgroundKey | No | The background key color extracted. Returned in 6-character hex format (e.g., "#00FF00"). |
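The `image` parameter expects base64-encoded content. Preparing it from raw bytes is a one-liner with the Python standard library (the bytes below stand in for a real PNG file and are not a valid image):

```python
import base64

image_bytes = b"\x89PNG\r\n\x1a\n..."  # placeholder bytes, not a valid PNG
image_b64 = base64.b64encode(image_bytes).decode("ascii")

# Round-trip check: decoding restores the original bytes.
assert base64.b64decode(image_b64) == image_bytes
```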
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows it's safe and repeatable. The description adds context about the output format (list of hex colors) and input encoding (base64), which is useful but does not disclose additional behavioral traits like rate limits, error handling, or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidance and input/output details in two efficient sentences. Every sentence contributes essential information without redundancy, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, high schema coverage, and presence of an output schema, the description is mostly complete. It covers purpose, usage, and key input/output details. However, it lacks information on potential limitations (e.g., image size constraints) or error cases, which could enhance completeness for an extraction tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the input schema fully documenting the 'image' parameter and its nested properties. The description adds minimal value by mentioning 'input is an image encoded as base64', which is already covered in the schema. No additional parameter semantics are provided beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Extracts the background and accent colors from an image') and the resource ('an image'), distinguishing it from siblings like 'generate_brand_color_scheme' which creates schemes rather than extracting from images. It provides a concrete example of output format, enhancing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a clear usage context ('Use this to create a color palette that matches a given image'), which guides when to apply the tool. However, it does not explicitly state when not to use it or name alternatives among siblings, such as 'extract_and_generate_brand_color_scheme', which might offer more functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_brand_color_scheme (Grade B) · Read-only · Idempotent
Generates a brand color scheme from a list of brand colors. The input is several named colors in hex format (e.g., {"neutralKey": "#FF0000", "primaryKey": "#00FF00", "secondaryKey": "#68748B", "tertiaryKey": "#588493"}), and the output is a unified color scheme with colors in hex format (e.g., {"primary": "#041E49", "secondary": "#68748B"}).
| Name | Required | Description | Default |
|---|---|---|---|
| primaryKey | Yes | Required. The primary key color used as the main seed for the scheme. Can be a 6-character hex code (e.g., "#4285F4" or "4285F4"), or any standard CSS color name (e.g., "blue"). | |
| tertiaryKey | No | Optional. The tertiary key color used to generate the color scheme. If omitted, it will be automatically derived from the other keys. Can be a hex code or any CSS color name. | |
| secondaryKey | No | Optional. The secondary key color used to generate the color scheme. If omitted, it will be automatically derived from the other keys. Can be a hex code or any CSS color name. | |
| backgroundKey | No | Optional. The neutral key color used to generate the color scheme. If omitted, it will be automatically derived from the other keys. Can be a hex code or any CSS color name. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| colorScheme | No | The generated brand color scheme. |
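Since each parameter accepts either hex form ("#4285F4" or "4285F4") or a CSS color name, a client may want to normalize inputs before calling the tool. A hedged sketch, with only a few illustrative CSS names rather than the full CSS color list:

```python
import re

# Illustrative subset; the real CSS named-color list is much longer.
CSS_NAMES = {"blue": "#0000FF", "red": "#FF0000", "white": "#FFFFFF"}

def normalize_color(value: str) -> str:
    """Normalize a hex code or CSS name into "#RRGGBB" form."""
    v = value.strip()
    if v.lower() in CSS_NAMES:
        return CSS_NAMES[v.lower()]
    v = v.lstrip("#")
    if re.fullmatch(r"[0-9a-fA-F]{6}", v):
        return "#" + v.upper()
    raise ValueError(f"unrecognized color: {value!r}")
```

For example, `normalize_color("4285F4")`, `normalize_color("#4285f4")`, and `normalize_color("blue")` all yield canonical 6-character hex strings suitable for the parameters above.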
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds minimal behavioral context beyond this: it mentions that omitted colors are 'automatically derived' (implied from input schema) and gives example input/output formats. However, it doesn't disclose additional traits like rate limits, auth needs, or specific generation algorithms.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized (two sentences) and front-loaded with the core purpose. The first sentence states what the tool does, and the second provides input/output examples. There's minimal waste, though the example could be slightly streamlined (e.g., by removing redundant hex format mentions).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (generative, 4 parameters), rich annotations (safety hints), and the presence of an output schema, the description is reasonably complete. It covers the purpose, input format, and output format. However, it lacks context on when to use versus siblings, which slightly reduces completeness for an agent selecting among multiple color-related tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed descriptions for all parameters (e.g., 'primaryKey' as 'main seed'). The description adds little beyond this: it lists parameter names in an example but doesn't explain their roles or interactions further. With high schema coverage, the baseline is 3, as the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generates a brand color scheme from a list of brand colors.' It specifies the verb ('generates'), resource ('brand color scheme'), and source ('list of brand colors'). However, it doesn't explicitly differentiate from sibling tools like 'extract_and_generate_brand_color_scheme' or 'extract_brand_colors_from_image' beyond mentioning the input format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred over others (e.g., 'extract_and_generate_brand_color_scheme' for images). Usage is implied by the input format but lacks explicit when/when-not instructions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
icons_instructions (Grade A) · Read-only · Idempotent
Provides essential and critical instructions on how to use Material Icons and Material Symbols efficiently on Web.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| instructions | No | Instructions on how to use Google Material Icons and Google Symbols efficiently. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds minimal behavioral context beyond this—it implies the tool returns instructional content but doesn't detail format, length, or structure. With annotations covering core safety, a baseline score is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Provides essential and critical instructions...'). It avoids redundancy and wastes no words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, annotations covering safety, and an output schema present), the description is reasonably complete. It states what the tool does, and the output schema will handle return values. However, it could better integrate with sibling tools for a higher score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100% (though the schema is empty). The description doesn't need to compensate for missing param info, as there are none to document. It appropriately focuses on the tool's output rather than inputs, aligning with the lack of parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Provides essential and critical instructions on how to use Material Icons and Material Symbols efficiently on Web.' It specifies the verb ('provides instructions'), resource ('Material Icons and Material Symbols'), and context ('on Web'). However, it doesn't explicitly differentiate from sibling tools like 'search_icons' or 'describe_font', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_icons' (for finding icons) or 'describe_font' (for font details), nor does it specify prerequisites or exclusions. The agent must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_fonts (Grade B) · Read-only · Idempotent
Finds appropriate fonts matching categories and/or languages.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Required. The platform in which the font family is going to be used. | |
| languages | No | Optional. Language tags in BCP47 format to filter fonts that support specific scripts (e.g., "en_Latn", "zh_Hans"). | |
| categories | No | Optional. One or more categories to filter font families (e.g., "serif", "sans-serif", "handwriting"). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| errorHelp | No | Optional. Contextual help text or error descriptions if the query failed. |
| fontFamilies | No | The names of font families that match the search criteria (e.g., "Roboto", "Open Sans"). |
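The filtering semantics — match by category and by supported language tag — can be mirrored client-side. A minimal sketch with a made-up catalog (the required `platform` parameter is omitted here for brevity):

```python
# Illustrative catalog; families and language sets are invented for the example.
CATALOG = [
    {"family": "Roboto", "category": "sans-serif", "languages": {"en_Latn", "vi_Latn"}},
    {"family": "Noto Sans SC", "category": "sans-serif", "languages": {"zh_Hans"}},
    {"family": "Lobster", "category": "handwriting", "languages": {"en_Latn"}},
]

def search_fonts(categories=None, languages=None):
    """Return family names matching every provided filter."""
    results = []
    for font in CATALOG:
        if categories and font["category"] not in categories:
            continue
        if languages and not font["languages"] & set(languages):
            continue
        results.append(font["family"])
    return results
```

Calling `search_fonts(categories=["sans-serif"], languages=["zh_Hans"])` would return only the family supporting Simplified Chinese, matching the documented filter behavior.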
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and non-destructive behavior, so the description doesn't need to repeat safety aspects. It adds value by specifying the tool 'finds appropriate fonts matching categories and/or languages', which clarifies the search functionality beyond annotations. However, it lacks details on rate limits, authentication needs, or output format, though the output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste: 'Finds appropriate fonts matching categories and/or languages.' It is front-loaded with the core action and key parameters, making it easy to scan and understand quickly without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, 1 required), rich annotations (read-only, idempotent), and the presence of an output schema, the description is reasonably complete. It covers the main action and key inputs, though it could benefit from mentioning the platform requirement or output type. The annotations and schema handle most behavioral and parametric details adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description adds minimal semantics by mentioning 'categories and/or languages', aligning with the schema's 'categories' and 'languages' parameters. It doesn't provide additional syntax or format details beyond the schema, but compensates slightly by framing the purpose around these parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with the verb 'finds' and resource 'fonts', specifying it matches 'categories and/or languages'. It distinguishes from siblings like 'describe_font' (detailed info) or icon/color tools, but doesn't explicitly contrast them. The purpose is specific but could be more precise about scope (e.g., 'Google Fonts').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., platform requirement), exclusions, or compare with sibling tools like 'describe_font' for detailed font info. Usage is implied through the action 'finds', but no explicit context or decision criteria are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_icons (Grade B) · Read-only · Idempotent
Finds appropriate Material Design icons matching keywords that describe their usage, style, or shape.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | Yes | Required. A list of semantic keywords or metadata tags that describe the desired icon's visual or functional properties. If possible, specify at least three tags to describe usage, style, and shape. Examples: for a "save" icon, ["save", "diskette", "document", "storage"]; for a "home" icon, ["home", "house", "building"]. If multiple tags are provided, the service returns icons that match any part of the tag list, ordered by relevance (number of matching tags). If no tags are provided, all icons are returned. | |
| iconSet | No | Optional. The icon set to search within (e.g., "Material Symbols", "Material Icons"). If omitted, the default icon set of the environment is used. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| icons | No | The names of icons that match the provided tags, ordered by relevance. |
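The documented relevance ordering — any-tag match, ranked by number of matching tags — can be sketched as a simple overlap score. The icon metadata below is illustrative, not the real Material index:

```python
# Invented metadata: icon name -> set of descriptive tags.
ICONS = {
    "save": {"save", "diskette", "storage"},
    "home": {"home", "house"},
    "folder": {"storage", "document"},
}

def search_icons(tags):
    """Return icon names matching any tag, ordered by match count."""
    query = set(tags)
    scored = [(len(meta & query), name) for name, meta in ICONS.items()]
    matches = [(score, name) for score, name in scored if score > 0]
    matches.sort(key=lambda sn: (-sn[0], sn[1]))  # best score first
    return [name for _, name in matches]
```

With the recommended tag list `["save", "diskette", "document", "storage"]`, "save" (three matching tags) ranks ahead of "folder" (two), and "home" is excluded entirely.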
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds minimal behavioral context by specifying the search is based on 'keywords that describe their usage, style, or shape,' but doesn't elaborate on aspects like rate limits, authentication needs, or result ordering. It doesn't contradict annotations, so it earns a baseline score for adding some value beyond the structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, and every part of the sentence contributes meaningfully. There's no redundancy or wasted space, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a search function with 2 parameters), rich annotations (covering read-only, idempotent, and non-destructive traits), and the presence of an output schema (which handles return values), the description is reasonably complete. It specifies the resource type and matching criteria, but lacks usage guidelines and deeper behavioral context, which slightly reduces completeness. However, it's adequate for a tool with good structured support.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning the input schema fully documents both parameters ('tags' and 'iconSet') with detailed descriptions. The description doesn't add any parameter-specific information beyond what's in the schema, such as syntax or format details. According to the rules, with high schema coverage, the baseline score is 3, as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Finds appropriate Material Design icons matching keywords that describe their usage, style, or shape.' It specifies the verb ('Finds'), resource ('Material Design icons'), and matching criteria ('keywords that describe their usage, style, or shape'). However, it doesn't explicitly differentiate from sibling tools like 'search_fonts' beyond the resource type, which is why it doesn't reach a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_fonts' or 'icons_instructions', nor does it specify any prerequisites, exclusions, or contextual triggers for usage. The only implied usage is based on the purpose statement, which is insufficient for clear decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
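Before publishing, it is worth sanity-checking the file parses and carries the expected fields. A small local check (the email is a placeholder, as in the template above):

```python
import json

doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# Basic structural checks before serving the file at /.well-known/glama.json.
assert doc["$schema"].startswith("https://glama.ai/")
assert all("email" in m for m in doc["maintainers"])
```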
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.