
Server Details

Provides UX capabilities to enhance the design output and understanding of AI systems.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a distinct purpose: describing fonts, generating color schemes, providing icon-usage instructions, searching fonts, and searching icons. There is minimal overlap, and the descriptions clearly differentiate them.

Naming Consistency: 4/5

Most tools follow a verb_noun pattern (describe_font, generate_color_scheme, search_fonts, search_icons). However, 'icons_instructions' deviates by using noun_noun, creating a minor inconsistency.

Tool Count: 5/5

With 5 tools covering fonts, colors, and icons, the count is well-scoped for a focused Material Design asset discovery server. Each tool serves a clear need without redundancy.

Completeness: 4/5

The set covers core operations for fonts and icons (search and describe) and color scheme generation. Missing are tools for listing all fonts or icons, or for retrieving detailed icon info, but the icons_instructions tool partially compensates.

Available Tools

5 tools
describe_font: B
Read-only · Idempotent

Describes a font family in detail, including its look and feel, supported styles, weights and how to use it.

Parameters (JSON Schema)

- platform (required): The platform in which the font family is going to be used.
- fontFamily (required): The full name of the font family to describe. Example: "Roboto", "Noto Sans".

Output Schema

- features (optional): Supported features of the font family such as weight, style, and variable axes, if available, in Markdown format.
- guidance (optional): Guidance on how to effectively use the font family, if available.
- errorHelp (optional): Contextual help text if the font family name was not found or is invalid.
- languages (optional): List of supported languages and scripts in BCP47 format.
- description (optional): Description of the font family, in Markdown format.
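The schemas above map directly onto an MCP `tools/call` request. Below is a minimal sketch assuming a generic JSON-RPC 2.0 transport; the `platform` value "WEB" is a guess, since the schema does not enumerate its allowed values.

```python
import json

def make_tool_call(call_id: int, name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` request, the method MCP
    clients use to invoke a named tool with a dict of arguments."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# describe_font has exactly two required arguments.
request = make_tool_call(1, "describe_font", {
    "platform": "WEB",       # hypothetical value; the schema does not list the enum
    "fontFamily": "Roboto",
})
```

A successful response would carry the optional `description`, `features`, `guidance`, and `languages` fields, with `errorHelp` populated when the family name is not recognized.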
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds minimal behavioral context beyond this—it mentions the tool provides 'detailed metadata and usage guidance,' which hints at the response format, but doesn't elaborate on rate limits, authentication needs, or error conditions. With annotations covering the core safety profile, the description adds some value but not rich behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy and wastes no words, though it could be slightly more structured (e.g., by separating metadata from usage guidance). Every part of the sentence contributes to understanding the tool's function, making it appropriately concise for its purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 required parameters), rich annotations (read-only, idempotent, non-destructive), and the presence of an output schema (which means return values are documented elsewhere), the description is reasonably complete. It covers what the tool does and the type of information returned. However, it lacks usage guidelines and deeper behavioral context, which prevents a perfect score despite the supportive structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('fontFamily' and 'platform') fully documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema—it doesn't explain why both parameters are required or provide additional context about their interaction. Given the high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Describes a font family in detail, including its look and feel, supported styles, weights and how to use it.' This specifies the verb ('describes'), resource ('font family'), and scope of information provided. It distinguishes from siblings like 'search_fonts' by focusing on detailed metadata rather than search functionality. However, it doesn't explicitly contrast with other siblings beyond this implicit differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose 'describe_font' over 'search_fonts' (which likely returns multiple fonts with basic info), nor does it specify prerequisites or contextual constraints. The description simply states what the tool does without addressing usage scenarios or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_color_scheme: A
Read-only · Idempotent

Generate a Material Design color scheme from one or more key colors. Use this when you need to create a color scheme for an application. The input is one or more named colors in hex format, and the output is a color scheme with a map of color role names to colors in hex format.

Parameters (JSON Schema)

- primaryKey (required): The primary key color used as the main seed for the scheme. Can be a 6-character hex code (e.g., "#4285F4" or "4285F4"), or any standard CSS color name (e.g., "blue").
- secondaryKey (optional): The secondary key color used to generate the color scheme. If omitted, it will be automatically derived from the other keys. Can be a hex code or any CSS color name.
- tertiaryKey (optional): The tertiary key color used to generate the color scheme. If omitted, it will be automatically derived from the other keys. Can be a hex code or any CSS color name.
- backgroundKey (optional): The neutral key color used to generate the color scheme. If omitted, it will be automatically derived from the other keys. Can be a hex code or any CSS color name.
- contrastLevel (optional): The contrast level of the color scheme. Values range from -1 (minimum contrast) to 1 (maximum contrast). 0 represents standard contrast (i.e. the design as specified).
- optionalTheme (optional): Whether to generate a light or dark theme. If unspecified and a background key is supplied, it will be inferred from that; otherwise it defaults to light theme.
- optionalSchemeVariant (optional): If only the primary key color is supplied, this selects which variant of the color scheme to use, defaulting to "TONAL_SPOT". If multiple key colors are supplied, this is ignored and it defaults to "BRAND".

Output Schema

- colorScheme (optional): The generated color scheme.
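Since `primaryKey` accepts either "#4285F4" or "4285F4", a client may want to normalize hex input before building the arguments. Here is a small sketch of that normalization; CSS color names, which the schema also accepts, are deliberately not handled.

```python
import re

def normalize_hex(color: str) -> str:
    """Normalize a 6-character hex key color to '#RRGGBB' form.
    Raises ValueError for anything that is not a 6-digit hex code."""
    candidate = color.lstrip("#")
    if not re.fullmatch(r"[0-9a-fA-F]{6}", candidate):
        raise ValueError(f"not a 6-character hex color: {color!r}")
    return "#" + candidate.upper()

# Arguments for generate_color_scheme: only primaryKey is required.
arguments = {
    "primaryKey": normalize_hex("4285F4"),
    "contrastLevel": 0,  # standard contrast; the schema allows -1 to 1
    # secondaryKey/tertiaryKey/backgroundKey omitted: derived automatically
}
```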
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly and idempotent behavior. The description adds value by detailing the input/output formats and confirming it's a generative process. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core function. Every sentence adds value, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, output schema exists), the description covers the essential use case and output format. It could mention limitations or error handling, but it is sufficient for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with detailed descriptions for all 7 parameters. The description does not add significant new meaning beyond what the schema already provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: generating a Material Design color scheme from key colors in hex format. It specifies input and output format, and distinguishes itself from sibling tools (fonts/icons).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use the tool ('when you need to create a color scheme for an application'), providing clear context. While it doesn't mention when not to use it, the sibling tools are unrelated, making exclusions unnecessary.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

icons_instructions: A
Read-only · Idempotent

Provides essential and critical instructions on how to use Material Icons and Material Symbols efficiently on Web.

Parameters (JSON Schema)

No parameters.

Output Schema

- instructions (optional): Instructions on how to use Google Material Icons and Google Symbols efficiently.
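Because the tool takes no inputs and `instructions` is optional in the output, a defensive client call is trivial. A sketch, assuming a generic JSON-RPC 2.0 transport (the empty `arguments` object is still required by the `tools/call` shape):

```python
# icons_instructions takes no inputs, so arguments is just an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "icons_instructions", "arguments": {}},
}

def extract_instructions(structured: dict) -> str:
    """Read the optional `instructions` field from a hypothetical
    structured result, falling back to an empty string if absent."""
    return structured.get("instructions", "")
```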
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds minimal behavioral context beyond this—it implies the tool returns instructional content but doesn't detail format, length, or structure. With annotations covering core safety, a baseline score is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Provides essential and critical instructions...'). It avoids redundancy and wastes no words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, annotations covering safety, and an output schema present), the description is reasonably complete. It states what the tool does, and the output schema will handle return values. However, it could better integrate with sibling tools for a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100% (though the schema is empty). The description doesn't need to compensate for missing param info, as there are none to document. It appropriately focuses on the tool's output rather than inputs, aligning with the lack of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Provides essential and critical instructions on how to use Material Icons and Material Symbols efficiently on Web.' It specifies the verb ('provides instructions'), resource ('Material Icons and Material Symbols'), and context ('on Web'). However, it doesn't explicitly differentiate from sibling tools like 'search_icons' or 'describe_font', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_icons' (for finding icons) or 'describe_font' (for font details), nor does it specify prerequisites or exclusions. The agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_fonts: B
Read-only · Idempotent

Finds appropriate fonts matching categories and/or languages.

Parameters (JSON Schema)

- platform (required): The platform in which the font family is going to be used.
- categories (optional): One or more categories to filter font families (e.g., "serif", "sans-serif", "handwriting").
- languages (optional): Language tags in BCP47 format to filter fonts that support specific scripts (e.g., "en_Latn", "zh_Hans").
- sort (optional): The sort order for the returned font families. Defaults to POPULARITY_DESCENDING if unspecified.

Output Schema

- fontFamilies (optional): The names of font families that match the search criteria (e.g., "Roboto", "Open Sans").
- errorHelp (optional): Contextual help text or error descriptions if the query failed.
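Since `errorHelp` and `fontFamilies` are both optional in the output, a client should check for a reported error before reading results. A sketch of that handling (the result dict stands in for a hypothetical structured response, and "WEB" is an assumed platform value):

```python
def font_names(result: dict) -> list:
    """Return matched font family names from a search_fonts result,
    raising if the server reported a query failure via errorHelp."""
    if result.get("errorHelp"):
        raise RuntimeError(result["errorHelp"])
    return result.get("fontFamilies", [])

# Example arguments: only platform is required; the filters narrow the search.
arguments = {
    "platform": "WEB",                       # hypothetical platform value
    "categories": ["serif", "handwriting"],  # example values from the schema
    "languages": ["en_Latn"],                # BCP47 tags
    # sort omitted: defaults to POPULARITY_DESCENDING
}
```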
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior, so the description doesn't need to repeat safety aspects. It adds value by specifying the tool 'finds appropriate fonts matching categories and/or languages', which clarifies the search functionality beyond annotations. However, it lacks details on rate limits, authentication needs, or output format, though the output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste: 'Finds appropriate fonts matching categories and/or languages.' It is front-loaded with the core action and key parameters, making it easy to scan and understand quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, 1 required), rich annotations (read-only, idempotent), and the presence of an output schema, the description is reasonably complete. It covers the main action and key inputs, though it could benefit from mentioning the platform requirement or output type. The annotations and schema handle most behavioral and parametric details adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description adds minimal semantics by mentioning 'categories and/or languages', aligning with the schema's 'categories' and 'languages' parameters. It doesn't provide additional syntax or format details beyond the schema, but compensates slightly by framing the purpose around these parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with the verb 'finds' and resource 'fonts', specifying it matches 'categories and/or languages'. It distinguishes from siblings like 'describe_font' (detailed info) or icon/color tools, but doesn't explicitly contrast them. The purpose is specific but could be more precise about scope (e.g., 'Google Fonts').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., platform requirement), exclusions, or compare with sibling tools like 'describe_font' for detailed font info. Usage is implied through the action 'finds', but no explicit context or decision criteria are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_icons: B
Read-only · Idempotent

Finds appropriate Material Design icons matching keywords that describe their usage, style, or shape.

Parameters (JSON Schema)

- tags (required): A list of semantic keywords or metadata tags that describe the desired icon's visual or functional properties. If possible, specify at least three tags to describe usage, style, and shape. Examples:
  - For a "save" icon: ["save", "diskette", "document", "storage"]
  - For a "home" icon: ["home", "house", "building"]
  If multiple tags are provided, the service returns icons that match any part of the tag list, ordered by relevance (number of matching tags). If no tags are provided, all icons are returned.
- iconSet (optional): The icon set to search within (e.g., "Material Symbols", "Material Icons"). If omitted, the default icon set of the environment is used.

Output Schema

- icons (optional): The names of icons that match the provided tags, ordered by relevance.
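The `tags` parameter states the ranking rule: any tag may match, and results are ordered by the number of matching tags. A toy client-side reimplementation of that rule clarifies the behavior; the icon catalog here is invented for illustration.

```python
def rank_icons(catalog: dict, query: list) -> list:
    """Rank icon names by how many query tags they match, best first,
    dropping icons with zero matches, mirroring the ordering that the
    tags parameter describes."""
    wanted = set(query)
    scored = [(len(set(tags) & wanted), name) for name, tags in catalog.items()]
    return [name for score, name in sorted(scored, key=lambda p: -p[0]) if score > 0]

# Hypothetical catalog mapping icon names to their metadata tags.
catalog = {
    "save": ["save", "diskette", "document", "storage"],
    "home": ["home", "house", "building"],
    "delete": ["trash", "remove"],
}
```

With the schema's "save" example tags, only the save icon matches, so it is the sole result.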
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds minimal behavioral context by specifying the search is based on 'keywords that describe their usage, style, or shape,' but doesn't elaborate on aspects like rate limits, authentication needs, or result ordering. It doesn't contradict annotations, so it earns a baseline score for adding some value beyond the structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, and every part of the sentence contributes meaningfully. There's no redundancy or wasted space, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a search function with 2 parameters), rich annotations (covering read-only, idempotent, and non-destructive traits), and the presence of an output schema (which handles return values), the description is reasonably complete. It specifies the resource type and matching criteria, but lacks usage guidelines and deeper behavioral context, which slightly reduces completeness. However, it's adequate for a tool with good structured support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning the input schema fully documents both parameters ('tags' and 'iconSet') with detailed descriptions. The description doesn't add any parameter-specific information beyond what's in the schema, such as syntax or format details. According to the rules, with high schema coverage, the baseline score is 3, as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Finds appropriate Material Design icons matching keywords that describe their usage, style, or shape.' It specifies the verb ('Finds'), resource ('Material Design icons'), and matching criteria ('keywords that describe their usage, style, or shape'). However, it doesn't explicitly differentiate from sibling tools like 'search_fonts' beyond the resource type, which is why it doesn't reach a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_fonts' or 'icons_instructions', nor does it specify any prerequisites, exclusions, or contextual triggers for usage. The only implied usage is based on the purpose statement, which is insufficient for clear decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
