font-metadata
Server Details
Cloudflare Workers MCP server: font-metadata
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: lazymac2x/font-metadata-api
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 6 of 6 tools scored.
Each tool has a clearly distinct purpose: language support checking, CSS fallback generation, @import snippet generation, metadata retrieval, font listing, and pairing suggestions. No overlap in functionality.
All tool names follow a consistent verb_noun pattern using snake_case, e.g., check_language_support, generate_css, list_fonts_by_category. No deviations.
With 6 tools covering essential font metadata operations, the count is appropriate for the server's scope. It is neither too sparse nor overwhelming.
The tool set covers the full lifecycle of font metadata interaction: browsing (list_fonts_by_category), inspection (get_font_info), language support verification, CSS output (font_stack, generate_css), and expert pairings. No obvious gaps.
Available Tools
6 tools

check_language_support (Grade: B)
Check which languages/scripts a font supports, or verify support for specific language codes.
| Name | Required | Description | Default |
|---|---|---|---|
| font_id | Yes | Font slug or family name | |
| languages | No | Language codes to check (e.g. ["ru","ko","ar"]). If omitted, returns all supported languages. | |
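To make the call shape concrete, here is a hypothetical JSON-RPC tools/call request for this tool over the server's Streamable HTTP transport. The argument values are taken from the examples in the parameter table; the response format is not documented by the server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "check_language_support",
    "arguments": {
      "font_id": "inter",
      "languages": ["ru", "ko", "ar"]
    }
  }
}
```

Omitting the languages array would instead return every language the font supports, per the parameter description.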
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. However, it only describes the function at a high level, without disclosing behavioral traits such as caching, failure modes, or whether it makes external API calls.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence that conveys the core functionality without wasted words. However, it lacks structural elements like bullet points or explicit separation of usage modes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (2 params, 100% coverage) and no output schema, the description adequately explains the purpose and basic usage. However, it does not describe the return format, which could be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add meaningful information beyond what is already in the parameter descriptions (e.g., 'If omitted, returns all supported languages' repeats the schema text).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('check') and resource ('languages/scripts a font supports'), with two distinct modes: getting all supported languages or verifying specific codes. This distinguishes it from sibling tools like get_font_info or list_fonts_by_category.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus sibling tools, nor does it mention prerequisites or limitations. The intended use case is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
font_stack (Grade: A)
Generate a complete CSS font-stack with metric-compatible and system fallbacks for robust cross-platform rendering.
| Name | Required | Description | Default |
|---|---|---|---|
| font_id | Yes | Font slug or family name (e.g. "inter", "roboto-mono") | |
| include_system_fallbacks | No | Include system fallback fonts (default true) | |
| include_metric_compatible | No | Include metric-compatible fallback fonts (default true) | |
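A sketch of an equivalent hypothetical request for font_stack; both include_* flags are shown at their documented defaults, so in practice they could be omitted:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "font_stack",
    "arguments": {
      "font_id": "roboto-mono",
      "include_system_fallbacks": true,
      "include_metric_compatible": true
    }
  }
}
```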
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool generates a font stack with specific fallbacks, but does not mention side effects, idempotency, or error handling for invalid font_ids. This is adequate for a simple generation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, well-structured sentence that conveys the purpose and key features. No redundant information; every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple generation tool with well-documented parameters and no output schema, the description is largely sufficient. It covers the main components of the output. Minor lack of specifics about return format, but still complete enough for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds context about the output (complete CSS font-stack) but does not elaborate on parameter semantics beyond what the schema already provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a CSS font-stack with metric-compatible and system fallbacks for cross-platform rendering. It distinguishes from siblings like list_fonts_by_category or generate_css which are more general.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like generate_css or suggest_pairings. The description implies use for font stack generation but does not provide when-not-to-use or mention other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_css (Grade: A)
Generate ready-to-use CSS @import snippet for a single font or a heading+body font pairing.
| Name | Required | Description | Default |
|---|---|---|---|
| font | No | Font for single mode | |
| mode | Yes | single: one font CSS; pairing: heading+body CSS | |
| size | No | Base font size (e.g. "16px") | |
| weights | No | Font weights to include (e.g. ["400","700"]) | |
| body_font | No | Body font for pairing mode | |
| line_height | No | Line height (e.g. "1.5") | |
| heading_font | No | Heading font for pairing mode | |
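Because this tool has two modes, a hypothetical pairing-mode request is the more instructive sketch. The font IDs reuse example values from other tool schemas on this page and are illustrative only:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "generate_css",
    "arguments": {
      "mode": "pairing",
      "heading_font": "playfair-display",
      "body_font": "inter",
      "weights": ["400", "700"],
      "size": "16px",
      "line_height": "1.5"
    }
  }
}
```

In single mode, the font parameter would presumably take the place of heading_font and body_font, per the schema descriptions.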
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description only mentions the output type but does not disclose potential side effects, error handling, prerequisites (e.g., valid Google Fonts), or performance implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no redundant information. It is front-loaded with the key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool that generates CSS snippets, the description adequately conveys the output type and scope. However, without an output schema, it could mention return format details or constraints on font names.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds context about the tool's overall purpose (single or pairing mode), which complements the schema's complete parameter descriptions. It helps an agent understand how parameters like mode, heading_font, and body_font relate to the output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates CSS @import snippets for fonts, specifying single or pairing mode. It is distinct from siblings which handle font info, language support, or suggestions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage (generate CSS for fonts) but does not provide guidance on when to use this tool versus alternatives like font_stack or suggest_pairings, nor when to choose single vs pairing mode.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_font_info (Grade: A)
Get detailed metadata for a specific web font including variants, subsets, tags, designer, year, and Google Fonts URL.
| Name | Required | Description | Default |
|---|---|---|---|
| font_id | Yes | Font slug or family name (e.g. "inter", "Open Sans", "playfair-display") | |
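A minimal hypothetical request for this tool, showing that family names with spaces appear to be accepted alongside slugs, per the schema examples:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_font_info",
    "arguments": { "font_id": "Open Sans" }
  }
}
```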
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read-only operation ('get'), but does not explicitly state safety, idempotency, or any behavioral details beyond the basic query, and it lacks disclosure of potential side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, directly states purpose and key contents. No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description enumerates expected return fields (variants, subsets, tags, designer, year, URL), providing sufficient context for the agent to understand the output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter with clear description and examples). The description does not add further meaning to the parameter beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'Get' and resource 'detailed metadata for a specific web font', and lists specific contents (variants, subsets, tags, designer, year, URL). Distinguishes from sibling tools like list_fonts_by_category or suggest_pairings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is given, and no alternatives are mentioned. However, the purpose is clear and distinct from siblings; it is implied that you use this tool when you need metadata for a single font.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_fonts_by_category (Grade: B)
List available web fonts filtered by category, tag, or search query. Returns 100+ fonts across serif, sans-serif, monospace, display, and handwriting categories.
| Name | Required | Description | Default |
|---|---|---|---|
| tag | No | Tag filter (e.g. "geometric", "elegant", "code", "rounded", "literary") | |
| sort | No | Sort order (default: popularity) | |
| limit | No | Max results (1-50, default 20) | |
| offset | No | Pagination offset (default 0) | |
| search | No | Search term to filter fonts by name or description | |
| category | No | Font category filter | |
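A hypothetical browsing request combining several filters. The values are drawn from the examples and defaults in the table above; whether multiple filters combine with AND or OR semantics is not documented:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "list_fonts_by_category",
    "arguments": {
      "category": "serif",
      "tag": "elegant",
      "sort": "popularity",
      "limit": 20,
      "offset": 0
    }
  }
}
```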
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must disclose behavioral traits. It does not mention that the operation is read-only, nor does it discuss pagination behavior, rate limits, or authentication requirements. The description only explains filtering and result count.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that instantly convey the core function. It is front-loaded with the main action and includes key details without any redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 6 parameters and no output schema, the description provides a high-level overview but lacks details on output format, pagination behavior, and edge cases. It is adequate though not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions, so baseline is 3. The description adds slight value by summarizing filtering options and result size, but does not add new details for individual parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists web fonts filtered by category, tag, or search query, and mentions the specific categories. The tool name and description together differentiate it from siblings like 'get_font_info' which retrieves details for a single font.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidelines on when to use this tool versus alternatives. The description only states what it does, not when it is appropriate or when other tools might be better.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_pairings (Grade: A)
Get font pairing recommendations with CSS snippets, compatibility scores, and curated rationale from design experts.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of pairing suggestions (1-10, default 5) | |
| style | No | Filter by style: modern, elegant, bold, minimal, classic, etc. | |
| font_id | Yes | Font slug or family name to find pairings for | |
| use_case | No | Filter by use case: editorial, blog, saas, tech, luxury, corporate, etc. | |
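Finally, a hypothetical request for pairing suggestions, again using only values listed in the parameter descriptions; the limit is shown at its documented default:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "suggest_pairings",
    "arguments": {
      "font_id": "inter",
      "style": "modern",
      "use_case": "saas",
      "limit": 5
    }
  }
}
```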
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the output content (CSS, scores, rationale) but does not mention side effects, rate limits, or behavior when no pairings are found. Adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that front-loads the tool's purpose and output. No wasted words, clearly structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 4 well-described parameters and no output schema, the description gives a good sense of what to expect (CSS, scores, rationale). Lacks detail on return format or error cases, but sufficient for most use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter described. The description adds no extra meaning beyond the schema; baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides font pairing recommendations with specific deliverables (CSS snippets, scores, rationale). It distinguishes itself from siblings like list_fonts_by_category or get_font_info which focus on listing fonts or retrieving single font info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like font_stack or generate_css. The description only states what the tool does, not when it is or is not appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this server lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.