
Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Most tools have distinct purposes (banner for text-to-ASCII, convert for image-to-ASCII, kaomoji for emoticons), though search overlaps slightly with kaomoji since both can retrieve kaomoji via different mechanisms. The delete/submit/get trio clearly forms a CRUD subset for ASCII art specifically.

    Naming Consistency 2/5

    The naming pattern is highly inconsistent, mixing bare nouns (banner, categories, kaomoji, random) with verbs (convert, delete, get, list, search, submit). The absence of a predictable verb_noun or resource_action structure makes it difficult to guess tool purposes from names alone.

    Tool Count 4/5

    Ten tools appropriately cover the domain scope: generation (banner, convert), retrieval (get, random, list, search), management (submit, delete), and metadata (categories, kaomoji). Slightly dense but justifiable for the functionality offered.

    Completeness 3/5

    Core operations exist for ASCII art (create, read, delete), but the set lacks an update/edit operation for user submissions. The kaomoji functionality appears incomplete: kaomoji can be retrieved and searched but not submitted or deleted, and the list/categories tools don't clarify whether they cover kaomoji or only ASCII art.

  • Average 3.3/5 across 10 of 10 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.3.4

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 10 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • convert

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description fails to disclose critical behavioral traits: the optional 'save' parameter persists data to a store (side effect), the output format/type is unspecified, and there's no mention of mutual exclusivity between URL and base64 inputs or error handling for invalid images.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences (see the annotations sketch after these tool scores).

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single-sentence description is efficiently structured with the action verb front-loaded. However, its extreme brevity contributes to the lack of behavioral transparency and contextual completeness, suggesting it is overly concise for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 7 parameters including a nested save object, no output schema, and zero annotations, the description is incomplete. It omits the return value format, fails to explain the persistence behavior of the save parameter, and provides no guidance on parameter interdependencies.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description mentions 'size tier' and 'URL or base64' which align with the schema but add minimal semantic depth beyond what the schema property descriptions already provide.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool converts images to ASCII art and identifies the input sources (URL or base64) and the size tier concept. However, it does not explicitly distinguish this tool from siblings like 'banner' or 'kaomoji' which may also generate text art.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, nor does it explain prerequisites (e.g., that either URL or base64 must be provided despite neither being marked required) or when to use the save feature.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • categories

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While 'List' implies a read-only operation, the description does not confirm safety, mention pagination, describe the return structure, or explain what these categories represent.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise at only three words. It is front-loaded with the essential action and resource. However, given the lack of annotations and presence of siblings, it borders on under-specification rather than optimal conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a parameterless tool, the description meets minimum viability by stating the core function. However, gaps remain regarding the nature of the categories (are they user-created, system-defined?), their relationship to other entities, and the expected output format.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool accepts zero parameters, which per the evaluation rubric establishes a baseline score of 4. The empty input schema requires no additional semantic clarification from the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('List') and identifies the resource ('categories'), clearly indicating it retrieves a collection. However, it fails to differentiate from the sibling 'list' tool, leaving ambiguity about when to use this specific endpoint versus the general list tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided regarding when to use this tool versus alternatives. Given the presence of a sibling 'list' tool and a 'search' tool, the description should clarify the specific scope of these categories.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read operation, the description omits error handling (what happens if the ID is invalid?), caching behavior, and whether the art is returned as a string or an object.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient four-word description. Front-loaded with verb, zero redundancy, and appropriately sized for a single-parameter retrieval operation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple retrieval tool with no output schema, but it misses the opportunity to describe the return value (the ASCII art string) or error conditions. It meets minimum viability yet lacks the richness expected when no annotations guide behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 100% coverage with examples ('cat', 'sun', 'heart'). The description acknowledges the ID parameter ('by ID') but adds no semantic context beyond what the schema already provides. The baseline of 3 is appropriate given the comprehensive schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb (Get), resource (ASCII art), and scope (by ID). However, the description lacks explicit differentiation from siblings like 'search' or 'list' that could also retrieve art.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'search', 'list', or 'random', and does not mention prerequisites such as needing to know the specific ID beforehand.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • kaomoji

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to specify whether the tool returns a single kaomoji or a list, what happens when no match is found, or whether the operation is idempotent. The phrase 'Get' implies a read-only operation but lacks explicit safety guarantees.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero waste. The first sentence front-loads the core functionality (getting kaomoji by emotion/keyword), while the second provides concise contextual value (inline text expressions). Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the simple input schema (two optional strings) and lack of output schema, the description adequately covers the tool's purpose but leaves a gap regarding return value structure. For a tool with no annotations and no output schema, it should ideally specify what the tool returns (e.g., a string, an object with the kaomoji, etc.).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already fully documents both 'query' and 'category' parameters. The description mentions 'emotion or keyword' which aligns with the query parameter, but adds no additional semantic value regarding input formats, valid values, or parameter interaction beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Get[s] a kaomoji (Japanese text emoticon)' with specific verb and resource. It distinguishes the specific domain (kaomoji) from generic siblings like 'get' or 'search', though it doesn't explicitly differentiate from the 'random' sibling which may overlap when parameters are omitted.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides a use-case hint ('Perfect for inline text expressions') but offers no explicit guidance on when to use this versus siblings like 'random' or 'categories', nor does it clarify when to use 'query' versus 'category' parameters or what happens if both are omitted.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to describe the return format, pagination behavior, case sensitivity of search, or what happens when no matches are found. It only states the basic search functionality.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of exactly two efficient sentences with zero redundancy. It is appropriately front-loaded with the core action and earns its brevity given the simple, well-documented schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the low complexity (two simple parameters) and high schema coverage, the description is minimally adequate. However, it lacks any description of the output format or return structure, which is a notable gap since no output schema is provided.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% input schema description coverage, the schema already fully documents both parameters (including that query matches id/name/category/tags). The description adds minimal semantic value beyond the schema, merely confirming the keyword search and filter usage, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb (search), resource (ASCII art and kaomoji), and method (by keyword). It implicitly covers the scope of both content types, distinguishing it from the sibling 'kaomoji' tool, though it does not explicitly differentiate from 'get' or 'list' siblings.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to select this tool versus siblings like 'get' (retrieve by ID) or 'list' (browse all). It only offers a basic hint about using the 'type' parameter for filtering, which is parametric usage rather than tool selection guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • submit

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. While 'submit' implies a write operation, the description does not confirm whether this creates a persistent record, what validation occurs, or whether the operation is idempotent or destructive. The notation '16w' is also unexplained (width?), creating potential confusion.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise with only two sentences and no filler. The core action is front-loaded in the first sentence. Minor deduction because the '16w' abbreviation in the second sentence assumes contextual knowledge without explanation, potentially requiring the agent to infer that 'w' means width and maps to the 'size' enum values.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 100% schema coverage and 6 parameters, the description provides adequate context for the size parameter's implications. However, for a mutation tool with no output schema and no annotations indicating destructive/write behavior, the description should explicitly confirm the creation/submission side effects rather than leaving them implied by the verb alone.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context beyond the schema by specifying the dimensional mapping of size tiers (16x8, 32x16, 64x32), which helps the agent understand the 'art' parameter constraints. However, it uses '16w' terminology while the schema expects '16', creating slight friction.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb (Submit) and resource (ASCII art), making the core function unambiguous. However, it does not explicitly differentiate from siblings like 'banner' (which might generate art) or 'convert' (which might transform existing art), leaving some ambiguity about when to prefer this over creation alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like 'convert' or 'banner', nor are prerequisites (e.g., valid ASCII characters only) or exclusion criteria mentioned. The agent must infer usage solely from the action verb.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • random

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Get' implying a read operation, but fails to disclose whether the randomness is deterministic, if results are cached, the expected return format (plain text string? JSON object?), or any rate limiting. This leaves significant behavioral gaps.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no redundant words. It immediately communicates the core function without preamble or unnecessary elaboration.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description minimally identifies the return content type ('ASCII art') but lacks details on the data structure (string vs object), encoding, or whether metadata accompanies the art. For a zero-parameter tool, this is adequate but incomplete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately does not invent parameters that don't exist in the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Get') and resource ('random ASCII art'), providing specific intent. However, it does not explicitly differentiate from sibling tools like 'get' (which likely retrieves specific items by ID) or 'search' (which filters), leaving ambiguity about when to use random selection versus targeted retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like 'get', 'search', or 'list'. Given the sibling tools available, explicit guidance such as 'use this when you need any random art rather than a specific one' would help agent selection.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'with metadata' which hints at return content, but fails to disclose critical behavioral traits like pagination, result limits, or caching behavior for what could be a large dataset ('all available').

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence of six words. It is front-loaded with the action verb and contains no redundant or wasteful text. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema and annotations, the description is insufficient for a complete understanding. While it mentions 'metadata,' it does not describe the return structure, format, or whether the result is an array or object. For a tool returning 'all' items, pagination details should be included.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters. According to scoring guidelines, 0 parameters establishes a baseline score of 4. No parameter description is needed or provided.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb (List) and resource (ASCII arts) and adds scope detail ('all available'). However, it does not explicitly differentiate from siblings like 'search' or 'get' in the text, though the tool name helps imply unfiltered enumeration.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like 'search' (for filtering) or 'get' (for specific items). The description does not mention prerequisites, rate limits, or pagination concerns for retrieving 'all' items.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • banner

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It explains the transformation (text to ASCII banner) but omits behavioral traits like idempotency, side effects, or explicit output format (string vs. image). For a simple transformation tool, this is minimally adequate.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. The first sentence front-loads the core action (rendering), while the second provides high-value use cases that aid in tool selection. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a low-complexity tool (2 parameters, no nested objects) with 100% schema coverage, the description is sufficiently complete. It could explicitly mention the return type (ASCII string), but the tool's purpose is clear enough given the schema richness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents both parameters (text constraints, font enum values). The description adds no additional parameter semantics, meeting the baseline expectation for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Render') and clearly identifies the resource transformation (text → large ASCII banner) and technology (FIGlet fonts). It distinguishes from siblings like 'kaomoji' (which implies emoticons) by specifying FIGlet/ASCII art.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides implied usage context through 'Great for CLI headers, welcome messages, and titles,' indicating when to use it. However, it lacks explicit when-not guidance or named alternatives (e.g., distinguishing from sibling 'convert' or 'kaomoji').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • delete

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals the important restriction that built-in arts are protected, but omits whether deletion is permanent, what error occurs when attempting to delete built-in arts, or any permission/ownership requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of exactly two high-value sentences with zero waste: the first front-loads the core action, and the second provides the critical constraint. No redundant or filler text is present.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (single string parameter, no output schema, no nested objects), the description adequately covers the primary function and the key business rule regarding built-in arts. It lacks only minor details such as error behavior when attempting to delete protected arts.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with the 'id' parameter fully described as 'Art ID to delete'. The description mentions 'by ID' which aligns with the schema, but adds no additional semantic context such as ID format, where to obtain valid IDs, or validation rules beyond the schema itself.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the specific action 'Delete', the resource 'user-submitted ASCII art', and the identifier 'by ID'. The constraint 'Built-in arts cannot be deleted' effectively distinguishes this tool's scope from system-level management, clarifying it only operates on user submissions.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides an exclusion criterion ('Built-in arts cannot be deleted') indicating when the tool will fail, but lacks explicit guidance on when to prefer this over siblings (e.g., when to delete vs. edit) or positive usage conditions beyond the implied 'to delete user art'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
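
Several of the Behavior scores above hinge on the same gap: no MCP tool annotations are provided. As a hedged illustration, the sketch below shows how two of this server's tools could declare the behavioral hints defined by the MCP specification (readOnlyHint, destructiveHint, idempotentHint, openWorldHint). The descriptions and hint values are assumptions for illustration, not the server's actual definitions.

// Illustrative MCP tool definitions with behavioral annotations.
// Annotation field names follow the MCP ToolAnnotations shape;
// descriptions and hint values are assumptions, not the real artscii code.
const getTool = {
  name: "get",
  description: "Get ASCII art by ID. Returns the art as a plain string.",
  inputSchema: {
    type: "object",
    properties: { id: { type: "string", description: "Art ID, e.g. 'cat'" } },
    required: ["id"],
  },
  annotations: {
    readOnlyHint: true,   // pure read: no side effects
    idempotentHint: true, // repeated calls return the same art
    openWorldHint: false, // operates only on the server's own store
  },
};

const deleteTool = {
  name: "delete",
  description:
    "Delete user-submitted ASCII art by ID. Built-in arts cannot be deleted.",
  inputSchema: {
    type: "object",
    properties: { id: { type: "string", description: "Art ID to delete" } },
    required: ["id"],
  },
  annotations: {
    readOnlyHint: false,
    destructiveHint: true, // permanently removes the submission
    idempotentHint: true,  // re-deleting the same ID has no further effect
  },
};

Annotations like these complement, rather than replace, a description that spells out consequences in prose.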

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

artscii MCP server (card badge image; copy the embed snippet from this page to your README.md)

Score Badge

artscii MCP server (score badge image; copy the embed snippet from this page to your README.md)

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
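
As a worked sketch, the weighting above can be reconstructed as follows. The percentages come from this page; the function and type names are illustrative, and Glama's exact aggregation and rounding remain internal:

type ToolDimensionScores = {
  purpose: number;      // Purpose Clarity, 25%
  usage: number;        // Usage Guidelines, 20%
  behavior: number;     // Behavioral Transparency, 20%
  parameters: number;   // Parameter Semantics, 15%
  conciseness: number;  // Conciseness & Structure, 10%
  completeness: number; // Contextual Completeness, 10%
};

// Per-tool Tool Definition Quality Score (TDQS); each dimension is 1-5.
function tdqs(d: ToolDimensionScores): number {
  return (
    0.25 * d.purpose +
    0.20 * d.usage +
    0.20 * d.behavior +
    0.15 * d.parameters +
    0.10 * d.conciseness +
    0.10 * d.completeness
  );
}

// Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
// so a single poorly described tool pulls the score down.
function serverDefinitionQuality(toolScores: number[]): number {
  const mean = toolScores.reduce((a, b) => a + b, 0) / toolScores.length;
  return 0.6 * mean + 0.4 * Math.min(...toolScores);
}

// Overall: 70% definition quality + 30% coherence (unweighted mean of
// Disambiguation, Naming Consistency, Tool Count, and Completeness).
function overallScore(definitionQuality: number, coherenceDims: number[]): number {
  const coherence =
    coherenceDims.reduce((a, b) => a + b, 0) / coherenceDims.length;
  return 0.7 * definitionQuality + 0.3 * coherence;
}

// Tiers: A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0, else F.
function tier(score: number): "A" | "B" | "C" | "D" | "F" {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}

For this server, the coherence dimensions above (4, 2, 4, 3) average 3.25, which is then blended with the tool definition quality component at the 70/30 split.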


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rxolve/artscii'
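
The same endpoint can be queried from code. A minimal TypeScript sketch; the response schema is not documented on this page, so the result is simply parsed as JSON and logged:

// Fetch the artscii server's directory entry from the Glama MCP API.
const res = await fetch("https://glama.ai/api/mcp/v1/servers/rxolve/artscii");
if (!res.ok) {
  throw new Error(`Glama API request failed: ${res.status}`);
}
const server = await res.json(); // shape not assumed beyond valid JSON
console.log(server);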

If you have feedback or need assistance with the MCP directory API, please join our Discord server.