Server Details

Tarot MCP — wraps tarotapi.dev (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-tarot
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 3.9/5 across 4 of 4 tools scored.

Server Coherence: Grade A
Disambiguation: 3/5

There is significant overlap between draw_cards and random_card, as both draw random cards; draw_cards supports drawing multiple cards, while random_card returns a single card with its upright and reversed meanings. get_card and search_cards are distinct, retrieving specific or keyword-matching cards respectively, but the two random tools could cause confusion.

Naming Consistency: 4/5

Tool names follow a consistent verb_noun pattern (draw_cards, get_card, random_card, search_cards) with clear actions. The minor deviation is 'random_card' using an adjective instead of a verb, but it still fits the overall readable convention.

Tool Count: 5/5

With 4 tools, the server is well-scoped for a tarot domain, covering core operations like drawing, retrieving, and searching cards. Each tool has a clear purpose, and the count is appropriate without being too thin or heavy.

Completeness: 4/5

The tool set covers essential tarot functionalities: drawing random cards, getting specific cards, and searching. A minor gap is the lack of tools for advanced operations like card interpretations or spreads, but basic CRUD-like coverage is adequate for the domain.

Available Tools

4 tools
draw_cards: Grade A

Draw multiple random tarot cards. Count must be between 1 and 78.

Parameters (JSON Schema)
count (required): Number of cards to draw (1–78).
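
Neither the description nor the schema says whether draws use replacement, so any implementation detail here is a guess. A minimal sketch of the implied behavior, assuming draws are without replacement from the standard 78-card deck (the function shape is an assumption, not the server's actual code):

```python
import random

DECK_SIZE = 78  # standard tarot deck

def draw_cards(count: int) -> list[int]:
    """Draw `count` distinct card indices at random (assumes no replacement)."""
    if not 1 <= count <= DECK_SIZE:
        raise ValueError("count must be between 1 and 78")
    return random.sample(range(DECK_SIZE), count)

cards = draw_cards(3)
print(len(cards))  # 3
```

`random.sample` guarantees distinct picks, which is the natural reading of a physical card draw; if the server draws with replacement instead, duplicates would be possible.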
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the action ('draw multiple random tarot cards') and a key constraint (count range), but lacks details on output format, whether draws are with or without replacement, error handling, or other behavioral traits. It adds basic context but is incomplete for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose ('Draw multiple random tarot cards') and follows with a necessary constraint. There is zero waste—every word earns its place, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no annotations, no output schema), the description is minimally adequate. It covers the basic action and parameter constraint but lacks details on output (e.g., card details, format) and behavioral nuances. Without annotations or output schema, more context would improve completeness for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'count' fully documented in the schema as 'Number of cards to draw (1–78)'. The description repeats this range ('Count must be between 1 and 78') without adding further meaning, such as implications of the count or how randomness is applied. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('draw multiple random tarot cards') and resource ('tarot cards'), distinguishing it from siblings like 'get_card' (likely retrieves specific cards), 'random_card' (might draw a single card), and 'search_cards' (searches rather than draws randomly). The verb 'draw' combined with 'random' precisely defines the operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying the count range (1-78), but does not explicitly state when to use this tool versus alternatives like 'random_card' (which might be for single draws) or 'search_cards'. There's no guidance on prerequisites, exclusions, or comparative contexts, leaving the agent to infer based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_card: Grade A

Get a specific tarot card by its short name identifier (e.g. "ar01" for The Magician, "ar00" for The Fool, "wap01" for Ace of Wands).

Parameters (JSON Schema)
name_short (required): The short name identifier of the card (e.g. "ar01", "ar00", "wap01", "cup10").
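
Since the server wraps tarotapi.dev, a lookup presumably maps the short name onto an upstream card endpoint. A hedged sketch that only builds the request URL; the `/api/v1/cards/{name_short}` path is an assumption about the upstream API, not something this page confirms:

```python
BASE_URL = "https://tarotapi.dev/api/v1"  # upstream API the server wraps

def card_url(name_short: str) -> str:
    """Build the lookup URL for a card's short name (path shape is an assumption)."""
    if not name_short.isalnum():
        raise ValueError("name_short should be an alphanumeric identifier like 'ar01'")
    return f"{BASE_URL}/cards/{name_short}"

print(card_url("ar01"))  # https://tarotapi.dev/api/v1/cards/ar01
```

Validating the identifier up front gives a clearer error than whatever the upstream API returns for a malformed path.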
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It clearly describes a read-only operation ('Get') without implying mutation, but does not disclose behavioral traits like error handling (e.g., what happens if the identifier is invalid), authentication needs, or rate limits. It adequately covers the basic operation but lacks deeper context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, parameter usage, and examples without any redundant information. It is front-loaded with the core action and resource, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no annotations, no output schema), the description is complete enough for basic usage. However, it lacks details on return values (since no output schema exists) and does not address potential errors or edge cases, which could be helpful for an agent invoking the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the input schema already fully documents the single parameter 'name_short'. The description adds value by providing specific examples of identifiers (e.g., 'ar01', 'ar00'), which help clarify the expected format, but does not add significant semantic meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('a specific tarot card'), and distinguishes it from siblings by specifying retrieval by short name identifier rather than random drawing or searching. It provides concrete examples of identifiers, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates when to use this tool (to retrieve a specific card by its short name) versus alternatives like 'random_card' (for random selection) or 'search_cards' (for broader queries). However, it does not explicitly name these alternatives or provide exclusion criteria, leaving some inference required.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

random_card: Grade A

Draw a single random tarot card with its upright and reversed meanings.

Parameters (JSON Schema)
No parameters.
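
A minimal sketch of what this tool plausibly returns, using a two-card placeholder deck; the card meanings and response shape here are illustrative assumptions, since the page shows no output schema:

```python
import random

# Illustrative subset; the real deck and meanings come from the upstream API.
CARDS = {
    "ar00": {"name": "The Fool",
             "meaning_up": "beginnings, spontaneity",
             "meaning_rev": "recklessness, hesitation"},
    "ar01": {"name": "The Magician",
             "meaning_up": "skill, willpower",
             "meaning_rev": "manipulation, untapped talent"},
}

def random_card() -> dict:
    """Pick one card and return it with both orientations' meanings."""
    name_short = random.choice(list(CARDS))
    return {"name_short": name_short, **CARDS[name_short]}

card = random_card()
print(card["name"], "/", card["meaning_up"])
```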

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the tool's behavior (draws a random card and returns meanings), but does not mention potential traits like randomness source, rate limits, or error conditions. It adds basic context but lacks depth for a higher score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action ('Draw a single random tarot card') and adds necessary detail ('with its upright and reversed meanings'). Every word earns its place with zero waste, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is complete enough for basic use. It specifies what the tool does and what information is returned. However, without an output schema, it could benefit from more detail on the return format (e.g., structured data or text), preventing a score of 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description does not add parameter details, which is appropriate, but since there are no parameters, it compensates by fully describing the tool's function, warranting a score above the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Draw a single random tarot card') and the resource ('tarot card'), including what information is provided ('upright and reversed meanings'). It distinguishes from siblings by specifying 'single random' versus other tools like 'draw_cards' (likely multiple), 'get_card' (likely specific), and 'search_cards' (likely query-based).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('random tarot card') but does not explicitly state when to use this tool versus alternatives like 'draw_cards' or 'get_card'. It provides clear intent for obtaining a random card with meanings, but lacks explicit exclusions or named alternatives, which would be needed for a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_cards: Grade A

Search tarot cards by keyword — matches against card names and descriptions.

Parameters (JSON Schema)
query (required): Keyword or phrase to search for (e.g. "moon", "strength", "cups").
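
The description says matching runs against card names and descriptions but not how a match is decided. A sketch assuming case-insensitive substring matching over a tiny illustrative corpus (both the matching strategy and the sample entries are assumptions):

```python
# Tiny illustrative corpus; real data comes from the upstream API.
CARDS = [
    {"name_short": "ar18", "name": "The Moon", "desc": "Illusion and intuition."},
    {"name_short": "ar08", "name": "Strength", "desc": "Courage and inner strength."},
    {"name_short": "cup10", "name": "Ten of Cups", "desc": "Joy, harmony, and contentment."},
]

def search_cards(query: str) -> list[dict]:
    """Return cards whose name or description contains the query, ignoring case."""
    q = query.lower()
    return [c for c in CARDS if q in c["name"].lower() or q in c["desc"].lower()]

print([c["name_short"] for c in search_cards("moon")])  # ['ar18']
```

If the server uses token or fuzzy matching instead, results for multi-word phrases would differ; the behavioral notes below flag exactly this gap.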
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It only states the search functionality without mentioning important behavioral aspects like whether this is a read-only operation, what permissions might be required, how results are returned (format, pagination), or any rate limits. For a search tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with a single sentence that contains zero wasted words. It's front-loaded with the core purpose and efficiently communicates the essential information without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for a search tool. It doesn't explain what the tool returns (card objects? just names? full descriptions?), how results are structured, or any behavioral constraints. The description alone doesn't provide enough context for an agent to fully understand how to work with this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal value beyond the input schema, which already has 100% coverage with a clear parameter description. The description mentions 'keyword' which aligns with the schema's 'query' parameter, but doesn't provide additional semantic context about how the search works, what constitutes a match, or search syntax beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('search'), target resource ('tarot cards'), and scope ('by keyword — matches against card names and descriptions'). It distinguishes this tool from its siblings (draw_cards, get_card, random_card) by specifying it's for keyword-based searching rather than random selection or direct retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Search tarot cards by keyword'), which implicitly distinguishes it from siblings that don't involve keyword searching. However, it doesn't explicitly state when NOT to use it or name specific alternatives, keeping it from a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
