
PaKi Curator — Visual Medicine Art Catalog

Server Details

300 contemplative moving art works by César Yagüe. Search, browse, get recommendations.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
browse_collections — Grade A

Browse all 13 collections in César Yagüe's catalog. Each collection has a unique curatorial essence, profile, and recommended viewing context.

Parameters (JSON Schema):
- collection (optional): Get details of a specific collection by name (omit to list all)
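As a sketch, an MCP client would invoke this tool with a JSON-RPC `tools/call` request; the request shape below follows the MCP specification, while the collection name is an illustrative value taken from the examples elsewhere on this page:

```python
import json

# Minimal MCP "tools/call" request for browse_collections.
# Omitting the optional "collection" argument lists all 13 collections;
# passing a name returns that one collection's details.
def browse_collections_request(request_id, collection=None):
    arguments = {} if collection is None else {"collection": collection}
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "browse_collections", "arguments": arguments},
    }

list_all = browse_collections_request(1)
one = browse_collections_request(2, collection="Splendor")
print(json.dumps(list_all))
```

The helper name is an assumption; only the `method`, `params.name`, and `params.arguments` fields are prescribed by the protocol.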
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully describes the content structure of what is returned (curatorial essence, profile, viewing context) but omits operational details like read-only safety, pagination behavior, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first establishes scope and action, the second describes data richness. Every word earns its place and the description is appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one optional parameter, no nested objects, no output schema), the description adequately covers the domain by explaining what constitutes a collection. It appropriately compensates for missing annotations by describing the return value's content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage and fully describes the optional 'collection' parameter, the description adds valuable cardinality context by specifying '13 collections'. This helps the agent understand the scope when omitting the parameter to list all items.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses collections in César Yagüe's catalog and specifies the exact count (13), providing concrete scope. However, it lacks explicit differentiation from sibling 'catalog_overview' which might also interact with catalog-level data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence implies usage by describing the value of the data returned (curatorial essence, viewing context), suggesting when this tool is valuable. However, it provides no explicit comparison to alternatives like 'catalog_overview' or guidance on when to browse vs search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

catalog_overview — Grade A

Get a high-level overview of the entire art catalog: total works, collections, price range, resolutions available, availability stats, and an introduction to Visual Medicine.

Parameters (JSON Schema): none
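Since the tool takes no parameters, a call sends an empty `arguments` object. A minimal sketch, following the MCP `tools/call` request shape:

```python
import json

# catalog_overview takes no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "catalog_overview", "arguments": {}},
}
print(json.dumps(request))
```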

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses what data is returned (total works, price ranges, Visual Medicine intro), adding valuable behavioral context. However, it lacks explicit safety information (read-only status, idempotency, side effects) that would be necessary for a complete behavioral profile without annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the core action. Every clause serves a purpose: the action ('Get a high-level overview'), the scope ('entire art catalog'), and the specific return values (colon-separated list). Zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description compensates adequately by enumerating the expected return values (stats, ranges, intro). For a simple aggregate retrieval tool, this is sufficient, though an output schema would improve completeness further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to calibration guidelines, 0 params = baseline 4. The description correctly provides no parameter details since none exist, and no compensation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('high-level overview of the entire art catalog') and clearly distinguishes this from siblings by emphasizing 'overview' versus their specific retrieval functions (browse, get, search, recommend). It enumerates specific data points returned (total works, collections, price range, resolutions, availability stats).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description does not explicitly state 'use this when...' or name alternatives, the phrase 'high-level overview of the entire art catalog' provides clear contextual differentiation from siblings that target specific artworks, collections, or searches. The scope is unambiguously broad/summary-level.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_artwork — Grade A

Get complete details of a specific artwork by name, including curatorial essence, ideal spaces, vibrational frequency, signature phrase, pricing, and technical specs.

Parameters (JSON Schema):
- name (required): Artwork name or partial name (e.g., "Pure White Rouge", "Marea De Presencia", "Sara")
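Since `name` is the only parameter and is required, a client-side guard avoids a wasted round trip on an empty value. A hedged sketch (the helper name is an assumption; the artwork title is one of the schema's own examples):

```python
# Builds an MCP "tools/call" request for get_artwork, rejecting
# empty names before they reach the server.
def get_artwork_request(request_id, name):
    if not name or not name.strip():
        raise ValueError("get_artwork requires a non-empty 'name'")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_artwork", "arguments": {"name": name}},
    }

req = get_artwork_request(1, "Marea De Presencia")
```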
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing what the tool returns. It comprehensively lists return fields (curatorial essence, vibrational frequency, pricing, etc.) which is critical given the lack of output schema. However, it omits safety context (read-only nature) and error behavior (e.g., partial name matching limits).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence structure front-loads the action ('Get complete details') and efficiently enumerates return fields without filler. Every clause earns its place by conveying either the operation or return value specifics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input and lack of output schema, the description appropriately compensates by detailing return fields. However, it lacks error-handling context (what happens when partial matches return multiple results or no matches) given the 'partial name' capability in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the 'name' parameter already documented with examples and partial-name semantics. The description mentions 'by name' but adds no additional syntax, validation rules, or format details beyond the schema. Baseline 3 is appropriate when schema is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with resource 'artwork' and clarifies scope ('complete details'). It implicitly distinguishes from sibling tools: 'specific artwork by name' contrasts with 'search_artworks' and 'browse_collections', while the detailed field list differentiates from 'catalog_overview'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('by name') suggesting use when the artwork name is known, but lacks explicit when-to-use guidance or contrast with 'search_artworks' for fuzzy discovery versus exact retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_for_space — Grade A

Get art recommendations for a specific space or environment. Describe the space and PaKi will suggest the most fitting works based on curatorial criteria: vibrational frequency, ideal spaces, audience type, and artistic essence.

Parameters (JSON Schema):
- space_description (required): Describe the space: type, atmosphere, purpose, size (e.g., "A zen spa with minimalist decor and large walls", "A hospital waiting room that needs calming art", "A luxury hotel lobby in Marbella")
- count (optional): Number of recommendations (default: 5, max: 15)
- budget_max (optional): Maximum budget in EUR
- resolution (optional): Preferred resolution: HD, 4K, 8K
- orientation (optional): Preferred orientation
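A client can mirror the documented default and cap for `count` and send only the filters that were actually set. A minimal sketch under those assumptions (the helper is hypothetical, and the server presumably applies its own defaults regardless):

```python
# Builds the "arguments" object for recommend_for_space, applying the
# documented default (count: 5) and cap (count <= 15) client-side and
# omitting unset optional filters.
def recommend_args(space_description, count=None, budget_max=None,
                   resolution=None, orientation=None):
    args = {
        "space_description": space_description,
        "count": 5 if count is None else min(count, 15),
    }
    if budget_max is not None:
        args["budget_max"] = budget_max
    if resolution is not None:
        args["resolution"] = resolution
    if orientation is not None:
        args["orientation"] = orientation
    return args

args = recommend_args("A zen spa with minimalist decor and large walls",
                      count=20, resolution="4K")
```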
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It adds valuable context about recommendation logic ('vibrational frequency', 'ideal spaces', 'audience type') but omits the safety profile (read-only vs. destructive), idempotency, and error conditions. There is no contradiction with annotations (none provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. Front-loaded with action ('Get art recommendations'), followed by mechanism ('PaKi will suggest...based on...'). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for the input side given 100% schema coverage, but lacking return value description and safety/disposition details. Without output schema or annotations, the description should ideally disclose what the recommendations look like (format, content richness).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. The description mentions 'Describe the space', which aligns with the required parameter, but adds no syntax, format constraints, or semantic details beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') and resource ('art recommendations') with explicit scope ('for a specific space or environment'). The mention of 'curatorial criteria' and 'PaKi' distinguishes this from sibling search/browse tools by emphasizing contextual curation over simple retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context distinguishing it from siblings by focusing on spatial/environmental recommendation versus general search or browsing. However, it lacks explicit 'when not to use' guidance and a direct comparison to alternatives like search_artworks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_artworks — Grade B

Search César Yagüe's art catalog of 300 contemplative moving art works (Visual Medicine). Filter by keywords, collection, resolution, price range, orientation, or type of space. Returns matching artworks with curatorial descriptions.

Parameters (JSON Schema):
- query (optional): Search keywords (e.g., "water", "meditation", "golden", "cosmos"). Searches titles, curatorial notes, and keywords.
- collection (optional): Filter by collection name (e.g., "Commonground", "Splendor", "Vida Contemplativa", "Floraciones Del Umbral")
- resolution (optional): Filter by resolution: HD, 4K, 5K, 8K, 10K, 12K, 16K
- min_price (optional): Minimum price in EUR
- max_price (optional): Maximum price in EUR
- orientation (optional): Filter by orientation
- space_type (optional): Type of space to find art for (e.g., "spa", "hotel lobby", "clinic", "meditation room", "corporate office", "residential")
- availability (optional): Filter by availability (default: available)
- limit (optional): Maximum results to return (default: 10, max: 50)
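With nine optional filters, a client typically builds the `arguments` object by dropping anything unset. A hedged sketch that also mirrors the documented defaults (`limit: 10`, capped at 50; `availability: available`) client-side; the helper is an assumption, not part of the server's API:

```python
# Assembles search_artworks arguments: unset filters are omitted,
# "limit" is defaulted to 10 and capped at 50, and "availability"
# falls back to the documented default of "available".
def search_args(**filters):
    args = {k: v for k, v in filters.items() if v is not None}
    args["limit"] = min(args.get("limit", 10), 50)
    args.setdefault("availability", "available")
    return args

args = search_args(query="water", max_price=2000, collection=None, limit=80)
```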
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden of behavioral disclosure; it notes that the tool 'Returns matching artworks with curatorial descriptions' and specifies the catalog size (300 works). However, it omits operational details such as pagination behavior, rate limits, or explicit confirmation that this is a safe, non-destructive read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three efficient, front-loaded sentences: the first establishes domain and scope (artist and catalog), the second lists filtering capabilities, and the third describes the return value. Every sentence earns its place with zero redundancy or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the comprehensive schema coverage (100%) across nine parameters and the absence of an output schema, the description provides adequate context by specifying the artist name, artwork count, and return format (artworks with curatorial descriptions). It sufficiently compensates for missing structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline score of 3. The description summarizes the filter categories available (grouping min_price and max_price conceptually as 'price range') but adds minimal semantic value, examples, or usage patterns beyond what the schema already documents for each of the 9 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (Search) and resource (César Yagüe's art catalog of 300 contemplative moving art works), clearly defining the tool's scope and domain (Visual Medicine). However, it does not explicitly differentiate from the sibling tool recommend_for_space, which also handles space_type filtering, leaving potential ambiguity about which tool to use for space-related queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description enumerates available filters (keywords, collection, resolution, price range, orientation, or type of space), it provides no explicit guidance on when to use this search tool versus alternatives like browse_collections or recommend_for_space. It lacks criteria for tool selection, such as distinguishing between broad filtering versus curated recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
