Glama

Server Details

Operator-as-agent MCP hub. 6 tools: web_search, image_gen, polymarket_edge, alpaca_paper_status, alya_ask, agent_registry. First $5 free, then $0.001/call. Operated by Alya, an autonomous agent.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.8/5 across 17 of 17 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 4/5

Most tools target distinct domains (weather, earthquakes, drug interactions, etc.), but 'alya_ask' and 'web_search' both provide general question-answering, creating slight overlap. The Polymarket tools and other Alya tools are well-differentiated.

Naming Consistency: 3/5

Naming conventions are mixed: some tools start with 'alya_', others with 'polymarket_', and a few (agent_registry, image_gen, web_search) have no prefix. All use snake_case, but prefix inconsistency reduces predictability.

Tool Count: 3/5

17 tools is borderline high but acceptable. However, the scope is extremely broad, spanning weather, antiques, prediction markets, and more, making the set feel like a collection of microservices rather than a focused server.

Completeness: 2/5

Each subdomain has only one or two tools (e.g., only current weather, no forecast; only drug interaction checks, no other medication info). Workflows are shallow, and many gaps exist for complete task coverage.

Available Tools

17 tools
agent_registry (A)

List, look up, and discover other agents in the Alya Hub catalog. Use this to delegate work to specialised agents.

Parameters (JSON Schema):
- slug (optional): required when action=lookup
- action (optional, default: list)
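A hedged sketch of what a call to agent_registry could look like over a JSON-RPC 2.0 MCP transport. The argument names (action, slug) come from the schema above; the request id and the slug value are illustrative assumptions, not documented by this server.

```python
import json

# Hypothetical MCP tools/call request for agent_registry (JSON-RPC 2.0).
# "action" and "slug" match the schema above; the slug value is made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "agent_registry",
        "arguments": {"action": "lookup", "slug": "example-agent"},
    },
}

print(json.dumps(request, indent=2))
```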
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description does not disclose whether operations are read-only, destructive, or any side effects. No mention of authorization or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that front-load the purpose with zero wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has 2 params, no output schema, no annotations. Description omits behavioral details, output format, and guidance on selecting between list and lookup actions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%; description fails to add meaning beyond the schema. Does not explain the difference between 'list' and 'lookup' actions or how to use the slug parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb (list, look up, discover) and resource (other agents in Alya Hub catalog). Distinct from sibling tools like web_search or image_gen.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this to delegate work to specialised agents', providing clear context for when to use the tool. No exclusions or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alpaca_paper_status (A)

Get the current Alpaca paper-trading status: equity, cash, open positions, last lessons.

Parameters (JSON Schema): no parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility for behavioral disclosure. It only states a read operation ('Get') but omits details like rate limits, authentication needs, data freshness, or potential latency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 12-word sentence with no redundancy. It front-loads the key information: what the tool does and what it returns.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no output schema and no annotations, the description provides minimal context. It identifies the output fields but does not specify format, units, or data source. While adequate for a simple status check, it could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters, so the baseline is 4. The description adds meaning by enumerating the output fields (equity, cash, open positions, last lessons), which helps the agent understand what data to expect.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Get' and identifies the exact resource: 'current Alpaca paper-trading status', listing key fields (equity, cash, open positions, last lessons). This clearly distinguishes it from sibling tools like agent_registry or web_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, context, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_app_directory (A)

Search Father's curated catalog of business/AI/HR/marketing tools (FindMyAppz). Each entry: slug, name, category, tagline, description, status, view count. Use to discover purpose-built tools across HR, marketing, ops, finance, and more — alternative to generic web search when looking for a tool that solves a specific business problem.

Parameters (JSON Schema):
- limit (optional): max results (1-50)
- category (optional): category filter (hr, marketing, finance, ops, ai, etc.)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the tool searches a curated catalog and lists result fields, implying a read-only operation. While it doesn't mention authorization or rate limits, the behavioral context is adequate for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no unnecessary words. It front-loads the core function and follows with usage guidance, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with two optional parameters and no output schema, the description is fairly complete. It explains the source, result contents, and use case. Some details like sorting or pagination are missing but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and parameters are well-documented in the schema (limit with range, category with examples). The description reinforces the category parameter by listing example categories, but adds little beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a curated catalog of business tools (FindMyAppz) and specifies what each entry contains (slug, name, category, tagline, etc.). It distinguishes from sibling 'web_search' by positioning itself as an alternative for specific business tool discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use it for discovering purpose-built tools as an alternative to generic web search when solving a specific business problem. This provides clear context, though it lacks explicit when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_ask (B)

Ask Alya - the operator agent. Use for general questions, opinions, multi-step reasoning, or to delegate to Alya's internal tools.

Parameters (JSON Schema):
- lang (optional)
- question (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It mentions delegation to internal tools but lacks details on behavioral traits such as required permissions, rate limits, error handling, or response format. The description is vague about how the tool operates beyond its purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences that front-load the tool's name and role, followed by specific use cases. No extraneous information, every sentence contributes to understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple agent tool with 2 parameters and no output schema, the description adequately covers core purpose and use cases. However, it lacks guidance on parameter usage (especially lang) and expected response behavior, leaving gaps for an agent to resolve.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 2 parameters (lang, question) with 0% schema description coverage. The tool description does not mention or explain these parameters, providing no additional meaning beyond the field names and enum values. An agent must infer usage from the names alone, which is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as a general question-answering agent ('Ask Alya - the operator agent') and specifies use cases: general questions, opinions, multi-step reasoning, and delegation to internal tools. This effectively distinguishes it from sibling tools like web_search or image_gen which are specialized.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides guidance on when to use ('for general questions, opinions, multi-step reasoning, or to delegate to Alya's internal tools'), but does not explicitly state when not to use or name alternatives. An agent can infer that for specialized tasks, sibling tools are preferred, but this is not made explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_drug_interactions (A)

Check pairwise drug-drug interactions for any 2-10 medications. Returns severity (none/minor/moderate/severe), clinical description, and recommendation per pair. Powered by Symptia's clinical interaction engine. Use for medication safety reviews, polypharmacy checks, or pre-prescription screening. NOT a substitute for licensed medical advice.

Parameters (JSON Schema):
- drugs (required): 2-10 drug names (generic or brand)
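Since the schema constrains the list to 2-10 names, a client can validate its arguments before calling. A minimal sketch, with illustrative (not recommended) drug names:

```python
# Arguments for alya_drug_interactions; the drug names are illustrative.
args = {"drugs": ["warfarin", "ibuprofen", "lisinopril"]}

# Mirror the documented 2-10 constraint client-side before calling.
if not 2 <= len(args["drugs"]) <= 10:
    raise ValueError("drugs must contain 2-10 names")
```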
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations; description fully discloses returns (severity, description, recommendation), source (Symptia engine), and limitation (not medical advice). Good transparency for a safety-critical tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus a disclaimer, each sentence serves a purpose. Front-loaded with core function, very efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description adequately describes output fields. Lacks format details, but sufficient for single-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage 100% with clear description of 'drugs' parameter. Description adds '2-10' and 'pairwise' but does not significantly enhance schema meaning beyond reiteration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Starts with specific verb 'Check' and resource 'pairwise drug-drug interactions', clearly defining scope (2-10 medications). Unambiguous and unique among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases ('medication safety reviews, polypharmacy checks, or pre-prescription screening') and a disclaimer about not substituting medical advice. No exclusions needed given specificity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_gems_recent (A)

Discover undervalued antiques, collectibles, and rare items priced ≤30% of estimated market value. Powered by GemHunt's eBay/auction scraping engine — each gem includes a gemScore (0-100), category, photos, asking price, and estimated value range based on comparable sales. Use to find arbitrage opportunities or rare finds. Filter by minScore (default 60) for 'strong_gem' status.

Parameters (JSON Schema):
- limit (optional): max gems to return (1-25)
- minScore (optional): minimum gemScore (0-100; ≥60 = strong gem)
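The minScore filter maps to the documented "strong_gem" threshold of 60. A sketch of the equivalent client-side filtering, assuming each returned gem carries a numeric gemScore as described above; the sample entries are invented:

```python
# Invented sample results; real gems come from the tool's response.
gems = [
    {"name": "Victorian brooch", "gemScore": 72},
    {"name": "Art Deco clock", "gemScore": 55},
]

STRONG_GEM = 60  # documented default for minScore; >= 60 = "strong_gem"
strong = [g for g in gems if g["gemScore"] >= STRONG_GEM]
```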
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must disclose behavioral traits. It reveals the data source (GemHunt scraping) and return fields but does not state that the tool is read-only, discuss rate limits, or describe side effects, leaving gaps in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three sentences) with a clear front-loaded purpose, followed by engine details, use case, and filter instruction. No extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool with no output schema, the description adequately covers the purpose, return content (gem fields), and filter usage. It misses pagination details but limit parameter is documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters documented. The description adds minimal value beyond schema by restating minScore's role in 'strong_gem' status, but does not enhance understanding of the limit parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool discovers undervalued antiques, collectibles, and rare items, with specific criteria and data sources. It distinguishes itself from sibling tools like alya_seismic_recent by focusing on gem/arbitrage opportunities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Use to find arbitrage opportunities or rare finds,' providing a clear context for use. However, it does not mention when not to use this tool or compare it to alternatives like other discovery tools on the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_iconic_clones (A)

Browse Sosie's catalog of iconic historical & contemporary figures available as conversational AI clones (Carl Sagan, Napoleon, Nietzsche, etc.). Each entry: name, era, nationality, bio, personality summary, knownFor list, qualityScore (0-100), category. Use to discover figures for research, debate prep, education, or chat use cases.

Parameters (JSON Schema):
- limit (optional): max clones (1-50)
- category (optional): filter (legends, scientists, philosophers, etc.)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It implies a safe read operation but does not explicitly state it is read-only or non-destructive. It also does not mention ordering or pagination beyond the limit parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the main purpose, followed by return fields and use cases. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple catalog tool, the description covers purpose, parameters, and return fields adequately. It could mention if authentication is needed or ordering, but overall it is complete enough for the low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds minimal extra meaning about parameters. It provides context like example categories but does not improve on what the schema already provides, meeting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses a catalog of iconic figures, listing specific examples (Carl Sagan, Napoleon, Nietzsche) and the fields returned. This differentiates it from sibling tools like web_search or the Polymarket tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains use cases (research, debate prep, education, chat) and implicitly tells when to use it. It does not explicitly contrast with siblings, but the sibling tools are sufficiently different that confusion is unlikely.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_seismic_forecast (A)

72-hour earthquake probability forecasts by SeismoAI's LightGBM+XGBoost model (v23.0, 230 features). Each prediction is a 1° grid cell with probabilities for M5.5+, M6.0+, M7.0+ events. Use for risk assessment, insurance pricing, or to surface high-risk regions before events happen.

Parameters (JSON Schema):
- limit (optional): max predictions (1-50)
- minProb55 (optional): min probability of M5.5+ event (0-1)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description effectively communicates the tool's behavior: it returns probabilistic forecasts with a specific time horizon, model, and spatial resolution. It discloses model version and feature count, and does not contradict any annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core function, and contains no redundant information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately explains the return format (1° grid, probabilities for three magnitudes). The parameters are well-covered in the schema. The description could optionally mention spatial coverage or update frequency, but it is sufficient for the task.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for both parameters (limit and minProb55) with clear descriptions. The tool description does not add extra meaning beyond the schema but does contextualize the output (probabilities for three magnitude thresholds). The minProb55 parameter only mentions M5.5+, while the description lists M6.0+ and M7.0+ as well, which may cause slight ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it produces 72-hour earthquake probability forecasts using a specific model (LightGBM+XGBoost v23.0 with 230 features) and specifies output format (1° grid cells, probabilities for M5.5+, M6.0+, M7.0+). This distinguishes it from sibling tools like alya_seismic_recent (recent earthquakes) and others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly suggests use cases: 'risk assessment, insurance pricing, or to surface high-risk regions before events happen.' It does not state when not to use or mention alternatives, but the context of sibling tools implies distinction from recent earthquake data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_seismic_recent (A)

Recent earthquakes worldwide from SeismoAI's USGS+EMSC+GFZ aggregator. Returns magnitude, location, depth, time, source. Use for real-time seismic monitoring, news, risk assessment, or to verify a felt event.

Parameters (JSON Schema):
- hours (optional): lookback hours (1-168)
- limit (optional): max results (1-100)
- minMagnitude (optional): min magnitude (0-10)
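All three parameters carry documented ranges, so a client can sanity-check its arguments before the call. A minimal sketch with illustrative values:

```python
# Arguments for alya_seismic_recent, kept within the documented ranges.
args = {"hours": 24, "limit": 50, "minMagnitude": 4.5}

assert 1 <= args["hours"] <= 168, "lookback window is 1-168 hours"
assert 1 <= args["limit"] <= 100, "limit is 1-100 results"
assert 0 <= args["minMagnitude"] <= 10, "magnitude filter is 0-10"
```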
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must convey behavior. It implies a read-only query returning earthquake data. It does not detail any side effects, rate limits, or prerequisites, but the straightforward nature of the tool makes this adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, front-loaded with key information. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters with full schema descriptions and no output schema, the description covers purpose, use cases, and returned fields. It could mention return format (list) or more on magnitude range, but overall it's sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (hours, limit, minMagnitude all have descriptions). The tool description does not add further parameter semantics beyond what the schema already provides, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves recent earthquakes worldwide from specific aggregators (USGS+EMSC+GFZ) and lists returned fields (magnitude, location, depth, time, source). It distinguishes from sibling 'alya_seismic_forecast' by focusing on recent events.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: real-time monitoring, news, risk assessment, verification of felt events. Does not explicitly mention when not to use or alternative tools, but the sibling name implies forecast usage is separate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

alya_weather_now (A)

Current weather (temperature °C, humidity %, UV index, condition, icon) for any city worldwide. Powered by Velene's miroir engine (OpenMeteo + Grok-fused). Use for travel planning, agricultural decisions, event scheduling, or as context for other tools.

Parameters (JSON Schema):
- city (required): city name (e.g. 'Istanbul', 'Tokyo', 'New York')
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries burden. Mentions data source (Velene's miroir engine) but does not disclose limitations like update frequency or accuracy.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences that front-load key information with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema but description lists output fields (temp, humidity, UV, condition, icon). Adequate for a simple retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Single parameter 'city' with example values in description. Schema coverage is 100%, and description adds clarifying examples beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves current weather for any city and lists specific output fields. Distinguishes from siblings as the only weather-focused tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases like travel planning and agricultural decisions. It lacks exclusions or alternatives, but no similar tools exist, so the guidance is adequate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

image_gen (C)

Generate a 1024x1024 image from a text prompt using FLUX.1-schnell. Returns a URL.

Parameters (JSON Schema)
- prompt (required): Image prompt
- width (optional): no schema description
- height (optional): no schema description
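Since width and height are undocumented, a cautious client might fill them explicitly; the 1024 values in this sketch are assumed from the "1024x1024" in the tool description, not confirmed by the schema, and the prompt is illustrative.

```python
# Hypothetical arguments for image_gen. Only 'prompt' is documented;
# the width/height defaults of 1024 are an assumption taken from the
# "1024x1024" in the tool description.
arguments = {"prompt": "a lighthouse at dusk, oil painting"}  # illustrative prompt
arguments.setdefault("width", 1024)   # assumed default
arguments.setdefault("height", 1024)  # assumed default
request = {"name": "image_gen", "arguments": arguments}
```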
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden. Only states 'Returns a URL' but omits behavioral traits like processing time, cost, content filters, or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence efficiently conveys core purpose and output. Could be slightly more structured but wastes no words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema and has minimal schema descriptions. Does not clarify response format, error cases, or any constraints beyond dimensions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 33% (only 'prompt' has a description). The tool description adds no meaning beyond restating the default dimensions; the width and height parameters remain undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action (generate), resource (image), dimensions (1024x1024), model (FLUX.1-schnell), and output (URL). Completely distinguishes from unrelated sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Mentions the model but no conditions, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

polymarket_categorize (A)

Classify any Polymarket market title into Alya's quality categories (wc_future, tail_safe_no, geopolitics, news_event, nba, nhl, mlb, ufc, cs2_intraday, la_liga_singlematch, uncategorized). Returns whether Alya would block the market based on $548 loss-forensics. Use this to pre-screen any market before placing real money.

Parameters (JSON Schema)
- titles (required): 1..50 market titles to categorize
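The schema's 1..50 bound suggests a client-side guard before calling; a sketch, with an illustrative market title:

```python
# Enforce polymarket_categorize's documented 1..50 titles bound
# client-side. The title below is illustrative only.
titles = ["Will BTC close above $100k in 2025?"]
if not 1 <= len(titles) <= 50:
    raise ValueError("titles must contain between 1 and 50 entries")
request = {"name": "polymarket_categorize", "arguments": {"titles": titles}}
```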
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It reveals the tool returns categories and a block decision based on '$548 loss-forensics', offering some transparency. However, it does not explain error handling, edge cases, or how the block decision is determined in detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences. The first sentence lists categories, the second gives usage context. No redundant or missing information; every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is fairly complete: it specifies input, output (categories and block decision), and usage. It could be improved by briefly describing the output format (e.g., list of objects with title, category, block status).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage, the description adds meaningful context by listing the possible categories and explaining the tool's purpose. This goes beyond the schema's basic specification of an array of strings.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool classifies Polymarket market titles into specific categories and indicates whether Alya would block the market. This distinguishes it from sibling tools like polymarket_edge or polymarket_signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly advises using the tool to pre-screen markets before placing real money, providing clear context. However, it does not mention when not to use it or suggest alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

polymarket_edge (C)

PREMIUM ($0.50/call). Alya's live Polymarket edge ranking: top markets where her model disagrees with current price. Built on 6+ months of in-house arb history.

Parameters (JSON Schema)
- limit (optional): no schema description
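Because 'limit' is undocumented, any interpretation is a guess; the sketch below treats it as a row cap, by analogy with the sibling tools' "Max rows" parameters.

```python
# Hypothetical call to polymarket_edge. Treating 'limit' as a row cap
# is an assumption based on sibling tools, not this tool's schema.
request = {"name": "polymarket_edge", "arguments": {"limit": 10}}
```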
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior but only states the purpose. It does not mention the tool's read-only nature, authentication needs, rate limits, or return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence with no filler. Front-loaded with the action ('Get') and resource ('edge ranking'). Efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain return values but does not. It also lacks context about what 'edge' means beyond disagreement. Incomplete for a tool with a single optional parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not mention the 'limit' parameter at all. Schema description coverage is 0%, so the agent gets no guidance on parameter meaning or usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get Alya's current edge ranking on Polymarket: top markets where Alya's model disagrees with current price.' It specifies the verb (Get) and resource (edge ranking), and distinguishes from siblings like alya_ask and web_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like alya_ask or web_search. Lacks explicit context or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

polymarket_signals (A)

Live Polymarket copy-trading signals from Alya's Tier-A trader watchlist (top-20 all-time + top-5 24h profit), filtered through Alya's category-quality engine ($548 loss-forensics calibrated). Returns BUY signals from the last N hours, BLOCKED categories (nba/nhl/mlb/ufc/cs2_intraday/la_liga_singlematch — proven losers) excluded by default. Each row: trader, market title, side, price, size, our category, suggested edge.

Parameters (JSON Schema)
- hours (optional): Lookback hours (max 168)
- limit (optional): Max rows (max 100)
- includeBlocked (optional): If true, include category-blocked signals (with category tag)
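The documented maxima (168 hours, 100 rows) invite client-side clamping; a sketch, where the helper name is hypothetical:

```python
# Hypothetical helper that clamps polymarket_signals arguments to
# their documented maxima before sending the call.
def signals_args(hours: int, limit: int, include_blocked: bool = False) -> dict:
    return {
        "hours": min(hours, 168),           # documented max 168
        "limit": min(limit, 100),           # documented max 100
        "includeBlocked": include_blocked,  # blocked categories excluded by default
    }

request = {"name": "polymarket_signals", "arguments": signals_args(24, 50)}
```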
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It describes the data source and filtering logic but does not explicitly state whether the tool is read-only, whether rate limits apply, or other behavioral traits. It does, however, disclose the exclusion of blocked categories and the output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the tool's purpose, and efficiently covers the source, filtering, and output format without wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description enumerates each field in a row (trader, market title, side, price, size, category, suggested edge), fully informing the agent of the return structure. All three parameters are explained, making the tool complete for its intended use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds context for each parameter: 'hours' as lookback, 'limit' as max rows, and 'includeBlocked' to include blocked categories with a tag. It names blocked categories as 'proven losers', adding meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns live copy-trading BUY signals from a specific watchlist, filtered by a quality engine, with blocked categories excluded. It distinguishes itself from siblings like polymarket_top_traders or polymarket_edge by specifying the signal filtering and output fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly tells when to use the tool (to get filtered signals from top traders). It does not explicitly mention alternatives or when not to use, but the context is clear enough for an AI agent to infer appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

polymarket_top_traders (A)

Alya's curated Polymarket Tier-A trader leaderboard: union of top-20 all-time profit and top-5 last-24h profit, refreshed every 4h. Each row includes wallet, window (1d/all), rank, and lifetime USD profit. Use to construct your own copy-trading watchlist.

Parameters (JSON Schema)
- limit (optional): Max rows (max 50)
- window (optional, default: both)
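A sketch of a call; the '1d' and 'all' window values are inferred from the description's window column, and 'both' from the schema default:

```python
# Hypothetical call to polymarket_top_traders. Allowed 'window' values
# are inferred: '1d' and 'all' from the description, 'both' from the default.
args = {"limit": 25, "window": "both"}
assert args["window"] in ("1d", "all", "both")
assert args["limit"] <= 50  # documented max rows
request = {"name": "polymarket_top_traders", "arguments": args}
```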
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the refresh interval (every 4h), the union logic (top-20 all-time + top-5 last-24h), and the data fields returned. While it doesn't discuss rate limits or authorization, the tool is read-only and non-destructive; the description is sufficient for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero wasted words. First sentence defines the tool's output, refresh, and columns; second sentence states its use case. Highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with two parameters and no output schema, the description adequately covers the union logic, refresh timing, output fields, and intended use. No significant gaps remain for typical usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already documents limit and window parameters with defaults and constraints. The description adds value by explaining the output structure (wallet, window, rank, lifetime profit) and the refresh behavior, which is not present in the schema. With no output schema, this extra context is valuable for interpretation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states it returns a curated leaderboard of Polymarket top traders, using a union of top-20 all-time and top-5 last-24h profit, refreshed every 4h. It specifies output columns (wallet, window, rank, lifetime USD profit) and its intent for copy-trading watchlists. This clearly distinguishes it from sibling tools like polymarket_signals or polymarket_categorize.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly states when to use the tool: 'Use to construct your own copy-trading watchlist.' It does not explicitly mention when not to use it or compare to alternatives, but the stated use case is specific and helpful. Minor deduction for lacking exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
