Alya — The Hub for Autonomous Agents
Server Details
Operator-as-agent MCP hub. 17 tools. First $5 free, then $0.001/call.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Rupert1987/alya-mcp
- GitHub Stars: 0
- Server Listing: alya-hub
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 17 of 17 tools scored. Lowest: 2.9/5.
Each tool has a unique and clearly distinct purpose, from drug interactions to earthquake forecasting to Polymarket analytics. No two tools are easily confused, even within the same prefix group (e.g., seismic_forecast vs. seismic_recent).
Naming conventions are mixed: most tools use an alya_ or polymarket_ prefix, but agent_registry, alpaca_paper_status, image_gen, and web_search break the pattern. Within each prefix group, naming is generally predictable (verb_noun), but the set as a whole lacks a unified schema.
Seventeen tools is not excessive in itself, but the server spans an extremely broad set of unrelated domains. This makes the tool set feel like a grab bag rather than a focused API, reducing its suitability for coherent agent use.
Individual domains have glaring gaps: weather only has current conditions, drug interactions only pairwise, Polymarket lacks trading execution, and the agent registry lacks registration/deletion. The coverage is skeletal across the board.
Available Tools
17 tools

agent_registry (C)
List, look up, and discover other agents in the Alya Hub catalog. Use this to delegate work to specialised agents.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | Required when action=lookup | |
| action | No | | list |
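As a hedged illustration, a call to this tool over MCP's standard JSON-RPC tools/call method might look like the sketch below. The parameter names come from the table above; the slug value is a hypothetical placeholder.

```typescript
// Sketch of an MCP tools/call request for agent_registry.
// action and slug are the documented parameters; "example-agent" is hypothetical.
const lookupRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "agent_registry",
    arguments: { action: "lookup", slug: "example-agent" },
  },
};
```

Omitting slug with action "list" (the apparent default) would presumably return the full catalog, per the description.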
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It does not mention that the tool is read-only, safe, or any auth requirements. The description focuses on purpose, not behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, but contains slight redundancy ('list, look up, and discover'). It is front-loaded with the core function, making it easy to scan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not explain return values or how to use the results for delegation. It leaves gaps in understanding for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 50% description coverage, but the description adds no extra meaning beyond what the schema already provides. It references 'list' and 'look up' which match the action enum, but does not clarify slug format or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists and looks up agents in the Alya Hub catalog, with the added context of delegating work to specialized agents. This differentiates it from sibling tools which have unrelated purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description suggests using the tool to delegate work, but does not provide explicit guidance on when not to use it or alternatives. For a straightforward catalog tool, this is adequate but lacks depth.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
alpaca_paper_status (A)
Get the current Alpaca paper-trading status: equity, cash, open positions, last lessons.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
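Because the tool takes no parameters, a sketch of the call reduces to an empty arguments object:

```typescript
// Sketch: no-argument status call; the MCP envelope is otherwise unchanged.
const statusRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "alpaca_paper_status", arguments: {} },
};
```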
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description partially discloses behavior by listing return fields. However, it does not mention authentication requirements, call frequency, or whether the data is live or cached, which are relevant for a trading status tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and enumerates the return fields. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description fully captures the tool's functionality by listing the data it provides. It is adequate for a simple status retrieval tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the schema coverage is 100%. The description adds no parameter info because none is needed, earning a baseline of 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets the current Alpaca paper-trading status and lists specific data fields (equity, cash, open positions, last lessons). It uses a specific verb and resource, distinguishing it from unrelated siblings like image_gen or web_search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives or any exclusions. Siblings are not functionally similar, but no explicit context is given.
alya_app_directory (A)
Search Father's curated catalog of business/AI/HR/marketing tools (FindMyAppz). Each entry: slug, name, category, tagline, description, status, view count. Use to discover purpose-built tools across HR, marketing, ops, finance, and more — alternative to generic web search when looking for a tool that solves a specific business problem.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (1-50) | |
| category | No | Optional category filter (hr, marketing, finance, ops, ai, etc.) |
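A sketch of a filtered catalog search, using one of the schema's own example categories; the values are illustrative:

```typescript
// Sketch: search the FindMyAppz catalog for HR tools, capped at 10 results.
// "hr" is an example category from the schema; limit must be 1-50.
const directoryRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "alya_app_directory",
    arguments: { category: "hr", limit: 10 },
  },
};
```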
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It implies a read-only search but does not explicitly state it is non-destructive or discuss any side effects, rate limits, or authentication needs. Adequate but lacks explicit behavioral details.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: the first describes what the tool does, the second lists the output fields, and the third gives usage context. No unnecessary words, well-structured, and front-loaded with key information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately summarizes the output fields. It could mention pagination or default behavior for the limit parameter, but overall provides sufficient context for a simple search tool. Distinguishes well from sibling tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, as both parameters have descriptions in the input schema. The tool description adds marginal value by listing example categories, but does not significantly enhance understanding beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a curated catalog of business/AI/HR/marketing tools, specifying the fields returned. It distinguishes itself from generic web search, making the purpose unmistakable.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use to discover purpose-built tools' and 'alternative to generic web search when looking for a tool that solves a specific business problem', providing clear when-to-use and when-not-to-use guidance.
alya_ask (A)
Ask Alya - the operator agent. Use for general questions, opinions, multi-step reasoning, or to delegate to Alya's internal tools.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | ||
| question | Yes |
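A sketch of a delegation call. The schema gives no description for lang; assuming it takes a language code such as "en" is a guess, flagged in the comment:

```typescript
// Sketch: free-form question to the operator agent.
// lang is undocumented; treating it as a language code ("en") is an assumption.
const askRequest = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: {
    name: "alya_ask",
    arguments: { question: "Summarize the tradeoffs of paper trading.", lang: "en" },
  },
};
```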
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It mentions delegation to internal tools, which is key, but does not explain potential side effects, cost, response format, or what happens if the agent cannot answer. The transparency is moderate.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the tool's identity and core use cases. Every sentence provides value without redundancy or verbosity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a general-purpose agent tool with no output schema, the description lacks details on return values, limitations, or behavior in edge cases. It provides a functional overview but is not fully comprehensive given the tool's complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% parameter description coverage, so the description should compensate. However, it does not mention or elaborate on any parameters. The schema itself is clear (question and lang), but the description adds no value beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Ask Alya - the operator agent' which provides a verb-resource pair, and the use cases (general questions, opinions, reasoning) distinguish it from sibling tools that are task-specific (e.g., web_search, image_gen). However, the verb 'ask' is somewhat vague compared to more specific actions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists when to use: for general questions, opinions, multi-step reasoning, or delegation. It implies that for specific tasks like web search or image generation, sibling tools should be used instead, but does not explicitly state when not to use or provide alternatives.
alya_demands_trending (A)
Top consumer demands aggregated globally on AskFor — real users paying $1+ to publicly request features, products, or services from companies (Netflix, Apple, governments, etc.). Each demand has: title, target company, supporter count, total revenue. Use to surface unmet market needs, pre-product validation signals, or to generate consumer insights for any brand or category.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max demands (1-50) | |
| category | No | Optional category filter (entertainment, tech, government, etc.) |
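A sketch of a category-filtered query, reusing one of the schema's example categories:

```typescript
// Sketch: top 10 entertainment demands; "entertainment" is a schema example value.
const demandsRequest = {
  jsonrpc: "2.0",
  id: 5,
  method: "tools/call",
  params: {
    name: "alya_demands_trending",
    arguments: { category: "entertainment", limit: 10 },
  },
};
```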
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description is the sole source of behavioral info. It indicates the tool returns aggregated data, implying a read operation, but does not explicitly disclose rate limits, data freshness, or read-only behavior. Adequate but not thorough.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the tool's core purpose and followed by the output fields and use cases. Every word is purposeful; no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters and no output schema, the description sufficiently explains the output structure and use cases. Missing context includes whether results are sorted (e.g., by popularity) and pagination details. Good but not exhaustive.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with descriptions. The description adds no extra meaning beyond what the schema provides, such as specifying the format of category or limit behavior. Baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool aggregates top consumer demands globally from AskFor, listing specific fields (title, target company, supporter count, total revenue) and use cases (surface market needs, validation signals). This distinguishes it from sibling tools like alya_app_directory or alya_weather_now.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises using it to surface unmet market needs, pre-product validation, or generate consumer insights, providing clear context. However, it does not mention when not to use or suggest alternative tools.
alya_drug_interactions (A)
Check pairwise drug-drug interactions for any 2-10 medications. Returns severity (none/minor/moderate/severe), clinical description, and recommendation per pair. Powered by Symptia's clinical interaction engine. Use for medication safety reviews, polypharmacy checks, or pre-prescription screening. NOT a substitute for licensed medical advice.
| Name | Required | Description | Default |
|---|---|---|---|
| drugs | Yes | 2-10 drug names (generic or brand) |
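A sketch of an interaction check; the drug names are illustrative examples within the documented 2-10 range:

```typescript
// Sketch: pairwise interaction check for three common medications.
// Names may be generic or brand per the schema; these are examples only.
const interactionRequest = {
  jsonrpc: "2.0",
  id: 6,
  method: "tools/call",
  params: {
    name: "alya_drug_interactions",
    arguments: { drugs: ["warfarin", "aspirin", "ibuprofen"] },
  },
};
```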
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description has the full burden. It explains the output but does not disclose if the tool is read-only, error handling for invalid drugs, or any rate limits. The behavior is implied but not fully transparent.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five short sentences: purpose, output fields, data source, usage context, and a disclaimer. No wasted words, front-loaded with key information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description explains the return structure (severity, description, recommendation per pair). It also includes a disclaimer. It could mention whether results are returned as a list but is otherwise complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage and describes the array and drug names. The description adds '2-10 medications' and 'generic or brand', which mostly repeats the schema. So the value added is minimal.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it checks pairwise drug-drug interactions for 2-10 medications and specifies the returned fields (severity, clinical description, recommendation). This distinguishes it from sibling tools like alya_weather_now or polymarket_categorize.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit use cases: medication safety reviews, polypharmacy checks, pre-prescription screening. It also includes a disclaimer about not substituting medical advice. However, it does not explicitly state when not to use or compare to alternatives.
alya_gems_recent (A)
Discover undervalued antiques, collectibles, and rare items priced ≤30% of estimated market value. Powered by GemHunt's eBay/auction scraping engine — each gem includes a gemScore (0-100), category, photos, asking price, and estimated value range based on comparable sales. Use to find arbitrage opportunities or rare finds. Filter by minScore (default 60) for 'strong_gem' status.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max gems to return (1-25) | |
| minScore | No | Minimum gemScore (0-100; ≥60 = strong gem) |
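A sketch that applies the 'strong_gem' threshold the description calls out:

```typescript
// Sketch: only listings with gemScore >= 60 ("strong gem" per the description).
const gemsRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "alya_gems_recent",
    arguments: { minScore: 60, limit: 10 },
  },
};
```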
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains that the tool searches a scraped database and returns items with specific attributes, implying a read-only operation. It does not mention any side effects, destructive actions, or authentication needs, but the description is transparent about its data source and intended usage.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, front-loading the core purpose and then providing essential details (data source, returned fields, parameter usage). Every sentence earns its place without redundancy or fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 optional params, no output schema), the description comprehensively covers inputs (limit, minScore), outputs (gemScore, category, photos, asking price, value range), and the intended use case. It does not require additional context like rate limits or authentication, as those are not implied by the tool's nature.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already describes the parameters with defaults and valid ranges. The description adds semantic value by explaining that `minScore` of 60 indicates 'strong_gem' status and contextualizes the threshold. This goes beyond what the schema provides, improving the agent's understanding of parameter significance.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as discovering undervalued antiques, collectibles, and rare items priced ≤30% of market value. It specifies the data source (GemHunt's scraped engine) and includes key returned fields (gemScore, category, etc.). The purpose is distinct from sibling tools, which focus on other domains (e.g., earthquakes, weather, stocks).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises using the tool to 'find arbitrage opportunities or rare finds' and mentions filtering by minScore for 'strong_gem' status. However, it does not provide explicit when-not-to-use guidance or alternative tools, though the sibling set is diverse and no direct alternative exists.
alya_iconic_clones (A)
Browse Sosie's catalog of iconic historical & contemporary figures available as conversational AI clones (Carl Sagan, Napoleon, Nietzsche, etc.). Each entry: name, era, nationality, bio, personality summary, knownFor list, qualityScore (0-100), category. Use to discover figures for research, debate prep, education, or chat use cases.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max clones (1-50) | |
| category | No | Optional filter (legends, scientists, philosophers, etc.) |
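A sketch using one of the schema's example category filters:

```typescript
// Sketch: browse philosopher clones; "philosophers" is a schema example value.
const clonesRequest = {
  jsonrpc: "2.0",
  id: 8,
  method: "tools/call",
  params: {
    name: "alya_iconic_clones",
    arguments: { category: "philosophers", limit: 5 },
  },
};
```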
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as a browse operation (likely read-only) but does not disclose behavioral traits like authentication needs, rate limits, or destructive potential. The description is adequate but lacks depth for full transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: the first defines the tool's function, the second lists the returned fields, and the third gives use cases. It is front-loaded and concise with no redundant information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description enumerates the returned fields (name, era, nationality, bio, personality summary, knownFor, qualityScore, category), which covers what an agent needs to understand the output. Limited only by missing pagination or sorting details, which are not critical for a simple list tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters ('limit' and 'category') have full schema coverage. The description adds context about the catalog content but does not enhance parameter semantics beyond the schema's own descriptions. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool browses a catalog of iconic figures as AI clones, listing the fields included. It is specific and distinguishes from siblings, none of which are similar clone-figures tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases: 'research, debate prep, education, or chat use cases.' It does not specify when not to use or name alternatives, but the sibling list shows no direct competitors, making guidance sufficient.
alya_seismic_forecast (A)
72-hour earthquake probability forecasts by SeismoAI's LightGBM+XGBoost model (v23.0, 230 features). Each prediction is a 1° grid cell with probabilities for M5.5+, M6.0+, M7.0+ events. Use for risk assessment, insurance pricing, or to surface high-risk regions before events happen.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max predictions (1-50) | |
| minProb55 | No | Min probability of M5.5+ event (0-1) |
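A sketch of a thresholded forecast query; the 0.1 probability is an illustrative value within the documented 0-1 range:

```typescript
// Sketch: grid cells with at least a 10% modeled probability of an M5.5+ event.
const forecastRequest = {
  jsonrpc: "2.0",
  id: 9,
  method: "tools/call",
  params: {
    name: "alya_seismic_forecast",
    arguments: { minProb55: 0.1, limit: 20 },
  },
};
```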
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the model version, feature count, grid resolution, and magnitude thresholds. However, it omits limitations such as accuracy metrics, data freshness, or whether the forecast is updated in real-time. The predictive nature is implied but not thoroughly explained.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: the first two provide core details (model, timeframe, grid, thresholds) and the third suggests use cases. Every word adds value; no redundancy or filler.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has simple parameters and no output schema. The description explains the output format (1° grid cell with probabilities for three magnitudes). While it lacks details on interpreting probabilities or pagination, the information is sufficient for a well-informed agent to use the tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters ('limit' and 'minProb55'). The description adds context by mentioning magnitude thresholds (M5.5+, M6.0+, M7.0+), but it does not elaborate on parameter usage beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides 72-hour earthquake probability forecasts using a specific model (SeismoAI's LightGBM+XGBoost). It specifies the resource (earthquake probabilities) and the action (forecast). This distinguishes it from siblings like 'alya_seismic_recent' which likely provides recent earthquake data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists explicit use cases: 'risk assessment, insurance pricing, or to surface high-risk regions before events happen.' However, it does not provide guidance on when not to use the tool or mention alternative tools for related tasks (e.g., 'alya_seismic_recent' for historical data).
alya_seismic_recent (A)
Recent earthquakes worldwide from SeismoAI's USGS+EMSC+GFZ aggregator. Returns magnitude, location, depth, time, source. Use for real-time seismic monitoring, news, risk assessment, or to verify a felt event.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Lookback hours (1-168) | |
| limit | No | Max results (1-100) | |
| minMagnitude | No | Min magnitude (0-10) |
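A sketch combining the three documented filters; values are illustrative and within the stated ranges:

```typescript
// Sketch: M4.5+ earthquakes from the last 24 hours (hours <= 168, limit <= 100).
const recentQuakesRequest = {
  jsonrpc: "2.0",
  id: 10,
  method: "tools/call",
  params: {
    name: "alya_seismic_recent",
    arguments: { hours: 24, minMagnitude: 4.5, limit: 50 },
  },
};
```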
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as idempotency, rate limits, or behavior with no results. The description only states what it returns.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with front-loaded purpose and no extraneous information. Every word adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple query tool with no output schema, the description adequately covers what the tool does and typical use cases. However, it lacks details on return structure or edge cases.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no extra meaning beyond the schema, earning a baseline score of 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns recent earthquake data from a specific aggregator (USGS+EMSC+GFZ) with specified fields. The name 'alya_seismic_recent' is distinct from sibling 'alya_seismic_forecast', so purpose is well-defined.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides appropriate use cases (monitoring, news, risk assessment) but does not explicitly exclude scenarios or mention alternatives like 'alya_seismic_forecast' for forecast data.
alya_weather_now (A)
Current weather (temperature °C, humidity %, UV index, condition, icon) for any city worldwide. Powered by Velene's miroir engine (OpenMeteo + Grok-fused). Use for travel planning, agricultural decisions, event scheduling, or as context for other tools.
| Name | Required | Description | Default |
|---|---|---|---|
| city | Yes | City name (e.g. 'Istanbul', 'Tokyo', 'New York') |
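A sketch using one of the schema's own example cities:

```typescript
// Sketch: current conditions for Istanbul (a schema example value).
const weatherRequest = {
  jsonrpc: "2.0",
  id: 11,
  method: "tools/call",
  params: {
    name: "alya_weather_now",
    arguments: { city: "Istanbul" },
  },
};
```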
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses data source ('Velene's miroir engine (OpenMeteo + Grok-fused)') and what data is returned. With no annotations, it carries the burden; it mentions fusion but lacks details on potential limitations like data freshness or rate limits. Still helpful.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. The main output is front-loaded in the first sentence, followed by the data source and usage guidance.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 param, no output schema), the description covers purpose, usage, and output fields. Could specify return format but sufficient for a simple tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'city,' which already provides examples. The description adds 'for any city worldwide' but no further parameter semantics beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Current weather (temperature °C, humidity %, UV index, condition, icon) for any city worldwide,' using a specific verb and resource. It clearly distinguishes from sibling tools like alya_seismic_forecast and web_search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases: 'travel planning, agricultural decisions, event scheduling, or as context for other tools.' While no explicit exclusions or alternatives are given, the context is clear given no other weather tools exist.
image_gen (B)
Generate a 1024x1024 image from a text prompt using FLUX.1-schnell. Returns a URL.
| Name | Required | Description | Default |
|---|---|---|---|
| width | No | ||
| height | No | ||
| prompt | Yes | Image prompt |
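A sketch of a prompt-only call. width and height carry no schema descriptions; whether they override the default 1024x1024 is an assumption, so they are omitted here:

```typescript
// Sketch: generate an image from a text prompt; the prompt is illustrative.
// width/height are undocumented and left unset (1024x1024 default per the description).
const imageRequest = {
  jsonrpc: "2.0",
  id: 12,
  method: "tools/call",
  params: {
    name: "image_gen",
    arguments: { prompt: "a lighthouse at dusk, oil painting style" },
  },
};
```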
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states it generates an image and returns a URL, but omits important behavioral traits like rate limits, authentication requirements, or any side effects (e.g., costs, content policies). The model mention adds some value, but overall insufficient.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with the main action, and contains only two sentences. Every word adds value—no fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low schema coverage (33%) and no output schema, the description should provide more context about parameter usage, model behavior, and output format. It only covers the basic purpose and return type, leaving gaps for a complete understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 33% (only prompt has a basic description). The description mentions '1024x1024' but does not explain the width and height parameters beyond that, nor does it add detail to the prompt parameter. The default dimensions are noted, but parameter semantics are largely missing.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's function: generate a 1024x1024 image from a text prompt using FLUX.1-schnell, and that it returns a URL. The verb 'Generate' plus the resource 'image' and model specification makes the purpose very clear. Sibling tools are unrelated, so no differentiation needed.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool, such as prerequisites, limitations, or alternatives. While siblings are unrelated, the lack of any usage context reduces its helpfulness.
polymarket_categorize (A)
Classify any Polymarket market title into Alya's quality categories (wc_future, tail_safe_no, geopolitics, news_event, nba, nhl, mlb, ufc, cs2_intraday, la_liga_singlematch, uncategorized). Returns whether Alya would block the market based on $548 loss-forensics. Use this to pre-screen any market before placing real money.
| Name | Required | Description | Default |
|---|---|---|---|
| titles | Yes | 1..50 market titles to categorize |
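A sketch of a pre-screening call; the two titles are invented examples, not real Polymarket markets:

```typescript
// Sketch: categorize two market titles (schema allows 1-50 per call).
const categorizeRequest = {
  jsonrpc: "2.0",
  id: 13,
  method: "tools/call",
  params: {
    name: "polymarket_categorize",
    arguments: {
      titles: [
        "Will Team A win the 2026 World Cup?",
        "Will the Fed cut rates in March?",
      ],
    },
  },
};
```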
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It reveals key behavioral traits: classification into categories and a blocking decision based on '$548 loss-forensics.' This goes beyond a simple 'classify' and adds specific financial context, though it could mention rate limits or side effects.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: it front-loads the purpose and categories, states what is returned, and ends with a clear usage directive. Every word adds value; no filler or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, the description explains that it returns whether Alya would block the market, but it could be more explicit about also returning the category assignments. However, for a simple single-parameter tool, it provides sufficient context for an AI agent to understand the tool's role and output.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the parameter 'titles' with a description of '1..50 market titles to categorize.' The description adds meaning by indicating what will be done with the titles (classification into categories, blocking decision) and the outcome (blocking flag), which complements the schema well.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it classifies Polymarket market titles into specific categories (e.g., wc_future, tail_safe_no) and returns whether Alya would block the market. This verb+resource combination is specific and distinguishes it from sibling tools like polymarket_edge or polymarket_signals, which likely serve different purposes.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: 'Use this to pre-screen any market before placing real money.' It provides clear context but does not mention when not to use it or suggest alternatives among siblings.
polymarket_edge (A)
PREMIUM ($0.50/call). Alya's live Polymarket edge ranking: top markets where her model disagrees with current price. Built on 6+ months of in-house arb history.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 5 |
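A sketch of a premium edge-ranking call. The limit parameter has no schema description; treating it as a result cap (with the default of 5 noted in the evaluation below) is an inference:

```typescript
// Sketch: request the top 5 edge-ranked markets. Note the $0.50/call charge.
// limit is undocumented; "result cap" semantics are assumed.
const edgeRequest = {
  jsonrpc: "2.0",
  id: 14,
  method: "tools/call",
  params: {
    name: "polymarket_edge",
    arguments: { limit: 5 },
  },
};
```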
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It discloses the tool returns a ranking based on disagreement, which is a read operation. However, it does not explain the meaning of 'edge' or any potential behavioral side effects. It is minimally transparent but not misleading.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and immediately states the cost, the action, and the result. It is front-loaded and contains no extraneous information. Every word is necessary.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a simple parameter, the description provides basic context but leaves gaps. It does not explain the output format, how the ranking is computed, or what 'edge' means. It is complete enough for a simple tool but lacks depth for full understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has one parameter 'limit' with no description, and the description does not mention it or clarify its purpose. The default value of 5 hints at a count, but the description adds no value beyond the schema. With 0% schema description coverage, the description should compensate, but it fails to.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves Alya's edge ranking on Polymarket, specifically the top markets where Alya's model disagrees with the current price. It names a specific resource ('edge ranking') and distinguishes itself from sibling tools like alya_ask or web_search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for finding markets with disagreement, but it does not provide explicit guidance on when to use it versus alternatives, nor does it mention when not to use it. It lacks usage context such as prerequisites or typical scenarios.
polymarket_signals (A)
Live Polymarket copy-trading signals from Alya's Tier-A trader watchlist (top-20 all-time + top-5 24h profit), filtered through Alya's category-quality engine ($548 loss-forensics calibrated). Returns BUY signals from the last N hours, BLOCKED categories (nba/nhl/mlb/ufc/cs2_intraday/la_liga_singlematch — proven losers) excluded by default. Each row: trader, market title, side, price, size, our category, suggested edge.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Lookback hours (max 168) | |
| limit | No | Max rows (max 100) | |
| includeBlocked | No | If true, include category-blocked signals (with category tag) |
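A sketch that widens the lookback and opts into blocked categories so the agent can see why rows were filtered; values are illustrative:

```typescript
// Sketch: BUY signals from the last 48 hours, including category-blocked rows.
const signalsRequest = {
  jsonrpc: "2.0",
  id: 15,
  method: "tools/call",
  params: {
    name: "polymarket_signals",
    arguments: { hours: 48, limit: 25, includeBlocked: true },
  },
};
```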
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description bears full burden. It discloses the source (Tier-A watchlist, loss-forensics), default exclusions, and output fields. It does not mention safety, but the tool is clearly a read/signal retrieval, not destructive.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description packs a lot of information into three sentences, front-loaded with the main purpose. It could be slightly more concise but is efficient for the complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description lists output fields (trader, market title, side, etc.) and explains filtering logic (blocked categories, lookback). This covers expected return structure for a signal tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good parameter descriptions. The description adds context by explaining the 'includeBlocked' default and linking hours to lookback. It reinforces the parameter meanings without being redundant.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'Live Polymarket copy-trading signals' from a specific watchlist and quality filter, and lists the output fields. It distinguishes itself from siblings like polymarket_categorize and polymarket_edge by focusing on signals.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains default behavior (blocked categories excluded) and parameters (hours, limit, includeBlocked). However, it lacks explicit guidance on when to use this tool versus alternatives, though the purpose implies it's for signals.
polymarket_top_traders (A)
Alya's curated Polymarket Tier-A trader leaderboard: union of top-20 all-time profit and top-5 last-24h profit, refreshed every 4h. Each row includes wallet, window (1d/all), rank, and lifetime USD profit. Use to construct your own copy-trading watchlist.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows (max 50) | |
| window | No | | both |
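A sketch restricted to the all-time window. The window parameter has no schema description; assuming it accepts "1d", "all", or the default "both" is inferred from the output fields:

```typescript
// Sketch: top all-time traders only; window values are inferred, not documented.
const leaderboardRequest = {
  jsonrpc: "2.0",
  id: 16,
  method: "tools/call",
  params: {
    name: "polymarket_top_traders",
    arguments: { window: "all", limit: 20 },
  },
};
```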
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden of transparency. It discloses the refresh cadence (every 4h), the composition of the leaderboard (union of top-20 all-time and top-5 last-24h), and the output fields. For a read-only tool, this is adequate but could mention safety or idempotency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the purpose, and every word earns its place. It is efficient and easy to parse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and moderate complexity, the description covers the purpose, refresh rate, output fields, and use case. It could elaborate on the return format (e.g., JSON array) or sorting, but it is sufficiently complete for an agent to understand the tool's behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (limit has a description, window does not). The description adds context for the window parameter by mentioning 'window (1d/all)' in the output, but does not explain the 'both' option or provide additional semantics beyond the enum. The limit parameter's schema description is clear, so the description adds marginal value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a curated leaderboard of top Polymarket traders, combining top-20 all-time and top-5 last-24h profit, refreshed every 4h. This specific verb-resource combination and the mention of 'copy-trading watchlist' distinguishes it from sibling Polymarket tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly recommends using it to construct a copy-trading watchlist, providing a clear use case. However, it does not explicitly state when not to use it or mention alternatives among siblings, which would improve guidance.
web_search (A)
Search the live web in Turkish or English and return a synthesized answer with sources. Powered by Alya's research engine.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Output language | |
| query | Yes | Search query |
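A sketch of an English-language query; per the description, lang presumably also accepts Turkish:

```typescript
// Sketch: synthesized web search with sources; the query is illustrative.
const searchRequest = {
  jsonrpc: "2.0",
  id: 17,
  method: "tools/call",
  params: {
    name: "web_search",
    arguments: { query: "latest MCP specification changes", lang: "en" },
  },
};
```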
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool synthesizes answers with sources, but does not mention read-only nature, rate limits, or authentication needs, which are relevant for behavioral clarity.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, front-loading the action and output. Every word is necessary and no redundant information is present.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity, no output schema, and minimal parameters, the description provides sufficient context: it states the search scope (live web), language options, and output type (synthesized answer with sources). A small improvement would specify the return format (e.g., text with citations), but current version is nearly complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions for both parameters. The description adds that search is in Turkish or English, aligning with the enum, but does not provide additional parameter-level details beyond what the schema offers.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'search', the resource 'live web', and the output 'synthesized answer with sources'. It also specifies language support (Turkish or English), distinguishing it from sibling tools like alya_ask or agent_registry.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for web searching but does not explicitly state when to use this tool versus alternatives like alya_ask, nor does it provide exclusions or prerequisites.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.