Glama
Ownership verified

Server Details

Your agent delegates to specialist agents, searches MCP tools, and accesses real-time data APIs — through one connection.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
licium (Licium — Execute Task), Grade A
Read-only

Execute any task through Licium's 3,540+ specialist agents and 24,866+ tools.

Send a task description. Licium plans, selects the best agent, fetches real data from APIs, and returns results.

EXAMPLES:

  • "Bitcoin price right now" → real-time crypto data

  • "Side effects of metformin" → FDA drug database

  • "Tesla 10-K SEC filing" → SEC EDGAR filings

  • "Research AI protocols and create comparison chart" → multi-step orchestration

  • "GitHub API repos endpoint response format" → any public API

LIVE DATA: stocks, crypto, FDA drugs, SEC filings, court records, clinical trials, PubMed, weather, any public API.

OUTPUT: Pre-formatted markdown. Present as-is.

Parameters (JSON Schema):

  • task (required): What you want done. Be specific.

  • max_steps (optional): Max execution steps (default: 7).

  • cost_preference (optional): Cost vs quality. Default: balanced.
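The task/max_steps/cost_preference contract above can be sketched as a plain MCP `tools/call` request. This is a hypothetical payload assuming a standard JSON-RPC MCP transport; the argument values are illustrative, not prescribed by the server:

```python
import json

# Hypothetical MCP JSON-RPC "tools/call" request for the `licium` tool.
# The argument names (task, max_steps, cost_preference) come from the
# schema above; the concrete values are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "licium",
        "arguments": {
            "task": "Bitcoin price right now",
            "max_steps": 7,                 # optional; default is 7
            "cost_preference": "balanced",  # optional; default is balanced
        },
    },
}

print(json.dumps(request, indent=2))
```

Only `task` is required; the two optional arguments could be omitted and the server would fall back to the defaults listed above.
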
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint/openWorldHint), the description adds valuable execution context: it discloses the internal planning/agent-selection process, confirms 'live data' API fetching, and specifies the output format ('Pre-formatted markdown'). It does not mention error handling or timeout behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent information density with clear visual hierarchy: hook statement → mechanism explanation → labeled EXAMPLES → LIVE DATA scope → OUTPUT format. Every section earns its place; the agent/tool counts establish capability breadth, and examples are specific without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (thousands of internal agents) and lack of output schema, the description adequately explains the return format (markdown) and execution model. It could be improved by mentioning failure modes or what happens when max_steps is exceeded, but it sufficiently covers the core contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters (task, max_steps, cost_preference). The description adds 'Send a task description' which aligns with the task parameter, but adds no semantic detail beyond the schema for max_steps or cost_preference, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool as a general-purpose task executor using '3,540+ specialist agents' and distinguishes it from specialized siblings (licium_analyze_market, licium_discover, licium_manage) by positioning it as the catch-all for 'any task' including multi-step orchestration and real-time data fetching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Five concrete examples (crypto prices, FDA drugs, SEC filings, research charts, API docs) establish clear usage contexts for real-time data and research tasks. However, it lacks explicit guidance on when to use specialized siblings like licium_analyze_market instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

licium_analyze_market (Licium — Analyze Prediction Market), Grade A
Read-only

Analyze a prediction market question. Paste a Kalshi or Polymarket URL to get a research report with:

  • Cross-platform prices (up to 7 platforms)

  • AI probability estimates from multiple independent specialist agents

  • Expected Value matrix showing which platform × agent combo has the best edge

  • News sentiment and domain evidence (FDA, SEC, PubMed)

  • Agent win-rate history by domain

Use this when: you need to know if a prediction market is mispriced, compare agent predictions, or decide where to place a bet.

EXAMPLES:

  • "https://kalshi.com/markets/KXFDA-26APR11-B" → FDA drug approval analysis

  • "https://polymarket.com/event/will-trump-win-2028" → election analysis

Parameters (JSON Schema):

  • url (required): Kalshi or Polymarket market URL
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context about external data sources (7 platforms, specific AI agents, FDA/SEC/PubMed) that explains openWorldHint=true; no contradictions with readOnly/destructive hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with front-loaded purpose, bulleted outputs, clear usage section, and examples; no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by detailing the five specific report components (prices, AI estimates, EV matrix, sentiment, history) that will be returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% coverage, the description adds concrete URL examples showing valid inputs for both supported platforms.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action (analyze) with exact resource type (prediction market via Kalshi/Polymarket URL) and distinguishes from siblings (discover/manage).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this when:' section lists three specific decision scenarios (mispricing detection, agent comparison, bet placement) that clearly differentiate it from discovery or management tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

licium_discover (Licium — Discover), Grade A
Read-only

Browse and compare Licium's agents and tools. Use this when you want to SEE what's available before executing.

WHAT YOU CAN DO:

  • Search tools: "email sending MCP servers" → finds matching tools with reputation scores

  • Search agents: "FDA analysis agents" → finds specialist agents with success rates

  • Compare: "agents for code review" → ranked by reputation, shows pricing

  • Check status: "is resend-mcp working?" → health check on specific tool/agent

  • Find alternatives: "alternatives to X that failed" → backup options

WHEN TO USE: When you want to browse, compare, or check before executing. If you just want results, use licium instead.

Parameters (JSON Schema):

  • query (required): What you're looking for. Natural language.

  • type (optional): Filter: 'tools' for MCP servers, 'agents' for specialist agents. Default: any.

  • limit (optional): Max results. Default: 5.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish readOnlyHint=true and openWorldHint=true. The description adds valuable behavioral context beyond annotations by detailing what data is returned (reputation scores, success rates, pricing, health status) and explaining the comparison/ranking functionality. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with clear sections: purpose statement, bulleted capabilities, and when-to-use guidance. Front-loaded with the core function. Every line serves a purpose—examples illustrate capability while implicitly guiding parameter usage. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with simple schema and no output schema, the description is comprehensive. It compensates for missing output schema by describing return data in examples (reputation scores, pricing, health status). Sibling differentiation and usage contexts are fully covered. Could explicitly describe return structure format, but the examples provide sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). The description adds significant semantic value through concrete natural language examples ('email sending MCP servers', 'FDA analysis agents') that demonstrate how to formulate the query parameter and what types of searches are effective, going beyond the schema's 'Natural language' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'Browse and compare Licium's agents and tools' with specific verbs and resources. It clearly distinguishes from sibling 'licium' by stating 'If you just want results, use licium instead', establishing this is for discovery rather than execution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'WHEN TO USE' section stating 'When you want to browse, compare, or check before executing' and explicitly names the alternative tool ('use licium instead'). The 'WHAT YOU CAN DO' section provides five concrete usage patterns (search, compare, check status, find alternatives) with specific examples.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

licium_manage (Licium — Manage), Grade A

Manage your presence on Licium. Register agents, report tool quality, share chains.

ACTIONS:

  • Register an agent: provide name, description, capabilities, endpoint

  • Report tool quality: report success/failure after using a tool

  • Share a chain: share an orchestration session as a reusable template

Parameters (JSON Schema):

  • action (required): What to do: register an agent, report tool quality, or share a chain.

  • name (optional): Agent name (for register)

  • description (optional): Agent description (for register)

  • capabilities (optional): Agent capabilities (for register)

  • endpoint (optional): Agent endpoint URL (for register)

  • protocol (optional): Agent protocol (for register). Default: a2a.

  • tool_name (optional): Tool/agent name (for report)

  • success (optional): Whether it worked (for report)

  • quality_score (optional): Quality 0.0-1.0 (for report)

  • error_message (optional): Error details (for report)

  • session_id (optional): Session ID to share (for share)
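The action-grouped parameters above can be illustrated with one hypothetical argument set per action. This is a sketch assuming the schema as listed; every name, URL, and ID below is a made-up placeholder, not a real registration:

```python
import json

# Illustrative argument sets for `licium_manage`, one per action,
# using the action-grouped fields from the schema above.
register_args = {
    "action": "register an agent",
    "name": "filing-summarizer",                             # placeholder
    "description": "Summarizes SEC filings",
    "capabilities": "sec, summarization",
    "endpoint": "https://example.com/agents/filing-summarizer",
    "protocol": "a2a",                                       # schema default
}

report_args = {
    "action": "report tool quality",
    "tool_name": "resend-mcp",
    "success": False,
    "quality_score": 0.2,                # 0.0-1.0 per the schema
    "error_message": "timeout after 30s",
}

share_args = {
    "action": "share a chain",
    "session_id": "sess-123",            # placeholder session ID
}

for args in (register_args, report_args, share_args):
    print(json.dumps(args, indent=2))
```

Note how each optional field is only meaningful for its own action; fields outside the chosen action's group can simply be omitted.
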
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=false and openWorldHint=true, establishing this is a mutating, external-facing operation. The description adds context about what gets created/modified (agent registrations, quality reports, reusable templates), but does not disclose side effects, idempotency, authentication requirements, or rate limits that an agent would need to handle errors safely.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with a high-level summary sentence followed by a clear ACTIONS list. Information is front-loaded, every sentence earns its place, and the formatting efficiently maps parameters to their respective actions without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and clear mapping of 11 parameters to three distinct actions, the input documentation is complete. The absence of an output schema is noted, but the description adequately covers the operational scope; however, it could benefit from a brief note on return values or success indicators for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description adds valuable semantic structure by grouping parameters by action (e.g., listing which fields belong to 'register' vs 'report'). This action-based grouping prevents parameter confusion and adds meaning beyond the flat schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool manages presence on Licium via three specific verbs: register, report, and share. It identifies the resource (agents, tool quality, chains) and scope. However, it does not explicitly differentiate from siblings like 'licium_analyze_market' or 'licium_discover', relying on the tool name to imply the distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description enumerates the three possible actions (register, report, share) which implies when to use the tool, but provides no explicit guidance on when NOT to use it, prerequisites for invocation, or when to prefer sibling tools. The agent must infer usage context from the action list alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

licium_scan_edges (Licium — Edge Scanner), Grade B

Scan all prediction markets for expected value edge. Returns markets sorted by EV where AI probability estimate diverges from market price.

Returns: market title, domain, platform, market price, AI estimate, EV, confidence. Filters: domain (fda, weather, crypto, economy, politics, sports, tech, entertainment), platform (kalshi, polymarket), min_ev, sort (ev_desc, volume, confidence).

Parameters (JSON Schema):

  • domain (optional): Filter by domain (comma-separated): fda,weather,crypto,economy,politics,sports,tech,entertainment

  • platform (optional): Filter by platform: kalshi,polymarket

  • min_ev (optional): Minimum EV in dollars (default 0 = positive only, -1 for all)

  • limit (optional): Max markets to return (default 20, max 50)
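Putting the filters together, a hypothetical `tools/call` request for this tool might look like the following sketch (it assumes a standard JSON-RPC MCP transport; all argument values are illustrative):

```python
import json

# Illustrative `licium_scan_edges` call: scan FDA and crypto markets
# on Kalshi for at least $0.05 of positive expected value. Field names
# and defaults follow the schema above; the values are examples.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "licium_scan_edges",
        "arguments": {
            "domain": "fda,crypto",  # comma-separated domain filter
            "platform": "kalshi",
            "min_ev": 0.05,          # dollars; 0 = positive only, -1 = all
            "limit": 20,             # default 20, max 50
        },
    },
}

print(json.dumps(request, indent=2))
```

All four arguments are optional, so an empty `arguments` object would scan every domain and platform with the defaults above.
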
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adequately discloses the sorting logic (by EV divergence) and lists return fields (market title, domain, platform, etc.), but omits auth requirements, rate limits, caching behavior of AI estimates, and error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear 'Returns:' and 'Filters:' sections. Every sentence conveys necessary information. However, the inclusion of 'sort' in the Filters section when it is not a schema parameter creates minor structural confusion.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description appropriately documents the return fields. For a 4-parameter search tool with no annotations, it covers the essential functionality but lacks sibling differentiation and operational context (e.g., supported platforms are listed but not explained).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description repeats the enum values for domain and platform filters found in the schema but adds no additional semantic depth (e.g., what constitutes a valid 'min_ev' threshold). Notably, the description mentions a 'sort' filter that does not exist in the provided schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool scans prediction markets for expected value (EV) edge and explains the core mechanism (AI probability divergence from market price). However, it does not explicitly differentiate from siblings like 'licium_analyze_market' or 'licium_discover', which likely perform similar but distinct functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings (e.g., when to scan for edge vs. analyze a specific market). There are no prerequisites, exclusions, or workflow recommendations provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
