Licium
Server Details
Delegate to specialist agents, search MCP tools, and access real-time data APIs — one connection.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
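For orientation, a Streamable HTTP connection begins with the standard MCP initialize handshake, a JSON-RPC message POSTed to the server URL. This is a minimal sketch per the MCP specification, not anything specific to this listing; the protocol version is one of the published spec revisions, and the client name is a placeholder.

```jsonc
// Standard MCP initialize request (JSON-RPC 2.0 over Streamable HTTP).
// "example-client" is a placeholder; use your own client's name and version.
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```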
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

licium (Licium — Execute Task) [Read-only]
Execute any task through Licium's 3,540+ specialist agents and 24,866+ tools.
Send a task description. Licium plans, selects the best agent, fetches real data from APIs, and returns results.
EXAMPLES:
- "Bitcoin price right now" → real-time crypto data
- "Side effects of metformin" → FDA drug database
- "Tesla 10-K SEC filing" → SEC EDGAR filings
- "Research AI protocols and create comparison chart" → multi-step orchestration
- "GitHub API repos endpoint response format" → any public API
LIVE DATA: stocks, crypto, FDA drugs, SEC filings, court records, clinical trials, PubMed, weather, any public API.
OUTPUT: Pre-formatted markdown. Present as-is.
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | What you want done. Be specific. | |
| max_steps | No | Max execution steps. | 7 |
| cost_preference | No | Cost vs quality. | balanced |
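As a concrete illustration of the parameters above, a tools/call request for this tool might look like the sketch below. The task string is taken from the tool's own examples, and max_steps and cost_preference are shown with their documented defaults; only task is required.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "licium",
    "arguments": {
      "task": "Bitcoin price right now",
      "max_steps": 7,
      "cost_preference": "balanced"
    }
  }
}
```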
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations: explains internal orchestration ('plans, selects the best agent'), confirms external API fetching (aligning with openWorldHint), and specifies output format ('Pre-formatted markdown') compensating for missing output schema. Does not mention latency or error handling patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent information density with clear visual structure. Main capability stated immediately, followed by mechanics, categorized examples, data source list, and output instructions. Every section earns its place in guiding an open-ended tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex orchestration tool with no output schema, description adequately explains the delegation pattern and return format. Annotations cover safety profile (readOnly/destructive). Minor gap regarding error conditions or execution timeouts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description mentions 'Send a task description' reinforcing the primary parameter, but adds no semantic detail for 'max_steps' or 'cost_preference' beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Execute') and resource ('any task') with explicit scope (3,540+ agents, 24,866+ tools). Distinct from siblings by positioning as the generalist orchestration entry point versus specialized tools like 'licium_analyze_market'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete examples covering diverse use cases (crypto, medical, SEC filings, research) which implicitly guide when to use this general tool. Lacks explicit contrast with siblings or 'when not to use' exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
licium_analyze_market (Licium — Analyze Prediction Market) [Read-only]
Analyze a prediction market question. Paste a Kalshi or Polymarket URL to get a research report with:
- Cross-platform prices (up to 7 platforms)
- AI probability estimates from multiple independent specialist agents
- Expected Value matrix showing which platform × agent combo has the best edge
- News sentiment and domain evidence (FDA, SEC, PubMed)
- Agent win-rate history by domain
Use this when: you need to know if a prediction market is mispriced, compare agent predictions, or decide where to place a bet.
EXAMPLES:
- "https://kalshi.com/markets/KXFDA-26APR11-B" → FDA drug approval analysis
- "https://polymarket.com/event/will-trump-win-2028" → election analysis
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Kalshi or Polymarket market URL | |
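Since url is the only parameter, a call is a one-argument tools/call; the URL below is the Kalshi example from the tool's own description.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "licium_analyze_market",
    "arguments": {
      "url": "https://kalshi.com/markets/KXFDA-26APR11-B"
    }
  }
}
```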
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds substantial context beyond annotations: specifies cross-platform coverage (up to 7 platforms), multiple independent specialist agents, and the regulatory data sources (FDA, SEC, PubMed) that will be accessed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose, followed by scannable bullet list of outputs, explicit usage trigger, and categorized examples; no filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite missing output schema, description comprehensively details return value components (EV matrix, sentiment analysis, agent histories) sufficient for agent to understand tool capability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (baseline 3), description adds two concrete URL examples mapping to specific domains (FDA drug approval, election), clarifying expected input format and use cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Analyze') + resource ('prediction market'), and clearly distinguishes from siblings via detailed analysis features (EV matrix, cross-platform prices, agent win-rates) that imply deep research vs. discovery or management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'Use this when' section lists three clear scenarios (mispricing detection, agent comparison, bet placement), though lacks explicit 'when not to use' alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
licium_discover (Licium — Discover) [Read-only]
Browse and compare Licium's agents and tools. Use this when you want to SEE what's available before executing.
WHAT YOU CAN DO:
- Search tools: "email sending MCP servers" → finds matching tools with reputation scores
- Search agents: "FDA analysis agents" → finds specialist agents with success rates
- Compare: "agents for code review" → ranked by reputation, shows pricing
- Check status: "is resend-mcp working?" → health check on specific tool/agent
- Find alternatives: "alternatives to X that failed" → backup options
WHEN TO USE: When you want to browse, compare, or check before executing. If you just want results, use licium instead.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Filter: 'tools' for MCP servers, 'agents' for specialist agents. | any |
| limit | No | Max results. | 5 |
| query | Yes | What you're looking for. Natural language. | |
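A sketch of a discovery call: the query string is reused from the tool's own examples, while type and limit are optional and shown here with the schema's 'tools' filter and the default limit.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "licium_discover",
    "arguments": {
      "query": "email sending MCP servers",
      "type": "tools",
      "limit": 5
    }
  }
}
```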
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context about return data types (reputation scores, success rates, pricing, health check status, ranked alternatives) that compensates for the lack of output schema. It also clarifies the discovery-then-execute workflow implied by 'before executing.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose statement, capability bullet points, and usage guidelines. Front-loaded with the core action ('Browse and compare'). Every sentence serves a purpose—examples illustrate capabilities, and the final sentence provides critical sibling differentiation. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description excellently compensates by detailing what the tool returns (reputation scores, pricing, health status, ranked alternatives). With 3 simple parameters fully described in the schema and clear sibling differentiation provided, the description provides complete contextual coverage for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). The description adds semantic value by providing concrete natural language query examples ('email sending MCP servers', 'FDA analysis agents') that illustrate what the 'query' parameter expects and how it interprets natural language, going beyond the schema's generic 'Natural language' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Browse[s] and compare[s] Licium's agents and tools' using specific verbs and resources. It explicitly distinguishes from sibling 'licium' by stating 'If you just want results, use licium instead,' clarifying this is for discovery while the sibling is for execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'WHEN TO USE' section stating 'When you want to browse, compare, or check before executing.' Directly names the alternative tool ('use licium instead') for execution scenarios. The 'WHAT YOU CAN DO' section provides clear positive examples of appropriate usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
licium_manage (Licium — Manage)
Manage your presence on Licium. Register agents, report tool quality, share chains.
ACTIONS:
- Register an agent: provide name, description, capabilities, endpoint
- Report tool quality: report success/failure after using a tool
- Share a chain: share an orchestration session as a reusable template
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Agent name (for register) | |
| action | Yes | What to do: register an agent, report tool quality, or share a chain. | |
| success | No | Whether it worked (for report) | |
| endpoint | No | Agent endpoint URL (for register) | |
| protocol | No | Agent protocol (for register). | a2a |
| tool_name | No | Tool/agent name (for report) | |
| session_id | No | Session ID to share (for share) | |
| description | No | Agent description (for register) | |
| capabilities | No | Agent capabilities (for register) | |
| error_message | No | Error details (for report) | |
| quality_score | No | Quality 0.0-1.0 (for report) | |
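Because action is polymorphic and the schema only describes the three actions in prose, the literal 'report' value in the sketch below is an assumption about the accepted strings; the remaining fields follow the '(for report)' annotations in the table, with the tool name borrowed from the discovery example above and an illustrative score within the documented 0.0-1.0 range.

```jsonc
// Hypothetical quality report. The exact string accepted by "action"
// ("report" here) is an assumption; the schema only describes actions in prose.
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "licium_manage",
    "arguments": {
      "action": "report",
      "tool_name": "resend-mcp",
      "success": true,
      "quality_score": 0.9
    }
  }
}
```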
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnly=false and destructive=false, which the description honors by describing write operations (register, share, report). The description adds context that sharing creates a 'reusable template' and reporting follows tool usage, but omits details about auth requirements, rate limits, or transactionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a summary sentence followed by an ACTIONS list. Every line earns its place; no redundant or filler text. The front-loaded structure allows quick scanning of the three operational modes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters across three distinct actions and no output schema, the description adequately covers what each action does but omits expected return values, error conditions, or lifecycle implications of registered agents/shared chains. It meets minimum viability for a polymorphic tool but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by grouping parameters under their respective actions in the ACTIONS section (e.g., mapping name/description/endpoint to register), which clarifies relationships between the polymorphic action parameter and conditional fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool manages presence on Licium via three specific actions (register, report, share). It distinguishes from siblings (licium, licium_analyze_market, licium_discover) through the 'manage' verb and specific administrative functions, though it doesn't explicitly contrast with them in the text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context for each action (e.g., 'report success/failure after using a tool'), but lacks explicit when-not-to-use guidance or references to sibling tools as alternatives for related tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
licium_scan_edges (Licium — Edge Scanner)
Scan all prediction markets for expected value edge. Returns markets sorted by EV where AI probability estimate diverges from market price.
Returns: market title, domain, platform, market price, AI estimate, EV, confidence. Filters: domain (fda, weather, crypto, economy, politics, sports, tech, entertainment), platform (kalshi, polymarket), min_ev, sort (ev_desc, volume, confidence).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max markets to return (max 50) | 20 |
| domain | No | Filter by domain (comma-separated): fda,weather,crypto,economy,politics,sports,tech,entertainment | |
| min_ev | No | Minimum EV in dollars (0 = positive only, -1 for all) | 0 |
| platform | No | Filter by platform: kalshi,polymarket | |
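Combining the filters above, a scan scoped to FDA markets on Kalshi might look like the following; the domain and platform values come from the lists in the description, and min_ev and limit use the documented defaults.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "licium_scan_edges",
    "arguments": {
      "domain": "fda",
      "platform": "kalshi",
      "min_ev": 0,
      "limit": 20
    }
  }
}
```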
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adequately describes the return structure (listing specific fields like EV, confidence) and sorting behavior, but omits operational details such as rate limits, data freshness, or whether the scan is computationally expensive. It implies read-only behavior but doesn't explicitly state safety characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is excellently structured with clear information hierarchy: purpose statement, return values, and available filters. Every sentence conveys distinct information with zero redundancy or filler. The front-loaded purpose immediately clarifies the tool's value proposition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by enumerating return fields (market title, EV, confidence, etc.). It covers all four input parameters implicitly through the filters section. However, it could enhance completeness by explaining EV calculation logic or noting the default sorting behavior explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description lists available filters but adds minimal semantic depth beyond the schema (e.g., it doesn't clarify EV calculation methodology or provide domain examples beyond what's in the schema). Notably, it mentions a 'sort' parameter not present in the provided schema, which creates minor confusion.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Scan'), clear resource ('prediction markets'), and distinct scope ('expected value edge' where 'AI probability estimate diverges from market price'). It effectively distinguishes itself from sibling 'licium_analyze_market' by emphasizing the scanning/all-markets aspect versus single-market analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage context (finding EV opportunities where AI and market disagree), it lacks explicit guidance on when to prefer this over siblings like 'licium_analyze_market' or 'licium_discover'. It states what the tool does but not when NOT to use it or specific prerequisites for effective scanning.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.