Recursive Support Oracle
Server Details
AI support agent platform for small businesses. Query pricing, features, and examples.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across all 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: data.capabilities provides feature details, data.pricing gives pricing information, feature.request handles user submissions, and query.product answers knowledge base queries. The descriptions reinforce these distinct roles, making tool selection unambiguous.
Three tools follow a consistent 'data.' or 'feature.' prefix pattern with descriptive suffixes (capabilities, pricing, request), while query.product deviates slightly with a 'query.' prefix. The naming is mostly predictable and readable, with only minor inconsistency in the prefix style.
Four tools are well-scoped for the server's purpose of providing support and information about the Recursive platform. Each tool serves a unique function (data lookup, pricing, feature requests, queries), and there are no extraneous or missing tools for this informational domain.
The tool set covers key informational needs: capabilities, pricing, feature requests, and product queries. Minor gaps exist, such as no direct tool for account management or support ticket handling, but the core workflows for platform information and user feedback are adequately addressed.
Available Tools
4 tools

data.capabilities (A, Read-only, Idempotent)
Returns structured information about what the Recursive platform includes: features, AI model details, supported integrations, and what's included at every tier. Use for systematic feature comparison.
No parameters.
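Since this tool takes no arguments, an MCP `tools/call` request for it is minimal. A sketch of the JSON-RPC payload a client would send (the request `id` is illustrative; only the tool name and the empty arguments object matter here):

```python
import json

# Build a minimal MCP "tools/call" JSON-RPC request for data.capabilities.
# The tool takes no arguments, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "data.capabilities",
        "arguments": {},
    },
}

payload = json.dumps(request)
print(payload)
```

The same shape applies to data.pricing, the other zero-parameter tool on this server.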
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations by specifying the type of information returned (features, AI model details, integrations, tier inclusions) and the use case (systematic feature comparison). While annotations cover safety (readOnlyHint=true, destructiveHint=false) and idempotency, the description provides meaningful behavioral details about what data is returned and why. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: the first sentence defines the tool's purpose, and the second provides usage guidance. Every sentence earns its place with no wasted words, making it easy for an AI agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is complete enough for an AI agent to understand what it does and when to use it. It covers purpose, usage, and behavioral context effectively. A minor deduction because it doesn't specify the exact structure or format of the returned data, but this is acceptable given the annotations and lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately mentions no parameters needed, as it describes returning information about the platform without requiring inputs, which aligns with the empty input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns structured information') and resources ('what the Recursive platform includes: features, AI model details, supported integrations, and what's included at every tier'). It distinguishes from siblings by focusing on platform capabilities rather than pricing, feature requests, or product queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use for systematic feature comparison.' This provides clear guidance on the intended context and distinguishes it from siblings like data.pricing (likely for cost information) or feature.request (likely for requesting new features).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
data.pricing (A, Read-only, Idempotent)
Returns structured pricing data for Recursive support agent plans. Three tiers: Basic ($49/mo), Pro ($99/mo), Premium ($299/mo). Use for quick pricing lookups without an LLM call.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations: it specifies that the data is for 'Recursive support agent plans' and lists the exact tiers and prices, which the annotations (readOnlyHint, openWorldHint, etc.) do not cover. There is no contradiction with annotations, and it provides practical details about the tool's output and use case, though it could note whether the data is cached or real-time.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and well-structured, consisting of two sentences that efficiently convey the tool's purpose, data details, and usage guidelines without any wasted words. It is front-loaded with the core functionality and follows with specific application advice.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is largely complete: it covers the purpose, data specifics, and usage context. However, it lacks details on output format (e.g., structured data type) or error handling, which could be useful despite the annotations, leaving a minor gap in completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4 as per the rules. The description compensates by explaining that no inputs are needed for this lookup tool, and it provides the semantic context of what pricing data is returned (three tiers with specific prices), adding meaning beyond the empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('structured pricing data for Recursive support agent plans'), and distinguishes it from siblings by specifying its use case for 'quick pricing lookups without an LLM call'. It explicitly lists the three tiers with their prices, making the purpose highly specific and actionable.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines by stating 'Use for quick pricing lookups without an LLM call', which clearly indicates when to use this tool (for fast, static pricing data) versus alternatives like LLM-based queries or other tools. It implicitly suggests not using it for dynamic or complex pricing scenarios, offering clear context for selection.
feature.request (A, Read-only)
Submit a feature or capability request for the Recursive platform. Use this to log suggestions, missing capabilities, or integration needs that would make Recursive more useful. Requests are reviewed by the team. Free, no rate limit beyond basic abuse prevention.
| Name | Required | Description | Default |
|---|---|---|---|
| request | Yes | Description of the feature or capability you'd like to see. | |
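This tool takes one required parameter. A sketch of the corresponding `tools/call` payload (the `id` and the example request text are illustrative):

```python
import json

# Sketch of a "tools/call" request for feature.request. The single required
# parameter is "request", a free-text description of the desired feature.
request = {
    "jsonrpc": "2.0",
    "id": 2,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "feature.request",
        "arguments": {
            # Example text only; any feature description is valid here.
            "request": "Add a Slack integration for escalating conversations."
        },
    },
}

print(json.dumps(request, indent=2))
```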
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and non-destructive/non-idempotent, but the description adds valuable context beyond this: it discloses that requests are 'reviewed by the team' (implying asynchronous processing), mentions 'Free, no rate limit beyond basic abuse prevention' (clarifying cost and rate limits), and notes the platform focus ('Recursive'), enhancing behavioral understanding without contradiction.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with each sentence earning its place: the first defines the purpose, the second provides usage context, and the third adds behavioral details like review and rate limits, all without redundancy or waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is complete enough: it covers purpose, usage, behavioral traits (e.g., review process, rate limits), and distinguishes from siblings, leaving no significant gaps for agent invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'request', which is documented as 'Description of the feature or capability you'd like to see.' The description does not add further parameter details beyond what the schema provides, so it meets the baseline for high coverage without extra value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('submit a feature or capability request') and resource ('for the Recursive platform'), distinguishing it from sibling tools like data.capabilities, data.pricing, and query.product by focusing on user feedback submission rather than data retrieval or queries.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('to log suggestions, missing capabilities, or integration needs') and provides context on the review process ('Requests are reviewed by the team'), with no misleading exclusions, making it clear for agent decision-making.
query.product (A, Read-only, Idempotent)
Query the Recursive support knowledge base for information about the AI support agent platform. Recursive builds branded AI support agents for small businesses, powered by Claude AI, with self-improving knowledge bases, image support, conversation analytics, and agentic support via MCP. Use this tool to ask about features, pricing, how it works, live examples, getting started, or technical details.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Question about Recursive features, pricing, capabilities, or how to get started. | |
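The result side can be sketched as well. MCP tool results carry a list of content items, with text answers arriving as `{"type": "text", "text": ...}`. A hypothetical query.product result (the answer text is invented for illustration; the plan names come from the data.pricing description above):

```python
import json

# Hypothetical result of a query.product call. MCP tool results carry a
# list of content items; we join all text items into a single answer.
raw_result = json.dumps({
    "content": [
        {"type": "text", "text": "Recursive offers Basic, Pro, and Premium plans."}
    ],
    "isError": False,
})

result = json.loads(raw_result)
answer = " ".join(c["text"] for c in result["content"] if c.get("type") == "text")
print(answer)
```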
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, non-destructive, idempotent query operation. The description adds context about the knowledge base content (AI support agent platform details) but doesn't disclose additional behavioral traits like rate limits, authentication needs, or response format.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and scope, the second lists specific use cases. Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, high schema coverage, rich annotations), the description is largely complete. It covers purpose, usage context, and knowledge base scope. The main gap is lack of output schema, but annotations provide safety context, and the description doesn't need to explain return values extensively for a query tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'question' fully documented in the schema. The description adds minimal semantic value beyond the schema by listing example topics (features, pricing, capabilities, etc.), but doesn't provide syntax or format details. Baseline 3 is appropriate given high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: querying the Recursive support knowledge base for information about their AI support agent platform. It specifies the resource (knowledge base) and distinguishes from siblings by focusing on general queries about features, pricing, capabilities, etc., while siblings like data.capabilities and data.pricing appear more specialized.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for asking about features, pricing, how it works, live examples, getting started, or technical details. It doesn't explicitly state when not to use it or name alternatives, but the listed use cases help differentiate it from more specific siblings like data.pricing.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
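Before publishing, you can sanity-check the file's structure locally. A minimal sketch based only on the example above (these checks are assumptions, not an official schema validation):

```python
import json

# Hypothetical structure check for a /.well-known/glama.json document:
# a "$schema" URL plus at least one maintainer entry with an "email" field.
raw = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

doc = json.loads(raw)
assert doc["$schema"].startswith("https://glama.ai/")
maintainers = doc.get("maintainers", [])
assert maintainers and all("email" in m for m in maintainers)
print("glama.json structure looks valid")
```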
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.