Ragora
Server Details
Search your knowledge bases from any AI assistant using hybrid RAG.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: velarynai/ragora-mcp
- GitHub Stars: 0
- Server Listing: ragora
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
3 tools
discover_collections (Read-only)
Discover all knowledge bases you have access to.
Returns collection names, descriptions, content types, stats, available operations, and usage examples for each collection. Call this first to understand what data is available before searching.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
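For reference, a minimal sketch of the JSON-RPC `tools/call` request an MCP client would issue for this zero-parameter tool (the request `id` is arbitrary and chosen by the client):

```python
import json

# MCP wraps tool invocations in a JSON-RPC 2.0 "tools/call" request.
# discover_collections takes no input, so "arguments" is empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # arbitrary client-chosen request id
    "method": "tools/call",
    "params": {
        "name": "discover_collections",
        "arguments": {},  # no parameters for this tool
    },
}

payload = json.dumps(request)
print(payload)
```

The server replies with the collection metadata described above inside the tool result.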
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
ReadOnly annotation confirms safety, while description adds substantial context: specifies return payload structure (names, descriptions, content types, stats, available operations, usage examples) and access scope ('you have access to'). Does not mention rate limits or pagination, but output schema handles return details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences structured logically: purpose, return values, usage guidance. No redundant text; 'Call this first' front-loads the critical workflow instruction. Efficient information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully adequate for a zero-parameter discovery tool. ReadOnly annotation covers safety profile; output schema exists to document returns; description provides sufficient overview of what gets discovered and workflow positioning.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, which per guidelines warrants a baseline score of 4. No parameter description is required or possible given the empty input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Discover' with resource 'knowledge bases' (synonymous with collections). It distinguishes from siblings 'search' and 'search_collection' by clarifying this returns metadata (names, descriptions, stats) rather than content search results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Call this first... before searching,' establishing the prerequisite workflow order and implicitly directing the agent to use sibling 'search' tools only after this discovery step. Clear temporal guidance on when to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Read-only)
Search across ALL your knowledge bases at once.
Use this when you want broad results or aren't sure which collection has the answer. For targeted search in a specific collection, use search_collection() instead.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language search query. | |
| top_k | No | Maximum number of results to return. | 5 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
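A sketch of the corresponding `tools/call` payload for a broad, cross-collection query (the query text and `top_k` override below are illustrative; `top_k` defaults to 5 server-side if omitted):

```python
import json

# Hypothetical federated search request: only "query" is required,
# "top_k" is optional and overrides the server-side default of 5.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {
            "query": "What is the vacation carry-over policy?",  # example query
            "top_k": 3,
        },
    },
}

payload = json.dumps(request)
```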
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, covering the safety profile. The description adds valuable behavioral context about the federated scope ('across ALL'), but does not elaborate on search methodology (semantic vs keyword), ranking behavior, or performance characteristics. With output schema present, this is acceptable coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences that are perfectly structured: purpose declaration first, usage context second, and sibling alternative third. No redundancy or wasted words; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (readOnlyHint), output schema, and only two simple parameters, the description successfully covers purpose, scope, and usage differentiation. It adequately prepares the agent to invoke the tool correctly without needing to describe return values (handled by output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'Natural language search query' and 'Maximum number of results' clearly documented in the schema. The description does not add parameter syntax, formatting details, or examples beyond what the schema provides, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), resource ('knowledge bases'), and crucial scope ('ALL...at once'). It effectively distinguishes this from the sibling tool search_collection by emphasizing the broad, cross-collection nature versus targeted search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('when you want broad results or aren't sure which collection has the answer') and explicitly names the alternative tool for the opposite case ('For targeted search in a specific collection, use search_collection() instead').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_collection (Read-only)
Search a specific knowledge base by name.
Use discover_collections() first to find available collection names.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language search query. | |
| top_k | No | Maximum number of results to return. | 5 |
| filters | No | Optional metadata filters. Use discover_collections() to see available filter fields for each collection. | |
| custom_tags | No | Optional explicit tags to scope retrieval. Use discover_collections() to see available tags for each collection. | |
| collection_name | Yes | Human-readable collection name or slug (e.g. "employee_handbook"). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
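A sketch of a fully populated `tools/call` request for a scoped search. The filter field and tag values below are hypothetical; in practice they come from a prior `discover_collections()` call, as the parameter descriptions advise:

```python
import json

# Hypothetical scoped search request. collection_name uses the slug
# example from the schema; the "filters" field and "custom_tags" value
# are placeholders that would normally come from discover_collections().
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "search_collection",
        "arguments": {
            "collection_name": "employee_handbook",
            "query": "parental leave eligibility",
            "top_k": 5,
            "filters": {"department": "HR"},  # assumed filter field
            "custom_tags": ["policy"],        # assumed tag
        },
    },
}

payload = json.dumps(request)
```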
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, establishing safety profile. Description adds no behavioral context beyond this (e.g., semantic vs keyword matching, ranking behavior, what constitutes a result). With annotations covering safety, this meets baseline but adds no supplementary behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence states purpose; second provides essential prerequisite. Front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate completeness given existence of output schema (not shown but indicated). Covers the critical workflow dependency (discover_collections). Missing only edge-case handling (e.g., behavior when collection_name not found) to be perfect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting all 5 parameters (query, collection_name, top_k, filters, custom_tags) comprehensively. Description mentions 'by name' which aligns with collection_name, but primarily relies on schema. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Search) and resource (knowledge base/collection) clearly. Implies specificity with 'by name' but does not explicitly distinguish from sibling tool 'search' (e.g., when to use specific collection search vs general search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit prerequisite: 'Use discover_collections() first to find available collection names.' Provides clear workflow guidance. Would be perfect if it also clarified when to use this versus the 'search' sibling (e.g., scoped vs global search).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
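Before publishing, you can sanity-check the file locally. A minimal sketch (the email is the placeholder from the template above and must be replaced with the one on your Glama account):

```python
import json

# Local structural check for a glama.json file before serving it at
# /.well-known/glama.json. The email below is a placeholder.
glama_json = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

doc = json.loads(glama_json)  # raises ValueError if the JSON is malformed
assert doc["$schema"].startswith("https://glama.ai/")
assert all("email" in m for m in doc["maintainers"])
print("glama.json looks structurally valid")
```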
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.