MidOS - MCP Community Library
Server Details
Curated knowledge API for AI agents - skill packs, semantic search, validated patterns.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: MidOSresearch/midos
- GitHub Stars: 5
- Server Listing: MidOS Research Protocol
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

ask (Read-only, Idempotent)
Ask a question and get a synthesized answer from the knowledge base.
Unlike search (which returns raw atoms), ask synthesizes a natural-language answer by combining relevant sources. Use when you need an explanation, not just matching documents.
Args: question: Your question (e.g., "How do I implement caching in FastAPI?")
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question (e.g., "How do I implement caching in FastAPI?") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
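As a concrete illustration, a minimal sketch of the request body a client would send to invoke `ask` (MCP tool calls ride on JSON-RPC 2.0's `tools/call` method; the question string and request id are only examples):

```python
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Ask the knowledge base a question; the question text is illustrative.
body = make_tool_call("ask", {"question": "How do I implement caching in FastAPI?"})
```

In practice an MCP client SDK assembles this envelope for you; the sketch only shows the shape of the payload that carries the single `question` argument.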
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context not in annotations: it explains that the tool 'synthesizes' answers by 'combining relevant sources', revealing the internal mechanism. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with three distinct components: the core action, the sibling comparison, and the parameter documentation. Every sentence serves a purpose; there is no redundancy or filler. The information is front-loaded with the essential function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one simple parameter and an output schema exists (so return values need not be described), the description is complete. It covers purpose, behavioral differences from siblings, and parameter semantics, providing everything an agent needs to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates adequately by providing semantic meaning ('Your question') and a concrete example ('How do I implement caching in FastAPI?'). While it doesn't specify constraints like max length or format, the example provides sufficient guidance for a single string parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'synthesizes a natural-language answer' from the knowledge base, using specific verbs and resources. It explicitly distinguishes itself from the sibling 'search' tool by contrasting 'synthesized answer' with 'raw atoms', ensuring the agent understands the unique value proposition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use: 'Use when you need an explanation, not just matching documents.' This clearly delineates the boundary between this tool and the 'search' sibling, telling the agent exactly which use cases favor 'ask' over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_eureka (Read-only, Idempotent)
Get validated EUREKA discoveries — peer-reviewed insights with measured impact.
EUREKA items are the highest-quality knowledge in MidOS: each has passed quality gates, been validated by multiple sources, and includes measured ROI or performance improvements.
Returns: JSON array of EUREKA items with title, impact metrics, and content
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and idempotent operations. The description adds valuable behavioral context beyond annotations: it discloses the return format (JSON array with specific fields: title, impact metrics, content) and explains the data validation process (quality gates, multiple sources). It does not contradict the read-only annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear front-loading ('Get validated EUREKA discoveries'), followed by explanatory context about data quality, and ending with return value documentation. Every sentence serves a distinct purpose: defining the action, explaining the resource quality, and specifying the output format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no input parameters and an output schema exists (per context signals), the description is appropriately complete. It summarizes the return values adequately and explains the data source characteristics. A top score would additionally require operational details (pagination, limits), which are absent here but may be covered by the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema coverage (empty object). According to the baseline rules, this warrants a score of 4. The description appropriately does not invent parameters, and the absence of filtering is implicitly handled by the phrase 'Get validated EUREKA discoveries' suggesting a full or default set is returned.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'validated EUREKA discoveries' with specific characteristics (peer-reviewed, measured impact). It establishes what EUREKA items are (highest-quality knowledge with ROI metrics), implicitly distinguishing them from general search results or skills. However, it lacks explicit differentiation from sibling tools like 'search' or 'ask'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While not explicitly stating 'use this when...', the description provides clear context for usage by emphasizing the quality gates ('validated by multiple sources', 'measured ROI'). This allows an agent to infer this tool is appropriate when high-certainty, validated insights are needed versus exploratory search. No explicit alternatives or exclusions are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_skills (Read-only, Idempotent)
Get reusable skills (step-by-step procedures) for specific technologies.
Skills are validated, executable guides covering common tasks like deployment, testing, migration, and configuration. Filter by technology stack to find relevant skills.
Args: stack: Filter by technology (e.g. 'python', 'fastapi', 'react', 'kubernetes')
Returns: JSON array of skills with name, description, and stack compatibility
| Name | Required | Description | Default |
|---|---|---|---|
| stack | No | Filter by technology (e.g. 'python', 'fastapi', 'react', 'kubernetes') | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds valuable context that skills are 'validated, executable' rather than just documentation, and clarifies the return format ('JSON array of skills with name, description, and stack compatibility'), which helps the agent understand the data structure without contradicting the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear Args and Returns sections. It front-loads the core purpose and uses bulleted-style formatting effectively. It could be slightly more concise—the list of task types (deployment, testing, migration, configuration) is helpful but could be summarized as 'common DevOps tasks'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter retrieval tool with existing output schema and strong annotations, the description is complete. It covers the resource definition, parameter usage, return format, and content nature without unnecessary verbosity. No gaps remain for an agent to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing clear semantics for the 'stack' parameter ('Filter by technology') and concrete examples ('python', 'fastapi', 'react', 'kubernetes'). The Args section effectively documents the parameter that the schema leaves undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'reusable skills (step-by-step procedures)' with specific scope (deployment, testing, migration, configuration). However, it doesn't explicitly differentiate from the sibling 'search' tool, which could create ambiguity about when to use general search versus this specialized retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains how to use the filter ('Filter by technology stack') and what content to expect ('validated, executable guides'). However, it lacks explicit guidance on when to choose this over siblings like 'search' or 'ask', or prerequisites for using specific stack values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Read-only, Idempotent)
Search MidOS knowledge base for relevant information.
Use this as your FIRST tool to discover what knowledge is available. Returns ranked results with titles, snippets, and quality scores.
Args:
- query: Search query (keywords or topic)
- limit: Max results (1-20, default 5)
- domain: Filter by domain (engineering, security, architecture, devops, ai_ml)
Returns: JSON array of matching atoms with title, snippet, score, and source
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (1-20) | 5 |
| query | Yes | Search query (keywords or topic) | |
| domain | No | Filter by domain (engineering, security, architecture, devops, ai_ml) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety, while the description adds valuable output context: 'ranked results with titles, snippets, and quality scores' and 'JSON array of matching atoms.' Documents return structure beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Docstring-style structure with clear sections (purpose, usage, Args, Returns). Front-loaded with immediate purpose statement. Every section earns its place—Args compensates for schema gaps, Returns previews output format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage for a three-parameter search tool: purpose, workflow guidance, full parameter documentation (despite an empty schema), and a return value description. The existence of an output schema does not reduce the value of the textual return description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical compensation for 0% schema description coverage. Documents all 3 parameters with constraints: query semantics ('keywords or topic'), limit range ('1-20, default 5'), and domain filter with enumerated values ('engineering, security, architecture, devops, ai_ml').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb ('Search') + specific resource ('MidOS knowledge base') + scope ('relevant information'). Clearly distinguishes from siblings like 'ask' (conversational) and 'get_*' (direct retrieval) by positioning it as a discovery/search operation over a knowledge base.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this as your FIRST tool to discover what knowledge is available,' providing clear workflow sequencing. However, lacks explicit 'when not to use' guidance or named alternatives (e.g., when to use 'ask' instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stats (Read-only, Idempotent)
Get MidOS knowledge base statistics and health metrics.
Returns total atom count, breakdown by type and domain, top contributors, and system health indicators. Use to understand the scope of available knowledge.
Returns: JSON with counts by type, domain, contributor rankings, and health status
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint and idempotentHint (safe, repeatable reads). The description adds valuable context about the specific metrics returned (atom count by type/domain, contributor rankings) and output format (JSON), complementing the annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose, but contains redundancy: the 'Returns total atom count...' sentence largely duplicates the subsequent 'Returns:' block, which wastes space. Otherwise well-structured with clear sections.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (zero parameters, read-only operation) and the presence of an output schema, the description provides sufficient context. It summarizes the return structure adequately without needing exhaustive field documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters. Per the rubric, 0 parameters warrants a baseline score of 4. The description appropriately remains silent on parameters since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'MidOS knowledge base statistics and health metrics' with specific details on what data is returned (atom count, breakdowns, contributors). It implicitly distinguishes from siblings like 'ask', 'search', and 'get_*' by focusing on aggregate statistics rather than individual record retrieval, though explicit differentiation is not stated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides functional guidance ('Use to understand the scope of available knowledge'), indicating when the tool is appropriate. However, lacks explicit comparisons to siblings (e.g., 'use this instead of search when you need counts'), prerequisites, or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
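One way to generate the claim file for a statically served directory, as a minimal sketch (the `public/` path and the email are placeholders; adapt them to however your server exposes static files at `/.well-known/`):

```python
import json
import pathlib

# Placeholder directory and email; substitute your own deployment details.
well_known = pathlib.Path("public/.well-known")
well_known.mkdir(parents=True, exist_ok=True)

claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
(well_known / "glama.json").write_text(json.dumps(claim, indent=2))
```

After deploying, confirm the file is reachable at `https://<your-domain>/.well-known/glama.json` and served with an appropriate JSON content type.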
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.