B.O.N.S.A.I. Health Intelligence API
Server Details
Lifestyle medicine health intelligence via x402 micropayments. 637+ peer-reviewed references.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: rabyavalla/bonsai-api
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 9 of 9 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes, but there is overlap between lifestyle_query and query_nutrition_topic, and between check_interactions and drug_safety, though descriptions help differentiate by scope.
Mixed naming conventions: some tools use verbs (check, get, interpret, query) and some use nouns (drug_safety, marker_reference). The 'glp_' prefix is applied consistently to the GLP-1 tools, but no other tools use a prefix.
9 tools is well-scoped for the health intelligence domain, covering drug interactions, protocols, lab interpretation, and general queries without being excessive.
Covers major areas comprehensively (interactions, protocols, labs, queries), though the two query tools could be merged, and a tool for patient-specific recommendations might be missing.
Available Tools
9 tools

check_interactions (Grade: A, Read-only)
Check a list of medications for food-drug interactions, nutrient depletions, and timing-critical dosing requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| medications | Yes | | |
| foods_or_supplements | No | | |
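Neither parameter carries a schema description, so an agent has to infer the payload shape from the names alone. A hypothetical MCP `tools/call` request for this tool (the medication and supplement values are illustrative assumptions, not documented examples) might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "check_interactions",
    "arguments": {
      "medications": ["warfarin", "levothyroxine"],
      "foods_or_supplements": ["grapefruit juice", "calcium"]
    }
  }
}
```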
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds value by specifying the types of checks performed (food-drug interactions, nutrient depletions, timing-critical dosing), which goes beyond the annotation and clarifies the tool's read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the core purpose. Every word adds value, and there is no fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 parameters, no output schema, and read-only status, the description is adequate but not complete. It fails to explain the optional parameter and return behavior, but it covers the main purpose sufficiently for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate but only vaguely mentions a 'list of medications'. It does not explain the 'medications' array parameter, nor the optional 'foods_or_supplements' parameter. No parameter semantics are added beyond what the schema implies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it checks medications for food-drug interactions, nutrient depletions, and timing-critical dosing. The verb 'Check' and resource 'list of medications' are specific, distinguishing it from siblings like 'drug_safety' and 'query_nutrition_topic'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'drug_safety'. It does not mention exclusions, prerequisites, or scenarios where this tool is preferred, leaving the agent without decision-making context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
drug_safety (Grade: B, Read-only)
Return the food-drug safety profile for a single medication.
| Name | Required | Description | Default |
|---|---|---|---|
| medication | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only (readOnlyHint=true) and closed-world (openWorldHint=false). The description adds that it returns a 'food-drug safety profile' but does not elaborate on the profile's structure, error handling, or behavior for unknown medications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no extraneous words. Every word is necessary to convey the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple (1 parameter, no output schema), and the description covers the basic function. However, it lacks detail on what a 'food-drug safety profile' includes, and without an output schema, the agent might not understand the return format. A more complete description would add an example or mention of the profile's contents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must clarify the parameter. It only says 'for a single medication' but does not indicate format, allowed values, or examples. This adds minimal value beyond the schema's type string.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'food-drug safety profile' for a single medication, which precisely identifies the action and the resource. It distinguishes itself from siblings like 'check_interactions' (likely multi-drug) and 'query_nutrition_topic'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, limitations, or suggest when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_protocol (Grade: A, Read-only)
Generate an evidence-based whole-food plant-based protocol for one of 47 chronic conditions. Returns therapeutic foods, daily meal structure, foods to minimize, monitoring markers, and clinical citations.
| Name | Required | Description | Default |
|---|---|---|---|
| condition | Yes | | |
| user_context | No | | |
| duration_weeks | No | | |
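Because none of the three parameters are described in the schema, the arguments below are a sketch only: the condition string, the user_context fields, and the duration_weeks value are all illustrative assumptions rather than documented examples.

```json
{
  "name": "get_protocol",
  "arguments": {
    "condition": "type_2_diabetes",
    "user_context": { "age": 54, "sex": "female" },
    "duration_weeks": 12
  }
}
```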
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, and the verb 'Generate' is consistent with that: the protocol is derived per request, with no side effects. The description adds value by specifying the output components, though it omits details on error handling or rate limits. With annotations in place, the bar is lower, and this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first defines action and scope, second lists output. It is efficient and front-loaded, though the mention of '47 conditions' is redundant with the schema's enum. No fluff, but could be slightly tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's purpose and output content adequately, but lacks explanations for user_context and duration_weeks, and does not describe the response structure beyond categories. Without an output schema, more detail would be helpful. Overall, adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the 'condition' parameter by mentioning '47 chronic conditions', but does not explain 'user_context' (a nested object) or 'duration_weeks' (integer with default). More than half the parameters lack guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates an evidence-based whole-food plant-based protocol for one of 47 chronic conditions, listing output specifics (therapeutic foods, daily meal structure, etc.). This precisely identifies the action and resource, distinguishing it from sibling tools like 'glp_protocol' or 'lifestyle_query'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives or when not to use it. No mention of prerequisites, context, or exclusions. Sibling tools exist but no comparative advice is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
glp_phase_guide (Grade: A, Read-only)
Return the protocol overview for a specific GLP-1 therapy phase (starting, titrating, maintenance, tapering, post_drug).
| Name | Required | Description | Default |
|---|---|---|---|
| glp1_phase | Yes | | |
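The five phase values named in the description (starting, titrating, maintenance, tapering, post_drug) are the only documented constraint, so a call sketch is straightforward; the surrounding arguments envelope follows the MCP tools/call convention:

```json
{
  "name": "glp_phase_guide",
  "arguments": { "glp1_phase": "titrating" }
}
```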
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, so the description's 'Return' is consistent. The description adds context by naming the phases, but does not disclose additional behavioral traits like response format, error handling, or rate limits. Given the annotation coverage, the description adds modest value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, front-loaded with the purpose. No unnecessary words; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple: one parameter, no output schema. The description fully explains the tool's function and the parameter's role. No additional information is needed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists the enum values (starting, titrating, maintenance, tapering, post_drug), adding meaning beyond the raw schema. However, it does not explain what each phase entails, which could be helpful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Return' and the resource 'protocol overview' for a specific GLP-1 therapy phase, listing all five phases. This distinguishes it from sibling tools like 'get_protocol' or 'glp_protocol', which likely return broader or different information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use the tool: when a protocol overview for a specific phase is needed. However, it does not explicitly state when not to use it or provide alternatives from the sibling list (e.g., 'get_protocol', 'glp_protocol'). The phase enumeration in the schema helps guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
glp_protocol (Grade: B, Read-only)
Generate a GLP-aware nutrition protocol composed on top of any active chronic condition. Returns protein floor (1.2-1.6 g/kg), fiber ramp schedule, GI tolerance interventions, resistance training prescription, hydration target, and phase-specific guidance.
| Name | Required | Description | Default |
|---|---|---|---|
| condition | No | | |
| glp1_phase | Yes | | |
| user_context | No | | |
| glp1_medication | Yes | | |
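With four undescribed parameters, including a nested user_context object, the shape of a valid call is guesswork. The sketch below assumes string values for the required fields and a flat object for user_context; every value shown is an illustrative assumption, not documented behavior:

```json
{
  "name": "glp_protocol",
  "arguments": {
    "glp1_medication": "semaglutide",
    "glp1_phase": "maintenance",
    "condition": "hypertension",
    "user_context": { "weight_kg": 92 }
  }
}
```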
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only behavior. The description adds output details but does not disclose additional behavioral traits like authentication needs or rate limits. Since annotations cover safety, a mid-score is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the core purpose and lists specific outputs. Every word is necessary; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 4 parameters including a nested object and no output schema, the description omits critical details like how condition and user_context affect the output, or phase-specific guidance. The tool is underspecified for reliable use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description does not explain any parameter meaning. The 4 parameters (including condition and user_context) are left undocumented, failing to compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a GLP-aware nutrition protocol and lists specific outputs. However, it does not explicitly differentiate from sibling tools like 'get_protocol' or 'glp_phase_guide', missing the top tier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a GLP-aware protocol based on chronic conditions is needed, but provides no explicit when-to-use, when-not-to-use, or alternative tools. The guidance is adequate but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
interpret_labs (Grade: A, Read-only)
Interpret a panel of lab values against ACLM-optimized reference ranges. Returns risk classification per marker, lifestyle interventions, and medication deprescription signals.
| Name | Required | Description | Default |
|---|---|---|---|
| lab_values | Yes | Key-value pairs of biomarker names and values. Common keys: hba1c, fasting_glucose, fasting_insulin, ldl, hdl, triglycerides, apob, lp_a, hscrp, vitamin_d, b12, ferritin, tsh, free_t4, free_t3. | |
| health_goals | No | | |
| current_medications | No | | |
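Here the lab_values keys come straight from the schema description, so that part is grounded; the shapes assumed for health_goals and current_medications (arrays of strings) are guesses, since those two parameters are undescribed:

```json
{
  "name": "interpret_labs",
  "arguments": {
    "lab_values": {
      "hba1c": 5.9,
      "ldl": 128,
      "apob": 102,
      "vitamin_d": 22
    },
    "health_goals": ["reverse prediabetes"],
    "current_medications": ["atorvastatin"]
  }
}
```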
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, making the tool's safe nature clear. The description adds value by detailing the output structure (risk per marker, interventions, deprescription signals) and the reference range source (ACLM-optimized). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core action and outcomes. Every word adds value; no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (3 params, nested objects) and absence of an output schema, the description provides a solid high-level understanding. It covers inputs (lab panel) and outputs (risk, interventions, deprescription), but could elaborate on how optional inputs influence interpretation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (33%) and the description does not compensate. It does not explain the purpose or format of 'health_goals' or 'current_medications', leaving a gap that the schema alone cannot fill. The lab_values parameter is superficially described in the schema but not elaborated in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool interprets lab values against ACLM-optimized reference ranges and returns specific outputs (risk classification, lifestyle interventions, deprescription signals). It distinguishes itself from siblings like 'marker_reference' which likely only returns reference ranges.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for lab interpretation but lacks explicit guidance on when to use this tool versus siblings like 'check_interactions' or 'drug_safety'. No 'when not to use' or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lifestyle_query (Grade: A, Read-only)
Ask any lifestyle medicine question. Returns evidence-based answer with citations from a 200-chunk knowledge base spanning ACLM 6-pillars, B.O.N.S.A.I. nutrition, drug-food interactions, lab interpretation, GLP guidance, wearables, and CGM signal interpretation.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| pillar | No | | any |
| max_results | No | | |
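A sketch of a call, assuming pillar accepts the listed default "any" and max_results caps the number of knowledge-base chunks returned (the query text and max_results value are illustrative assumptions):

```json
{
  "name": "lifestyle_query",
  "arguments": {
    "query": "How should post-meal CGM glucose spikes be interpreted?",
    "pillar": "any",
    "max_results": 3
  }
}
```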
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm readOnlyHint=true (safe read) and openWorldHint=false (not exhaustive). The description adds context: knowledge base size (200 chunks), citation behavior, and coverage of specific topics, going beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys purpose, output, and knowledge scope. No filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return format (answer with citations). It covers the domain comprehensively but omits details about parameter usage like max_results and pillar filtering, which are partially inferable from schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so description must compensate. It partially describes the 'pillar' enum by listing example values in the topic list, but does not explain 'max_results' or 'query' beyond their names. This leaves some ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers lifestyle medicine questions and returns evidence-based answers with citations. It lists specific knowledge domains (ACLM 6-pillars, B.O.N.S.A.I., drug-food interactions, etc.), distinguishing it from specialized siblings like drug_safety or interpret_labs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies broad usage but does not explicitly exclude use cases that belong to sibling tools. However, the listed domains naturally guide users to this tool for general lifestyle queries rather than specialized ones.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
marker_reference (Grade: A, Read-only)
Look up the ACLM-optimized reference range and lifestyle intervention plan for a single biomarker (e.g., apob, lp_a, hscrp, hba1c, fasting_insulin, vitamin_d).
| Name | Required | Description | Default |
|---|---|---|---|
| marker | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds that the tool is ACLM-optimized and returns both a reference range and a lifestyle plan, which goes beyond the annotation. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence that efficiently conveys the tool's action and key details without unnecessary words. Every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's purpose and valid input examples. Given the absence of an output schema, it could briefly mention the response structure, but for a simple lookup tool, the current level is sufficient and clear.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one string parameter 'marker' with 0% description coverage. The description compensates by providing concrete examples of valid biomarkers and clarifying it accepts a single biomarker, adding meaning that the schema lacks.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to look up an ACLM-optimized reference range and lifestyle intervention plan for a single biomarker. It provides specific examples (e.g., apob, lp_a) which disambiguates from sibling tools like interpret_labs or lifestyle_query.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving reference ranges and lifestyle plans for one biomarker. While it doesn't explicitly state when not to use or name alternatives, the narrow purpose and examples make the context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_nutrition_topic (Grade: B, Read-only)
Ask a nutrition or food-as-medicine question. Returns evidence-based answer from the 200-chunk lifestyle medicine knowledge base.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| max_results | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description's mention of 'evidence-based answer from the 200-chunk knowledge base' adds limited context about scope. No contradictions, but little additional behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that is front-loaded with the action. Every word is necessary and contributes to clarity without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given two parameters and no output schema, the description lacks details on return format, behavior of `max_results`, or response structure. It is insufficient for complete understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain either parameter (`query` or `max_results`). It adds no meaning beyond the schema, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool is for asking nutrition or food-as-medicine questions and returns evidence-based answers from a specific knowledge base. It clearly distinguishes from sibling tools like 'lifestyle_query' by specifying the domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for nutrition-related queries but does not provide explicit when-not-to-use or alternative tools. The context is clear but lacks exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.