Panya — peptide vendor matchmaker
Server Details
Peptide vendor matchmaker. 11-signal rubric, GLP-1 + 25 peptides, region-aware, free.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
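For agents that speak MCP directly, the connection itself is a few lines. Below is a minimal sketch using the official TypeScript SDK; the endpoint URL is a hypothetical placeholder, since the listing's URL field is not reproduced here.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical gateway endpoint; substitute the URL from this listing.
const transport = new StreamableHTTPClientTransport(
  new URL("https://glama.ai/mcp/example-panya-endpoint"),
);

const client = new Client({ name: "panya-demo", version: "1.0.0" });
await client.connect(transport);

// Should list the three tools scored below: panya.cite,
// panya.list_protocols, and panya.match_vendor.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```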
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 3 of 3 tools scored.
Each tool targets a distinct function: citing claims, listing protocols, and matching vendors. Their inputs and outputs are clearly differentiated, leaving no ambiguity.
All tools share the 'panya.' prefix and short verb-based names (cite, list_protocols, match_vendor). The naming is uniform and predictable.
Three tools is at the lower end of the typical range but covers the core workflows of citation, protocol planning, and vendor matching. It feels slightly minimal but still reasonable for the domain.
The set covers fact-checking, dosing plans, and vendor selection. Missing features like vendor detail retrieval or compound search are minor gaps that agents can work around.
Available Tools
3 tools

panya.cite
Get a Panya source citation
Given a claim or topic, returns the canonical Panya citation with source URL, evidence quality grade, and the canonical /questions or /compound page that explains it in depth. Useful for: fact-checking GLP-1 efficacy claims, sourcing trial data, building grounded peptide content.
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | The claim or topic to find Panya citations for. | |
| context | No | Optional surrounding context to disambiguate the claim. | |
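A hedged example call, reusing the connected client from the sketch above; the claim text is illustrative, not a verified Panya claim.

```typescript
// claim is required; context is optional and only disambiguates.
const citation = await client.callTool({
  name: "panya.cite",
  arguments: {
    claim: "semaglutide 2.4 mg produces roughly 15% mean weight loss",
    context: "68-week outcomes in adults without diabetes",
  },
});
console.log(citation.content);
```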
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It describes the output but omits behavioral aspects such as its read-only nature, error handling, and permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words, front-loaded with the core action and followed by use cases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
It covers purpose, output structure, and use cases adequately for a lookup tool, though it does not mention error conditions or what happens when no matching claim is found.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3; the description adds little beyond the schema descriptions, only restating 'claim or topic' and 'optional context to disambiguate'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a canonical Panya citation with source URL, evidence grade, and a canonical page, distinguishing it from sibling tools like list_protocols and match_vendor.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides example use cases (fact-checking, sourcing data) but does not explicitly state when not to use or mention alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
panya.list_protocols
List protocol variants for a compound
Given a compound (tirzepatide, semaglutide, BPC-157, TB-500, GHK-Cu, MOTS-c, retatrutide, others) and protocol variant (slow / standard / aggressive), returns the canonical 12-week titration schedule with weekly mg doses, total mg, total USD by channel, and citations. Useful for: ramp planning, dose-cost projection, multi-channel comparison.
| Name | Required | Description | Default |
|---|---|---|---|
| channel | No | Sourcing channel for cost projection. Thailand-clinic is the cheapest legitimate route at ~$7/mg. | thailand-clinic |
| compound | Yes | Compound name. tirzepatide is fully implemented; others route to a clinician-path stub. | tirzepatide |
| protocol_variant | No | Titration aggressiveness. Slow = SURMOUNT-1 minimum dose, standard = SURMOUNT-1 protocol, aggressive = max-tolerated push. | standard |
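As a sketch of how these parameters compose, again assuming the connected client from the earlier example: only compound is required, and omitting channel and protocol_variant should fall back to the documented defaults.

```typescript
// Explicit values shown for clarity; channel and protocol_variant
// could be omitted to take their defaults from the table above.
const protocol = await client.callTool({
  name: "panya.list_protocols",
  arguments: {
    compound: "tirzepatide",     // the fully implemented path
    protocol_variant: "slow",    // SURMOUNT-1 minimum-dose ramp
    channel: "thailand-clinic",  // ~$7/mg cost basis per the schema
  },
});
console.log(protocol.content);
```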
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes output behavior (returns schedule, costs, citations) and notes the partial implementation for non-tirzepatide compounds. It lacks details on error handling, data freshness, and idempotency, but adequately covers core behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first explains the input-output mapping, the second lists use cases. Front-loaded with key information; no redundancy or irrelevant content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three parameters and no output schema, the description covers the output components and use cases. It omits the exact output format and an example, but is sufficient for understanding. Minor gap: no mention of error handling when a compound is not implemented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for every parameter. The tool description adds context beyond the schema: for channel, 'thailand-clinic is the cheapest legitimate route at ~$7/mg'; for compound, 'tirzepatide is fully implemented; others route to a clinician-path stub'; for protocol_variant, the mapping to clinical trial variants. These are strong additions beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'returns' and the resource 'canonical 12-week titration schedule' with specific output items (weekly mg doses, total mg, total USD by channel, citations). It distinguishes itself from sibling tools by focusing on protocol listing rather than citation or vendor matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists use cases: 'ramp planning, dose-cost projection, multi-channel comparison.' It notes that tirzepatide is fully implemented while others route to a stub, flagging a limitation. However, it offers no explicit when-not-to-use guidance and names no alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
panya.match_vendor
Match a peptide vendor
Given a goal (metabolic, longevity, recovery, cosmetic, cognitive, sleep, libido, hormonal, immune, gut) and region (ISO-3166-1 alpha-2 or Panya legacy code), returns the top 3 vendors from Panya's 11-signal-vetted catalog with rubric scores, monthly USD price bands, supply state, risk band, and citation links. Useful for: cross-border GLP-1 cost comparison, Thailand peptide sourcing, Bali medical-tourism research, UK NHS Wegovy waitlist alternatives.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | Primary user goal cluster. | metabolic |
| budget | No | Budget tier. 'light' is research-chem-friendly, 'meaningful' is mid-tier clinic, 'full' is brand-name + insurance. | meaningful |
| region | No | Region where the user is currently located. Accepts ISO-3166-1 alpha-2 (GB, AE, SE) or Panya legacy codes (TH, UK, EU, SCANDI, UAE). | TH |
| urgency | No | How decided the user is about starting now. | researched |
| compound | No | Compound to match against. tirzepatide, semaglutide, retatrutide, or any non-GLP-1 peptide name. | tirzepatide |
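All five parameters are optional, so the simplest useful call overrides only what differs from the defaults. A sketch, using the same hypothetical client as above:

```typescript
// Override only what differs from the defaults:
// goal "metabolic" and urgency "researched" are left implicit here.
const match = await client.callTool({
  name: "panya.match_vendor",
  arguments: {
    compound: "semaglutide",
    region: "GB",      // ISO-3166-1 alpha-2; legacy "UK" is also accepted
    budget: "light",   // research-chem-friendly tier
  },
});
console.log(match.content); // top 3 vendors with rubric scores and risk bands
```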
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the catalog source and outputs (top 3 vendors, scores, prices), but omits potential limitations like update frequency or region-specific constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first defines the core behavior, the second adds highly specific usage examples. No redundant words; information-dense and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema or annotations, the description covers input parameters and output fields and provides usage examples. It misses details on edge cases and failure modes, but is adequate for a matching tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% parameter descriptions, so the baseline is 3. The description adds minor context (e.g., budget tier meanings) but mostly paraphrases the schema, with no significant extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool matches vendors to goals and regions, listing specific output fields. It distinguishes itself from siblings like panya.cite (citation tool) and panya.list_protocols (protocol listing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete usage examples (cross-border comparison, sourcing, medical tourism) that clarify when to use it. It lacks explicit when-not-to-use guidance and alternatives, but the examples cover typical scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
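Before waiting on automatic detection, you can sanity-check the published file yourself. A minimal sketch in TypeScript; the domain is a placeholder for your own.

```typescript
// Fetch the well-known file and confirm it parses with a maintainer email.
const res = await fetch("https://your-domain.example/.well-known/glama.json");
if (!res.ok) throw new Error(`glama.json not reachable: HTTP ${res.status}`);

const doc = await res.json();
const emails = (doc.maintainers ?? []).map((m: { email: string }) => m.email);
console.log("maintainer emails:", emails); // must match your Glama account email
```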
Claiming the connector lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons (a quick triage sketch follows this list):
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
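A rough triage sketch, under the same SDK assumptions as the connection example earlier: the failure mode usually narrows down which of the three causes applies.

```typescript
// A connection refusal or DNS error points at an outage or a wrong URL;
// an HTTP 401/403 surfaced by the transport points at missing or invalid credentials.
try {
  await client.connect(transport);
  console.log("Reachable: the connector should report healthy.");
} catch (err) {
  console.error("Connect failed; check server status, URL, and credentials:", err);
}
```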
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.