
Server Details

Biotech rNPV/PoS engine for AI agents. Signed exports, evidence register, asset landscape.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (A)

Average 4.1/5 across 9 of 9 tools scored.

Server Coherence (A)
Disambiguation: 5/5

Each tool targets a distinct data type (dossier, evidence, methodology, project, scenario, benchmarks, landscape, export) or action (list, verify). No two tools have overlapping purposes; an agent can reliably select the correct tool based on the desired information.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case: get, list, query, verify. The verbs are descriptive and the nouns clearly indicate the resource, making the API predictable and easy to navigate.

Tool Count: 5/5

With 9 tools, the surface is appropriately sized for a read-only data access API. Each tool serves a specific need without redundancy or bloat, and the count aligns well with the domain's complexity.

Completeness: 4/5

The tools cover the main read operations for scenarios, projects, evidence, and verification. A minor gap is the lack of a list_projects tool, which would allow discovering available project IDs. However, the core workflows are covered, and the missing tool is a small oversight.

Available Tools

9 tools
get_dossier (A)
Read-only · Idempotent

Fetch the structured dossier JSON for a scenario — the same content rendered in the IC Dossier PDF/Excel exports, in machine-readable form. Includes verdict, drivers, assumptions, evidence summary, and (if available) signed-export hashes pointing to the latest signed artifacts.

Parameters (JSON Schema)
scenario_id (required): UUID of the scenario to fetch the IC dossier payload for.

Output Schema

data (required): Structured dossier payload mirroring the IC Dossier PDF/Excel exports. Top-level keys include verdict, drivers, assumptions, evidence, and (when available) signed_export hashes. See get_methodology('evidence-standards') for the full structure.
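To make the calling convention concrete, here is a minimal sketch of the JSON-RPC 2.0 tools/call request an MCP client would send for this tool over the Streamable HTTP transport; the scenario UUID is a made-up placeholder, and session negotiation is left to the client library.

```python
# Minimal sketch of an MCP tools/call request for get_dossier.
# The scenario_id value is a hypothetical placeholder UUID; transport
# details (session headers, SSE handling) belong to the MCP client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_dossier",
        "arguments": {
            "scenario_id": "3f9c2a1e-8b7d-4c21-9e5a-1a2b3c4d5e6f",
        },
    },
}
```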
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false. Description adds value by detailing the returned data (verdict, drivers, assumptions, evidence summary, signed-export hashes) beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states core purpose, second enriches with content details. No redundant information; every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with one parameter and an output schema, the description adequately lists what the dossier contains. Minor gap: no mention of behavior when scenario_id is invalid or missing, but overall complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the single parameter. Description does not add new meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it fetches structured dossier JSON for a scenario, listing specific contents (verdict, drivers, etc.) and links to PDF/Excel exports. It distinguishes from sibling tools like get_evidence or get_scenario by specifying that it retrieves the full dossier in JSON form.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like get_evidence or get_scenario. The context implies it is for dossier data, but the description offers no when-not-to-use guidance or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_evidence (A)
Read-only · Idempotent

Fetch the evidence register entries for a scenario — the citations, sources, and supporting documents the analyst attached to back the assumptions. Use this when an analysis question references 'what's the evidence for X?'

Parameters (JSON Schema)
scenario_id (required): UUID of the scenario whose evidence register entries to fetch.

Output Schema

data (required): Array of evidence-register entries. Each entry has fields like assumption_path, source_url, source_date, excerpt, freshness, confidence — see /methodology/evidence-standards for the canonical schema.
Behavior: 2/5

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, destructiveHint. The description only adds that evidence includes citations, sources, etc., which is content-oriented rather than behavioral. No additional behavioral context (e.g., authentication, rate limits, or return details) is provided.

Conciseness: 5/5

Two sentences, no redundant words. The purpose is stated first, followed by a usage example. Highly efficient.

Completeness: 4/5

Given a simple tool (1 param, no nested objects), annotations that cover safety, and an existing output schema, the description explains what the evidence entries contain. It lacks mention of potential pagination or ordering, but is overall complete for this complexity.

Parameters: 3/5

Schema coverage is 100% with a clear description of scenario_id. The tool description adds context about what evidence is, but does not significantly enhance parameter meaning beyond the schema. Baseline 3 with slight improvement.

Purpose: 4/5

Clearly states the action (fetch) and resource (evidence register entries for a scenario). Provides concrete examples of content (citations, sources, documents). Does not explicitly differentiate from sibling tools like get_dossier or get_methodology, but the description is specific enough.

Usage Guidelines: 4/5

Directly tells when to use the tool ('when an analysis question references "what's the evidence for X?"'). Does not mention when not to use or list alternatives, but the context is clear.

get_methodology (A)
Read-only · Idempotent

Fetch a PhaseFolio methodology section's full content (backtest, PoS calibration, IRA framework, evidence standards, network benchmarks). Designed for citation: each section has a stable URL and a methodology version identifier you can reference in analysis output.

Parameters (JSON Schema)
section (required): Which methodology section to fetch. 'backtest' covers the validation methodology and AUC/CI numbers; 'pos-calibration' covers the indication × modality × biomarker matrix and multipliers; 'ira-framework' covers the Year-9/Year-13 MFP cliff modeling; 'evidence-standards' covers source tiers and freshness rules; 'network-benchmarks' covers anonymization and cohort cuts.

Output Schema

data (required)
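As a sketch of the argument shape, assuming only the enum values listed in the parameter table above:

```python
# Hypothetical arguments for get_methodology; 'section' must be one of
# the five enum values from the parameter description.
arguments = {"section": "pos-calibration"}

# Other valid sections: "backtest", "ira-framework",
# "evidence-standards", "network-benchmarks".
```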
Behavior: 3/5

Annotations already declare readOnlyHint and idempotentHint. The description adds context about the stable URL and version identifier, which are useful for understanding mutability and reusability. However, it offers no additional behavioral traits beyond what the annotations and schema provide.

Conciseness: 5/5

Two sentences, front-loaded with purpose and then usage context. Every sentence adds value with zero waste.

Completeness: 5/5

Given the presence of an output schema and comprehensive annotations, the description covers the key aspects (action, resource, unique feature for citation). No gaps identified for this simple single-parameter tool.

Parameters: 3/5

Schema description coverage is 100% with detailed enum descriptions. The tool description adds context about full content and citation, but does not significantly enhance understanding of the parameter beyond what the schema already provides.

Purpose: 5/5

The description clearly states the specific action ('fetch') and resource ('PhaseFolio methodology section's full content'), and distinguishes from siblings by emphasizing the citation-friendly design with stable URL and version identifier.

Usage Guidelines: 4/5

The description implies when to use the tool (for citation or incorporating methodology details into analysis) but does not explicitly mention when not to use or list alternatives. The context from sibling tool names provides implicit differentiation.

get_project (A)
Read-only · Idempotent

Fetch a PhaseFolio project's metadata: indication, sub-indication, modality, biomarker, asset name, and stage at entry. Use this when you need to ground an analysis question in the project's clinical context before fetching scenarios.

Parameters (JSON Schema)
project_id (required): UUID of the PhaseFolio project. Get from the project URL in the dashboard or from a prior list_scenarios call.

Output Schema

data (required)
Behavior: 3/5

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false, so the description adds little behavioral context beyond stating it fetches metadata. No contradiction with annotations.

Conciseness: 5/5

Two concise sentences with front-loaded key information. No extraneous words.

Completeness: 4/5

Given the tool's simplicity (one parameter, output schema exists, annotations complete), the description provides sufficient purpose and usage context. Could have mentioned output schema details but is adequate.

Parameters: 3/5

Input schema has 100% description coverage for project_id, so the description does not add new parameter meaning. It lists output fields, but that is output semantics, not parameter semantics.

Purpose: 5/5

Description clearly states the action ('Fetch a PhaseFolio project's metadata') and lists specific fields returned (indication, sub-indication, etc.). It distinguishes from sibling 'get_*' tools by targeting project metadata specifically.

Usage Guidelines: 5/5

Explicitly advises when to use this tool: 'Use this when you need to ground an analysis question in the project's clinical context before fetching scenarios.' This provides clear context and hints at workflow ordering.

get_scenario (A)
Read-only · Idempotent

Fetch a PhaseFolio scenario's complete inputs (stage costs, durations, PoS, commercial assumptions, IRA settings) and computed outputs (eNPV, rNPV, cumulative PoS, per-stage breakdown, top sensitivity drivers). This is the authoritative engine output — every signed export ties back to this payload.

Parameters (JSON Schema)
scenario_id (required): UUID of the scenario. Get from list_scenarios or the scenario URL in the dashboard.

Output Schema

data (required)
Behavior: 4/5

The description adds meaningful context beyond the annotations: it specifies that the tool returns both inputs (stage costs, durations, etc.) and computed outputs (eNPV, rNPV, etc.), and declares it as the authoritative source for signed exports. This explains the behavioral significance of the fetch, complementing the readOnlyHint and idempotentHint annotations.

Conciseness: 5/5

Two sentences deliver all necessary information: what is fetched (inputs and outputs) and why it matters (authoritative engine output). No extraneous words, perfectly front-loaded.

Completeness: 5/5

With a single parameter, an output schema present, and comprehensive annotations, the description is complete. It clarifies the tool's role among siblings (authoritative source for exports) and covers all essential aspects without omission.

Parameters: 3/5

The input schema already describes the only parameter (scenario_id) completely, with format and description. The tool description does not add any further semantics about the parameter beyond what is in the schema, which is acceptable given 100% schema coverage.

Purpose: 4/5

The description clearly states the action ('Fetch') and the resource ('PhaseFolio scenario') along with the inputs and outputs. It conveys the tool's role as the authoritative engine output, but does not explicitly differentiate it from sibling tools like list_scenarios or get_dossier, which might also return scenario-related data.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus its siblings (e.g., list_scenarios for listing, verify_export for export verification). It does not mention prerequisites, limitations, or use cases, leaving the agent to infer appropriateness without context.

list_scenarios (A)
Read-only · Idempotent

List all scenarios in a PhaseFolio project. Returns scenario IDs, names, created/updated timestamps, and top-line eNPV/rNPV. Use this to discover what scenarios exist before fetching one in detail.

Parameters (JSON Schema)
limit (required): Maximum number of scenarios to return (1–200). Defaults to 50; raise for projects with many what-if branches.
project_id (required): UUID of the parent project whose scenarios to list.

Output Schema

data (required): Scenarios in the project, newest first.
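A minimal sketch of the discover-then-fetch workflow the description recommends; call_tool is a hypothetical stand-in for whatever helper your MCP client exposes for tools/call, and the project UUID is a placeholder.

```python
# Hypothetical workflow: list scenarios, then fetch the newest in detail.
# call_tool() is an assumed helper, not part of this server's API.
scenarios = call_tool("list_scenarios", {
    "project_id": "7b2d4f10-91ce-4a8f-b3d6-0e1f2a3b4c5d",  # placeholder UUID
    "limit": 50,  # 1-200; the schema notes 50 as the default
})["data"]

newest = scenarios[0]  # output schema says newest first
# The scenario-ID field name is assumed here; check the output schema.
detail = call_tool("get_scenario", {"scenario_id": newest["id"]})
```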
Behavior: 4/5

Annotations already declare readOnlyHint, idempotentHint, destructiveHint=false. The description adds useful context about the return content (scenario IDs, names, timestamps, eNPV/rNPV), which goes beyond the annotations.

Conciseness: 5/5

Three concise sentences that efficiently convey purpose, return data, and usage guidance. Every word adds value.

Completeness: 5/5

Given the simple parameter set (2 required, fully described), presence of an output schema, and sufficient annotations, the description fully covers what an agent needs to invoke the tool correctly.

Parameters: 3/5

Input schema has 100% description coverage, with both parameters (project_id, limit) fully described. The description adds no additional parameter information beyond the schema; the baseline score applies.

Purpose: 5/5

The description clearly states 'List all scenarios in a PhaseFolio project', specifies returned data (IDs, names, timestamps, eNPV/rNPV), and distinguishes from the sibling 'get_scenario' by indicating its use for discovery before fetching details.

Usage Guidelines: 5/5

Explicitly advises 'Use this to discover what scenarios exist before fetching one in detail', providing clear context for when to use this tool versus the sibling 'get_scenario'.

query_benchmarks (A)
Read-only · Idempotent

Query PhaseFolio's network benchmarks — anonymized aggregate statistics across the network of scenarios (PoS by indication × modality, cost distributions, duration percentiles). L1 tier (no auth) returns lagged, low-granularity headlines; L3 tier (bearer auth) returns granular slices with biomarker stratification.

Parameters (JSON Schema)
metric (required): Which benchmark family to return: 'pos' (probability of success), 'cost' (per-stage cost distributions), 'duration' (per-stage duration percentiles), or 'all'.
modality (optional): Modality filter ('Antibody', 'Small Molecule', 'Cell Therapy', 'Gene Therapy', etc.). Omit to return cross-modality stats.
indication (required): Indication name (e.g. 'NSCLC', 'Rheumatoid Arthritis', 'Atopic Dermatitis'). Free-form; the engine fuzz-matches to the canonical indication taxonomy.

Output Schema

data (required): L1 (no-auth) returns a versioned headline set; L3 (bearer) will return granular slices once enough orgs are on the network.
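For illustration, a hedged example of the three arguments; the indication string is free-form and fuzz-matched to the canonical taxonomy, per the parameter notes.

```python
# Hypothetical arguments for query_benchmarks.
arguments = {
    "metric": "pos",          # 'pos', 'cost', 'duration', or 'all'
    "indication": "NSCLC",    # free-form; engine fuzz-matches
    "modality": "Antibody",   # optional; omit for cross-modality stats
}
```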
Behavior: 5/5

The description adds significant behavioral context beyond annotations (readOnlyHint, idempotentHint), including access tier behavior, data lag, granularity differences, and biomarker stratification for L3. No contradictions with annotations.

Conciseness: 5/5

Two sentences with no wasted words. The first sentence defines the core function, the second adds crucial access-tier behavior. Front-loaded and efficient.

Completeness: 5/5

Given the existence of an output schema (rendering a return value description unnecessary), the description covers purpose, parameters, behavioral nuances, and authorization. For a tool with 3 parameters and good annotations, it is fully complete.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining free-form indication fuzz-matching and tying parameters to output differences across tiers (e.g., biomarker stratification for L3). This enriches parameter understanding beyond schema descriptions.

Purpose: 5/5

The description clearly states the tool queries PhaseFolio's network benchmarks and specifies the type of data returned (anonymized aggregate statistics, PoS by indication × modality, cost distributions, duration percentiles). It distinctly separates this from sibling tools (get_dossier, get_evidence, etc.) and includes access tier differentiation.

Usage Guidelines: 4/5

The description provides clear guidance on when to use L1 (no auth, low-granularity) vs L3 (bearer auth, granular slices with biomarker stratification). Though it doesn't explicitly list when not to use, the context is sufficient for the agent to choose based on authorization level and data needs.

query_landscape (A)
Read-only · Idempotent

Fetch the asset-anchored competitive landscape for a project — comparable trials, competing programs, sponsor activity, and biomarker overlap. Sourced from CT.gov + FDA + PhaseFolio's curated enrichment. Use when an analysis question asks 'who else is working on this?' or 'what's the competitive context?'

Parameters (JSON Schema)
project_id (required): UUID of the project to anchor the landscape on. Indication × modality × biomarker filters derive from the project.
phase_filter (required): Optional list of clinical phases to include (e.g. ['PHASE2','PHASE3']). Empty array means all phases.
recency_years (required): How many years back to look (1–50). Defaults to 10 — broad enough to catch development-stage peers without polluting with stale failures.

Output Schema

data (required): Asset-anchored landscape payload: kpis (sponsor counts, trial counts, etc.), comparable_trials[] (NCT-keyed), competing_programs[], biomarker_overlap. Filtered by the project's indication × modality × biomarker.
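A sketch of the arguments under the constraints stated above; the project UUID is a placeholder.

```python
# Hypothetical arguments for query_landscape.
arguments = {
    "project_id": "9e1a6c3b-57df-4b02-a8c1-2d3e4f5a6b7c",  # placeholder UUID
    "phase_filter": ["PHASE2", "PHASE3"],  # [] would include all phases
    "recency_years": 10,  # 1-50; 10 is the documented default
}
```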
Behavior: 4/5

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false. The description adds value by specifying data sources (CT.gov, FDA, PhaseFolio enrichment), which provides behavioral context beyond annotations. No contradictions.

Conciseness: 5/5

The description is three concise sentences that front-load the main purpose, name the data sources, and include the intended use case. Every word serves a purpose; no extraneous content.

Completeness: 4/5

Given the presence of an output schema, the description combined with the annotations provides sufficient context for a fetch operation. It could mention the output structure, but that is unnecessary given the schema.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description does not add additional parameter-level details beyond what the schema already provides. It is adequate but not exceptional.

Purpose: 5/5

The description clearly states the tool fetches the 'asset-anchored competitive landscape' and lists specific content types (comparable trials, competing programs, sponsor activity, biomarker overlap). This effectively differentiates it from siblings like query_benchmarks or get_project.

Usage Guidelines: 4/5

The description provides explicit use cases: 'who else is working on this?' or 'what's the competitive context?'. While it lacks explicit when-not-to-use guidance, the context is clear enough for an agent to infer appropriate usage.

verify_export (A)
Read-only · Idempotent

Verify a signed PhaseFolio export. Accepts either a content hash (from a signed PDF/Excel) or a URL to a hosted artifact. Returns verification status (valid/tampered/unknown), issued timestamp, methodology version, and an anonymized originating-org identifier. Use this when a user shares a PhaseFolio dossier and you want to confirm it's authentic before citing the analysis.

Parameters (JSON Schema)
hash (optional): 64-character hex SHA-256 of the signed artifact's content. Embedded in every signed PhaseFolio PDF (footer) and Excel (cover tab).
artifact_url (optional): Public URL of the signed artifact. The engine fetches it, computes the hash, and verifies — slower but works when only the URL is known.

Output Schema

data (required)
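To illustrate the either/or inputs, a minimal sketch with placeholder values; supply one of the two per call.

```python
# Hypothetical arguments for verify_export: pass either a content hash
# (faster) or a public artifact URL (engine fetches and hashes it).
by_hash = {"hash": "ab12" * 16}  # placeholder 64-char hex SHA-256
by_url = {"artifact_url": "https://example.com/signed-dossier.pdf"}  # placeholder
```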
Behavior: 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, covering safety and idempotency. The description adds behavioral details beyond annotations: it returns 'verification status, issued timestamp, methodology version, and anonymized originating-org identifier.' This enriches the agent's understanding of output behavior. No contradiction with annotations.

Conciseness: 5/5

The description is four short sentences with no wasted words: the action, the accepted inputs, the returned fields, and usage guidance, in that order. Information is front-loaded and efficiently structured. Every sentence earns its place.

Completeness: 5/5

Given the tool's complexity (2 optional parameters, no enums, no nested objects), the description is complete. It covers purpose, inputs, output fields, and usage context. The output schema exists, so return value details are not required. No gaps for an agent to misinterpret.

Parameters: 3/5

Input schema has 100% description coverage: both 'hash' and 'artifact_url' are described. The description reiterates these options but adds nuance ('from a signed PDF/Excel' for hash, 'hosted artifact' for URL). The additional context is helpful but not essential beyond the schema. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: 'Verify a signed PhaseFolio export.' It specifies the inputs (hash or URL) and the context of use. This distinguishes it from sibling tools like get_dossier, which retrieve data rather than verify authenticity. The verb 'verify' is specific and the resource is well-defined.

Usage Guidelines: 4/5

The description explicitly states when to use it: 'Use this when a user shares a PhaseFolio dossier and you want to confirm it's authentic before citing the analysis.' This provides clear context. However, it does not offer explicit when-not guidance or alternative tools, though the sibling list implies distinct roles. This meets the 'clear context, no exclusions' level.
