Glama

store_lab_values

Store and manage cancer patient lab values from documents for trend tracking, with deduplication checks and standardized parameter handling.

Instructions

Store parsed lab values from a document for trend tracking.

Includes deduplication checks:

  • If document_id already has stored values, returns skipped (unless force=True to update/replace).

  • If another document has values for the same lab_date, warns about collision (stores anyway but flags it).
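The deduplication behavior described above can be sketched in a few lines. This is a minimal illustration, not the server's actual implementation: the SQLite table name, its columns, and the return shapes are assumptions chosen to mirror the documented behavior (skip on duplicate document_id, flag lab_date collisions, replace via INSERT OR REPLACE when forced).

```python
import sqlite3

# Hypothetical storage table; the real server's schema is not documented here.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE lab_values (
        document_id TEXT,
        lab_date    TEXT,
        parameter   TEXT,
        value       REAL,
        unit        TEXT,
        PRIMARY KEY (document_id, parameter)
    )"""
)

def store_lab_values(document_id, lab_date, values, force=False):
    # Dedup check 1: skip if this document already has stored values,
    # unless force=True requests an update/replace.
    existing = conn.execute(
        "SELECT 1 FROM lab_values WHERE document_id = ? LIMIT 1",
        (document_id,),
    ).fetchone()
    if existing and not force:
        return {"status": "skipped", "reason": "document already has values"}

    # Dedup check 2: warn (but still store) if another document
    # already has values for the same lab_date.
    collision = conn.execute(
        "SELECT 1 FROM lab_values WHERE lab_date = ? AND document_id != ? LIMIT 1",
        (lab_date, document_id),
    ).fetchone()

    # INSERT OR REPLACE overwrites rows sharing the primary key,
    # which is what makes force=True an update rather than a duplicate.
    conn.executemany(
        "INSERT OR REPLACE INTO lab_values VALUES (?, ?, ?, ?, ?)",
        [(document_id, lab_date, v["parameter"], v["value"], v["unit"])
         for v in values],
    )
    return {"status": "stored", "lab_date_collision": bool(collision)}
```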

Standardized parameter names: WBC, ABS_NEUT, ABS_LYMPH, PLT, HGB, ANC, ALT, AST, GMT, ALP, BILIRUBIN, CREATININE, eGFR, CEA, CA19_9, SII, NE_LY_RATIO

Args:

  • document_id: Source document ID (should be a labs document).

  • lab_date: Date of the lab test (YYYY-MM-DD).

  • values: JSON array of objects, each with parameter, value, unit, and optionally reference_low, reference_high, flag. Example: [{"parameter": "WBC", "value": 6.8, "unit": "10^9/L", "reference_low": 4.0, "reference_high": 10.0, "flag": ""}]

  • force: If True, store even if the document already has values (replaces existing via INSERT OR REPLACE).
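A full call payload, built from the documented example row, might look like the following. The document_id and lab_date here are made-up placeholders for illustration; only the WBC row and the field names come from the documentation above.

```python
import json

# The example values row from the docs; optional fields
# (reference_low, reference_high, flag) may be omitted.
values = [
    {"parameter": "WBC", "value": 6.8, "unit": "10^9/L",
     "reference_low": 4.0, "reference_high": 10.0, "flag": ""},
]

# Hypothetical arguments for a store_lab_values call.
call = {
    "document_id": "doc-2024-001",  # placeholder ID, for illustration only
    "lab_date": "2024-05-14",       # must be YYYY-MM-DD
    "values": json.dumps(values),   # the tool expects a JSON array
    "force": False,                 # set True to replace existing values
}
```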

Input Schema

Name          Required    Description    Default
document_id   Yes
lab_date      Yes
values        Yes
force         No

Output Schema

Name      Required    Description    Default
result    Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so well by disclosing key behavioral traits: deduplication logic (skipping if document_id exists, warning on lab_date collisions), mutation effects (stores or updates data), and the impact of the 'force' parameter. It doesn't cover rate limits or auth needs, but given the context, this is sufficient for a high score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with a clear purpose statement, usage details, and parameter explanations. It could be slightly more front-loaded by moving the standardized parameter names list to an appendix, but overall, every sentence earns its place with no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with deduplication logic), no annotations, and an output schema present (which handles return values), the description is complete. It covers purpose, usage, behavior, and parameter semantics thoroughly, leaving no gaps for the agent to understand and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose (e.g., 'document_id: Source document ID (should be a labs document)'), provides an example for 'values', and clarifies the effect of 'force'. This fully compensates for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('store') and resource ('parsed lab values from a document'), and distinguishes it from siblings by focusing on lab value storage for trend tracking, unlike tools like 'analyze_labs' or 'compare_labs' which process or compare data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: when to use the tool (storing lab values with deduplication checks) and when a call will be skipped (if document_id already has stored values, unless force=True is passed to update/replace). It also implies an alternative workflow: because duplicate submissions return skipped, an agent might check existing data first with a tool such as 'get_lab_summary'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/peter-fusek/oncofiles'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.