Server Details

Hosted MCP for denial, prior auth, reimbursement, workflow validation, batch scoring, and feedback.

Status: Healthy
Transport: Streamable HTTP
Repository: sentinelsignal/sentinel-signal-mcp
GitHub Stars: 0

Tool Descriptions: B

Average 3.8/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. For example, get_limits retrieves quota information, get_usage tracks call usage, list_workflows discovers available workflows, and score_workflow/score_batch handle different scoring methods. The descriptions clearly differentiate their functions, preventing agent misselection.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern (e.g., get_limits, list_workflows, score_batch). The verbs are appropriate and predictable (get, list, score, submit, validate), and snake_case is used uniformly throughout, making the tool set easy to navigate and understand.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a healthcare scoring API server. Each tool serves a specific and necessary function, from setup (get_limits, list_workflows) to core operations (score_workflow, score_batch) and feedback (submit_feedback), without redundancy or bloat.

Completeness: 5/5

The tool set provides complete coverage for the healthcare scoring domain. It includes discovery (list_workflows, get_workflow_schema), validation (validate_workflow_payload), scoring (score_workflow, score_batch), usage tracking (get_limits, get_usage), and feedback submission (submit_feedback), ensuring agents can handle the full lifecycle without gaps.

Available Tools

8 tools

get_limits: Get current plan limits (A)
Read-only, Idempotent

Retrieve plan limits, monthly quota, remaining trial calls, and upgrade state for the current Sentinel API key.

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With readOnlyHint=true and idempotentHint=true already declared in annotations, the description efficiently adds the specific data domain (quota, trial calls, upgrade state) that annotations cannot express. It does not disclose error behaviors or rate limits, but given strong annotations, the 'what' coverage (data fields) is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, dense sentence with zero waste. The verb 'Retrieve' is front-loaded, followed immediately by the four distinct data categories, and scoped to the API key. Every token earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a zero-parameter read operation. Given the existence of an output schema (per context signals), the description wisely focuses on conceptual scope rather than return value structure. Minor gap: does not mention authentication failure behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Baseline 4 per instructions for 0-parameter tools. The schema correctly reflects the zero-argument nature of this getter operation, and the description appropriately makes no mention of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Retrieve' is a clear verb, and it enumerates the exact resources returned (plan limits, monthly quota, trial calls, upgrade state). The scope 'for the current Sentinel API key' distinguishes it from siblings like get_usage, which implies different data (consumption history vs. capacity limits).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus the sibling get_usage tool, nor any mention of prerequisites (e.g., valid API key required). The phrase 'current Sentinel API key' implies authentication context but does not constitute usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_usage: Get monthly usage (A)
Read-only, Idempotent

Retrieve monthly scoring-call usage for the current Sentinel API key, optionally for a specific YYYY-MM month.

Parameters (JSON Schema)

month (optional): Optional billing month in YYYY-MM format. Defaults to the current month when omitted.

Output Schema

No output parameters
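As a concrete illustration of the calling convention, here is a sketch of the argument objects an agent might pass; the specific month value is made up:

```python
import re

# Hypothetical argument dicts for a get_usage tool call. Omitting
# "month" defaults to the current billing month; a specific month
# uses YYYY-MM format (the value below is illustrative).
usage_default = {}
usage_specific = {"month": "2025-03"}

# The YYYY-MM constraint from the parameter description.
assert re.fullmatch(r"\d{4}-\d{2}", usage_specific["month"]) is not None
```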

Behavior: 3/5

Annotations declare readOnlyHint=true, idempotentHint=true, and openWorldHint=false. Description adds valuable scope context ('current Sentinel API key') and specifies 'scoring-call' domain. Does not disclose rate limits, data retention, or temporal constraints (e.g., future months) beyond the structured annotations.

Conciseness: 5/5

Single, well-structured sentence front-loaded with the action verb. Information flows from general (what to retrieve) to specific (optional month filter). No redundant or wasted words.

Completeness: 4/5

Appropriately complete for a simple read operation with robust annotations and explicit output schema. Description covers resource, scope, and parameter without needing to detail return structure. Minor gap in explicit sibling differentiation prevents a 5.

Parameters: 3/5

Input schema has 100% description coverage for the single 'month' parameter (including format and default behavior). Description mentions 'YYYY-MM' format and 'optionally', which largely mirrors schema content. Adds minimal semantic value beyond the schema's 'Optional billing month' description, meeting the baseline for high-coverage schemas.

Purpose: 4/5

States specific verb 'Retrieve' and resource 'monthly scoring-call usage', scoped to 'current Sentinel API key'. Clearly distinguishes from execution tools (score_batch, score_workflow) and likely from get_limits (usage vs limits), though explicit differentiation from get_limits is absent.

Usage Guidelines: 3/5

Provides implied usage through 'optionally for a specific YYYY-MM month', indicating the parameter is optional. However, lacks explicit when-to-use guidance, when-not-to-use constraints, or comparison to the get_limits alternative.

get_workflow_schema: Fetch workflow schema (A)
Read-only, Idempotent

Fetch required fields, optional fields, enums, and an example payload for a Sentinel workflow.

Parameters (JSON Schema)

workflow (required): Workflow identifier such as healthcare.denial, healthcare.prior_auth, or healthcare.reimbursement.

Output Schema

No output parameters
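A sketch of the arguments for this call, using the three workflow identifiers the parameter description names as examples (the tool may support others):

```python
# The workflow identifiers named as examples in the parameter
# description above; other identifiers may exist.
KNOWN_WORKFLOWS = [
    "healthcare.denial",
    "healthcare.prior_auth",
    "healthcare.reimbursement",
]

# Hypothetical arguments for a get_workflow_schema tool call.
schema_args = {"workflow": "healthcare.prior_auth"}
assert schema_args["workflow"] in KNOWN_WORKFLOWS
```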

Behavior: 4/5

Annotations declare readOnlyHint=true and idempotentHint=true, covering safety. The description adds valuable behavioral context by detailing exactly what the returned schema contains (field classifications, enums, example payload) beyond what annotations provide. No contradictions with structured data.

Conciseness: 5/5

Single sentence with zero waste. Front-loaded with the action verb, followed by specific payload components. Every clause earns its place by conveying distinct return value categories.

Completeness: 4/5

Given the presence of output schema, annotations covering safety/read-only behavior, and comprehensive input schema, the description appropriately focuses on semantic value (what gets returned) rather than redundant structural documentation. Complete for a simple metadata retrieval tool.

Parameters: 3/5

With 100% schema description coverage for the single 'workflow' parameter, the schema itself documents the identifier format and examples. The description mentions 'Sentinel workflow', adding domain context, but largely relies on the schema for parameter semantics, meeting the baseline for high-coverage schemas.

Purpose: 5/5

The description uses a specific verb (Fetch) and clearly identifies the resource (workflow schema) and components returned (required fields, optional fields, enums, example payload). It effectively distinguishes from siblings like `list_workflows` (listing vs. schema retrieval) and `validate_workflow_payload` (validation vs. schema description).

Usage Guidelines: 3/5

While the description clearly states what the tool returns, implying use when schema introspection is needed, it lacks explicit guidance on when to use this vs. `validate_workflow_payload` or prerequisites for calling. The usage context is implied but not stated explicitly (e.g., 'use before submitting to understand field requirements').

list_workflows: List supported workflows (A)
Read-only, Idempotent

Discover the supported Sentinel workflows and current model versions before scoring.

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters

Behavior: 3/5

Annotations establish read-only/idempotent safety. Description adds value by clarifying that the returned data includes both workflows AND model versions (beyond the name), but omits pagination, caching, or scope limitations.

Conciseness: 5/5

Single sentence, front-loaded with action verb. Zero redundancy: 'Discover' (action), 'workflows/versions' (target), 'before scoring' (context).

Completeness: 4/5

Sufficient for a simple discovery tool with output schema present and zero parameters. Describes discovery purpose and usage timing without needing to replicate return value documentation.

Parameters: 4/5

Zero-parameter tool receives baseline 4. No properties require semantic elaboration beyond the empty schema.

Purpose: 5/5

Uses specific verb 'Discover' with clear resources ('supported Sentinel workflows' and 'current model versions'). The temporal phrase 'before scoring' effectively differentiates from sibling scoring tools (score_workflow, score_batch).

Usage Guidelines: 4/5

Provides clear sequencing guidance ('before scoring'), establishing a prerequisite relationship to scoring operations. Lacks explicit naming of alternatives or 'when not to use' exclusions, but strongly implies the intended workflow.

score_batch: Score a workflow batch (A)
Read-only, Idempotent

Score up to 25 workflow items sequentially in one request for healthcare workflow automation.

Parameters (JSON Schema)

items (required): List of scoring items, each containing workflow, payload, and optional options.
continue_on_error (optional): When true, later items keep running even if an earlier item fails validation or scoring.

Output Schema

No output parameters
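A sketch of a batch request based on the schema above; the payload field names inside each item are illustrative stand-ins, not documented fields:

```python
# Hypothetical score_batch arguments. Each item carries a workflow id,
# a payload, and optional per-item options; payload fields here are
# illustrative only.
batch_args = {
    "items": [
        {"workflow": "healthcare.denial",
         "payload": {"claim_id": "CLM-001"}},
        {"workflow": "healthcare.prior_auth",
         "payload": {"claim_id": "CLM-002"},
         "options": {"explanation": True}},
    ],
    "continue_on_error": True,  # later items keep running on failure
}

# The tool description caps a single request at 25 items.
assert len(batch_args["items"]) <= 25
```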

Behavior: 4/5

Annotations declare a readOnly/idempotent safety profile. Description adds valuable behavioral constraints not in annotations: the 25-item limit, sequential (not parallel) processing, and domain context. No contradiction with annotations.

Conciseness: 5/5

Single short sentence with zero waste. Front-loaded with action, constraint (25), resource, and mode. Every element earns its place.

Completeness: 4/5

Given 100% schema coverage, good annotations, and presence of output schema, the description efficiently covers key constraints (batch size, sequential processing) and domain. Minor gap in explicit sibling differentiation prevents a 5.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. Description adds the critical constraint 'up to 25' (array size limit) and processing mode 'sequentially' not present in parameter descriptions, exceeding the baseline significantly.

Purpose: 4/5

Clear specific verb ('Score'), resource ('workflow items'), and domain ('healthcare workflow automation'). Mention of 'up to 25' and 'in one request' strongly implies batch processing, distinguishing from the likely single-item sibling 'score_workflow', though explicit contrast is missing.

Usage Guidelines: 3/5

Provides implicit usage guidance via the capacity constraint ('up to 25') and processing mode ('sequentially'), but lacks explicit when-to-use recommendations or mention of the alternative 'score_workflow' for single items.

score_workflow: Score a Sentinel workflow (C)
Read-only, Idempotent

Run a structured Sentinel scoring request for a supported healthcare workflow.

Parameters (JSON Schema)

options (optional): Optional scoring flags such as explanation, thresholds, or runtime options supported by /v1/score.
payload (required): Structured workflow payload matching the Sentinel /v1/score schema for the selected workflow.
workflow (required): Workflow identifier to score. Common values include healthcare.denial, healthcare.prior_auth, and healthcare.reimbursement.

Output Schema

No output parameters
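A sketch of a single-item scoring call; the payload field and option flag below are illustrative, since the real /v1/score schema is only referenced, not reproduced, on this page:

```python
# Hypothetical score_workflow arguments. The payload must match the
# Sentinel /v1/score schema for the chosen workflow; the field and
# option names below are assumptions for illustration.
score_args = {
    "workflow": "healthcare.denial",
    "payload": {"claim_id": "CLM-001"},  # illustrative field
    "options": {"explanation": True},    # illustrative flag
}

# Only workflow and payload are required; options may be omitted.
assert {"workflow", "payload"} <= score_args.keys()
```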

Behavior: 3/5

Annotations declare readOnlyHint=true, idempotentHint=true, and openWorldHint=false, establishing the operation as safe and repeatable. The description adds context that this is a 'Sentinel' system request and 'structured', but does not elaborate on side effects, computational cost, or what 'scoring' entails beyond the annotations.

Conciseness: 4/5

Single sentence, front-loaded with the core action. No redundant phrases. However, extreme brevity leaves gaps in contextual guidance that another sentence could have addressed (e.g., relationship to validation or batch tools).

Completeness: 3/5

Given the rich schema (100% coverage), presence of output schema, and comprehensive annotations, the description adequately covers the minimal requirements. However, for a complex healthcare workflow tool with validation and batch siblings, it lacks important contextual guidance on workflow selection and prerequisites.

Parameters: 3/5

With 100% schema description coverage, the schema fully documents parameters, including examples (e.g., 'healthcare.denial') and references to the '/v1/score schema'. The description provides no additional parameter semantics beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Purpose: 3/5

The description states a clear verb ('Run') and resource ('Sentinel scoring request') with domain context ('healthcare workflow'). However, it fails to distinguish from the sibling 'score_batch': it does not indicate whether this handles single vs. multiple workflows or when to prefer one over the other.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus siblings (particularly 'score_batch' for bulk operations). No mention of prerequisites such as validating the payload with 'validate_workflow_payload' first, or usage constraints.

submit_feedback: Submit claims outcome feedback (A)

Submit structured outcome feedback for a previous scoring event so Sentinel can track real-world claims results.

Parameters (JSON Schema)

feedback (required): Structured feedback payload for /v1/feedback, including event identifiers and outcome details.

Output Schema

No output parameters
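A sketch of the feedback payload shape. The schema only says it carries event identifiers and outcome details, so the exact field names below are assumptions:

```python
# Hypothetical submit_feedback arguments; "event_id" and "outcome"
# are assumed field names, not documented ones.
feedback_args = {
    "feedback": {
        "event_id": "evt_abc123",  # identifier of a prior scoring event
        "outcome": "approved",     # real-world claims result
    }
}
assert "feedback" in feedback_args
```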

Behavior: 3/5

Annotations indicate non-readonly (write operation) and non-destructive, which the description supports with the 'Submit' verb. The description adds business context ('so Sentinel can track real-world claims results'), but omits technical behavioral details like whether this creates a feedback record, updates the original event, or validation rules for the event_id reference.

Conciseness: 5/5

Single 16-word sentence with zero redundancy. Front-loaded action verb ('Submit'), followed by object, context ('for a previous scoring event'), and purpose ('so Sentinel can track...'). Every clause earns its place.

Completeness: 4/5

Appropriate for the complexity: one nested parameter with complete schema coverage and an existing output schema. The description successfully explains the business purpose (tracking real-world results vs. predicted scores). Could mention that feedback typically includes outcomes like 'approved' or 'denied' (hinted in the example), but this is not required given schema completeness.

Parameters: 3/5

Schema coverage is 100% with a detailed description and example. The description adds contextual meaning that the feedback relates to 'previous scoring events', helping the agent understand the feedback parameter's purpose, but does not elaborate on specific fields (event_id, outcome) beyond the schema's example.

Purpose: 5/5

Excellent specificity: verb 'Submit' + resource 'outcome feedback' + scope 'for a previous scoring event'. The phrase 'previous scoring event' clearly distinguishes it from the sibling 'score_workflow' and 'score_batch' tools (which initiate scoring), establishing this as a post-hoc feedback mechanism.

Usage Guidelines: 3/5

Provides implicit timing context via 'previous scoring event', indicating it follows scoring operations. However, it lacks explicit when-to-use guidance (e.g., 'Call after scores are finalized') and does not name specific sibling tools as alternatives or prerequisites.

validate_workflow_payload: Validate workflow payload (A)
Read-only, Idempotent

Validate and normalize a workflow payload without consuming a scoring call.

Parameters (JSON Schema)

payload (required): Workflow payload to validate before sending to /v1/score or /v1/score/batch.
workflow (required): Workflow identifier such as healthcare.denial, healthcare.prior_auth, or healthcare.reimbursement.

Output Schema

No output parameters
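The quota-free nature of validation suggests a preflight pattern: validate first, then score only if the payload passes. A minimal sketch, assuming a generic call_tool client function and a 'valid' field in the validation response (both hypothetical):

```python
# Preflight pattern: validate_workflow_payload consumes no scoring
# call, so run it before score_workflow. call_tool stands in for an
# MCP client invocation; the "valid" response field is an assumption.
def preflight_then_score(call_tool, workflow, payload):
    check = call_tool("validate_workflow_payload",
                      {"workflow": workflow, "payload": payload})
    if not check.get("valid", False):
        return check  # surface validation errors without spending quota
    return call_tool("score_workflow",
                     {"workflow": workflow, "payload": payload})
```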

Behavior: 4/5

While annotations declare readOnly/idempotent, the description adds crucial behavioral context: the normalization behavior and the business-critical fact that this doesn't count against scoring quotas. This cost/quota information is not in the annotations.

Conciseness: 5/5

Single short sentence with zero waste. Front-loads the action (Validate/normalize), specifies the target (workflow payload), and ends with the key constraint (without consuming a call).

Completeness: 4/5

Complete given the rich schema and annotations. Covers validation, normalization, and cost behavior. Has an output schema, so return values needn't be described. Could marginally improve by noting it's a preflight check, but adequate.

Parameters: 3/5

With 100% schema description coverage, the schema fully documents both parameters (workflow identifier and payload). The description appropriately doesn't repeat parameter details, meeting the baseline for high-coverage schemas.

Purpose: 5/5

Specific verbs 'Validate and normalize' with resource 'workflow payload'. The phrase 'without consuming a scoring call' clearly distinguishes from siblings score_workflow and score_batch by highlighting the zero-cost nature.

Usage Guidelines: 4/5

Provides clear context that this is a pre-validation/dry-run tool ('without consuming a scoring call'), implying when to use it. However, it doesn't explicitly name score_workflow/score_batch as the alternatives or state 'use this before' for explicit workflow guidance.
