
Nefesh — Real-Time Human State Awareness for AI

Server Details

Fuses biometric signals into a stress score (0-100) for AI adaptation. MCP + A2A native.

Status: Healthy
Transport: Streamable HTTP
Repository: nefesh-ai/nefesh-mcp-server
GitHub Stars: 0
Server Listing: nefesh-mcp-server

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: check_api_key_status handles API key verification polling, get_human_state retrieves current state, get_session_history provides historical data, get_trigger_memory accesses psychological profiles, ingest sends biometric data, and request_api_key initiates key requests. The descriptions reinforce these distinct roles, making tool selection unambiguous.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern (e.g., check_api_key_status, get_human_state, ingest, request_api_key). The verbs are clear and appropriate (check, get, ingest, request), and the snake_case style is uniformly applied throughout, making the tool set predictable and easy to navigate.

Tool Count: 5/5

With 6 tools, this server is well-scoped for real-time human state awareness, covering core workflows like data ingestion, state retrieval, history analysis, trigger profiling, and API key management. Each tool earns its place without redundancy, and the count is appropriate for the domain's complexity, neither too thin nor bloated.

Completeness: 4/5

The tool set provides comprehensive coverage for the domain, including data ingestion, real-time state access, historical analysis, and psychological profiling, with no dead ends. A minor gap exists in lacking explicit update or delete operations for sessions or triggers, but agents can work around this by managing data through ingest and retrieval, and the API key flow is fully covered.

Available Tools

6 tools
check_api_key_status: A

Check the status of a pending API key request.

Use the exact same email the user provided to request_api_key.
Poll this every 10 seconds. Once the user clicks the verification
link in their inbox, status changes from 'pending' to 'ready'
and the response includes the API key. Store it immediately —
returned once only. Expires after 15 minutes.
Parameters (JSON Schema):
- request_id (required)
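The polling protocol described above can be sketched as a small loop. This is a non-authoritative sketch: `call_tool` is a hypothetical stand-in for whatever MCP client invocation your agent framework provides, and the `api_key` response field name is an assumption, since no output schema is published.

```python
import time


def wait_for_api_key(call_tool, request_id, interval=10, timeout=15 * 60):
    """Poll check_api_key_status until the key is ready or the request expires.

    call_tool is a hypothetical MCP invocation helper: call_tool(name, args) -> dict.
    """
    deadline = time.monotonic() + timeout  # requests expire after 15 minutes
    while time.monotonic() < deadline:
        result = call_tool("check_api_key_status", {"request_id": request_id})
        if result.get("status") == "ready":
            # The key is returned once only -- the caller must store it immediately.
            return result["api_key"]  # assumed field name
        time.sleep(interval)  # description says to poll every 10 seconds
    raise TimeoutError("API key request expired before verification")
```

With `interval=0` and a stubbed `call_tool`, the loop can be exercised without a live server.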
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden effectively. It documents return values ('pending' vs 'ready'), critical destructive/irreversible behavior ('returned once only'), time constraints (15-minute expiry), and storage requirements ('store it immediately').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose and prerequisites. Every sentence delivers critical operational information (return values, polling interval, expiration, storage warning). No redundant text despite covering complex stateful workflow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet description fully documents return values, status codes, and response handling. Covers complete lifecycle from initial poll through expiration, sufficient for an autonomous agent to implement correct polling logic.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (the parameter has only a title, no description). While the workflow context ('after calling request_api_key') implicitly suggests the request_id parameter's purpose, the description does not explicitly define what request_id is or that it must come from the sibling tool. It compensates partially but leaves a gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Poll' with the resource 'API key activation' and explicitly links to the sibling tool 'request_api_key' to establish clear scope. It distinguishes this checking/polling function from the initial request operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('after calling request_api_key'), prerequisites ('No existing API key needed'), and operational protocol ('Poll every 10 seconds. Expires after 15 minutes'). Provides complete workflow guidance including handling of terminal states.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_human_state: A

Get current unified human state for a session. Call this before generating important responses.

Returns:
- state: calm | relaxed | focused | stressed | acute_stress
- stress_score: 0-100 (lower = calmer)
- confidence: 0.0-1.0 (based on signal quality and device type)
- suggested_action: maintain_engagement | simplify_and_focus | de-escalate_and_shorten | pause_and_ground
- action_reason: human-readable explanation of why this action was suggested
- adaptation_effectiveness (on 2nd+ call): shows whether your previous suggested_action actually reduced stress — contains previous_action, stress_delta, and effective boolean. Use this to self-improve.

Use suggested_action to adapt your response: calm/relaxed = full complexity, focused = shorter and structured, stressed = max 2 sentences, acute_stress = one grounding sentence only.

Requires a prior ingest call to have data. Not a medical device.
Parameters (JSON Schema):
- session_id (required)
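The state-to-style mapping in the description can be sketched as a simple lookup. As before, `call_tool` is a hypothetical MCP invocation helper, not part of the documented API.

```python
def adapt_response_style(call_tool, session_id):
    """Map the suggested_action from get_human_state to a response strategy.

    call_tool is a hypothetical MCP invocation helper: call_tool(name, args) -> dict.
    Requires that ingest has already been called for this session.
    """
    state = call_tool("get_human_state", {"session_id": session_id})
    # Strategies mirror the guidance in the tool description.
    strategies = {
        "maintain_engagement": "full complexity",
        "simplify_and_focus": "shorter and structured",
        "de-escalate_and_shorten": "max 2 sentences",
        "pause_and_ground": "one grounding sentence only",
    }
    return strategies.get(state["suggested_action"], "full complexity")
```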
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure comprehensively. It documents the complete return structure including enums and ranges (state types, 0-100 scores), explains the confidence scoring methodology ('based on signal quality'), discloses the feedback loop mechanism ('adaptation_effectiveness on 2nd+ call'), and includes critical safety limitations ('Not a medical device').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear information hierarchies: purpose declaration, detailed return value documentation, usage guidelines, and prerequisites. Every sentence provides actionable information, from specific stress state mappings to response strategies, with no redundant or filler content despite the comprehensive length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of both annotations and an output schema, the description achieves comprehensive coverage by documenting all six return fields with their types, ranges, and semantic meanings. It also addresses the tool's stateful nature, data dependencies, and self-improvement mechanisms, providing sufficient context for correct invocation and interpretation without external schema support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 0% description coverage for the single 'session_id' parameter, the description compensates partially by referencing 'for a session' in the purpose statement, implying the parameter's function. However, it does not explicitly name or describe the parameter's format or constraints, leaving some semantic gaps that require inference rather than explicit documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Get' and resource 'unified human state' scoped 'for a session,' clearly identifying the operation. It distinguishes itself from sibling tools like 'ingest' by explicitly noting the prerequisite 'Requires a prior ingest call to have data,' clarifying this tool retrieves processed state rather than creating it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit temporal guidance ('Call this before generating important responses') and prerequisites ('Requires a prior ingest call to have data'). The description details exactly how to interpret and act on each output state, including specific response length constraints for different stress levels ('stressed = max 2 sentences'), leaving no ambiguity about when and how to use the tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_session_history: A

Get state history for a session over time.

Returns timestamped datapoints with stress_score, state, and heart_rate for each observation.
Includes an overall trend: rising | falling | stable.

Use minutes parameter to control the lookback window (default: 5, max: 60).
Useful for detecting stress patterns during a conversation. Not a medical device.
Parameters (JSON Schema):
- minutes (optional, default: 5)
- session_id (required)
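A minimal sketch of consuming the trend field, assuming a hypothetical `call_tool` helper and clamping `minutes` to the documented maximum of 60:

```python
def stress_is_rising(call_tool, session_id, minutes=5):
    """Check whether stress is trending upward over the lookback window.

    call_tool is a hypothetical MCP invocation helper: call_tool(name, args) -> dict.
    minutes defaults to 5 and is clamped to the documented maximum of 60.
    """
    history = call_tool(
        "get_session_history",
        {"session_id": session_id, "minutes": min(minutes, 60)},
    )
    # trend is one of: rising | falling | stable
    return history["trend"] == "rising"
```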
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden and succeeds in explaining the return structure (timestamped datapoints with stress_score, state, heart_rate), computed aggregations (trend enum), and safety constraints (non-medical). It also discloses the max constraint (60) not present in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences with zero waste: purpose, return structure, trend logic, parameter guidance, and use case/disclaimer. Information is front-loaded with the core action stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and 0% parameter description coverage, the description comprehensively covers what is returned (datapoints structure, trend), input constraints, and legal disclaimers. It is complete for a 2-parameter read-only tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It fully explains the 'minutes' parameter semantics ('lookback window') and constraints (default: 5, max: 60), adding critical context absent from the schema. The 'session_id' parameter is self-evident and requires minimal explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Get state history for a session over time,' clearly identifying the verb (get) and resource (session state history). It distinguishes itself from sibling 'get_human_state' (likely current state) by emphasizing 'over time' and 'timestamped datapoints,' establishing the temporal/historical scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides positive guidance ('Useful for detecting stress patterns during a conversation') and negative constraints ('Not a medical device'). While it doesn't explicitly name sibling alternatives, the functional scope (historical trends vs. current state) is distinct enough to guide selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trigger_memory: A

Retrieve psychological trigger profile for a subject.

Returns which conversation topics consistently cause stress (active triggers) and which have been resolved over time.

- active triggers: topics where stress was elevated across multiple sessions. Tread carefully.
- resolved triggers: topics where stress has decreased. Safe to explore deeper.

Each trigger includes observation_count, avg_score, peak_score, and last_seen.

Requires prior ingest calls with the same subject_id. Not a medical device.
Parameters (JSON Schema):
- subject_id (required)
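The active/resolved split can be consumed as a sketch like the following. Since no output schema is published, the `active_triggers`, `resolved_triggers`, and `topic` field names are assumptions, as is the `call_tool` helper.

```python
def safe_and_risky_topics(call_tool, subject_id):
    """Split trigger memory into topics to avoid vs. topics safe to explore.

    call_tool is a hypothetical MCP invocation helper: call_tool(name, args) -> dict.
    Field names (active_triggers, resolved_triggers, topic) are assumed, not documented.
    """
    profile = call_tool("get_trigger_memory", {"subject_id": subject_id})
    # Active triggers: stress elevated across multiple sessions -- tread carefully.
    avoid = [t["topic"] for t in profile["active_triggers"]]
    # Resolved triggers: stress has decreased -- safe to explore deeper.
    explore = [t["topic"] for t in profile["resolved_triggers"]]
    return avoid, explore
```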
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and successfully discloses data dependencies (requires prior ingest), output structure details (observation_count, avg_score, peak_score, last_seen), and domain limitations ('Not a medical device'). Missing explicit error conditions or idempotency guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with purpose front-loaded, followed by return value semantics, bullet-pointed behavioral interpretations, prerequisites, and disclaimer. No redundant or wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero annotations and no output schema, the description adequately compensates by detailing return structure and behavioral interpretation. The medical disclaimer addresses domain complexity. Minor gap in explicit error handling documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (only 'title': 'Subject Id' exists). The description mentions subject_id in the prerequisite clause ('same subject_id'), implying it is a persistent identifier, but does not fully compensate by describing format, constraints, or validation rules for the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb-resource pair ('Retrieve psychological trigger profile') and clearly distinguishes from sibling tool 'ingest' by stating it requires prior ingest calls, positioning it as the read counterpart to a write operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite ('Requires prior ingest calls with the same subject_id') and provides interpretive guidance ('Tread carefully' vs 'Safe to explore deeper'). Lacks explicit naming of alternatives or absolute contraindications, though the medical disclaimer implies usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ingest: A

Send biometric signals from any sensor, get unified state back.

Required: session_id + timestamp (ISO 8601) + at least one signal.
Send whatever you have — the API fuses all signals into one state.

Common signals (highest impact):
- heart_rate (bpm, 30-220) + rmssd (ms) — cardiovascular
- tone: calm | tense | anxious | hostile — vocal
- sentiment: -1.0 to 1.0 — textual
- expression: relaxed | neutral | tense — visual

For trigger memory (cross-session psychological tracking):
- Include subject_id (consistent per user, hashed)

Returns same fields as get_human_state plus signals_received list and topics_detected (if conversation text was included).

source_device is optional but improves confidence scoring. Not a medical device.
Parameters (JSON Schema):
- Required: session_id, timestamp
- Optional: eda, gaze, sdnn, spo2, tone, pnn50, rmssd, posture, urgency, mean_ibi, ibi_count, sentiment, confidence, engagement, expression, heart_rate, subject_id, sleep_stage, speech_rate, stress_score, glucose_mg_dl, glucose_trend, source_device, activity_level, cognitive_load, eeg_beta_power, glucose_mmol_l, eeg_alpha_power, eeg_theta_power, respiratory_rate, skin_temperature, pitch_variability, steps_last_minute
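Assembling an ingest payload from whichever signals happen to be available might look like the sketch below. Only a few of the 35 documented parameters are shown; the flat payload shape is an assumption based on the description, and the value constraints mirror those the description states.

```python
from datetime import datetime, timezone


def build_ingest_payload(session_id, heart_rate=None, tone=None,
                         sentiment=None, subject_id=None, source_device=None):
    """Assemble an ingest call from whatever signals are available.

    Only session_id, an ISO 8601 timestamp, and at least one signal are required;
    the API fuses whatever arrives. The flat payload shape is an assumption.
    """
    payload = {
        "session_id": session_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601
    }
    if heart_rate is not None:
        if not 30 <= heart_rate <= 220:
            raise ValueError("heart_rate must be 30-220 bpm")
        payload["heart_rate"] = heart_rate
    if tone is not None:
        payload["tone"] = tone  # calm | tense | anxious | hostile
    if sentiment is not None:
        payload["sentiment"] = sentiment  # -1.0 to 1.0
    if subject_id is not None:
        payload["subject_id"] = subject_id  # hashed, enables trigger memory
    if source_device is not None:
        payload["source_device"] = source_device  # improves confidence scoring
    return payload
```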
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and discloses fusion behavior ('API fuses all signals'), return value composition (get_human_state fields plus extras), confidence scoring mechanics (source_device improves confidence), and critical safety context ('Not a medical device'). Missing only operational details like rate limits or error behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with front-loaded requirements, followed by flexible usage guidance, bulleted high-impact signals, special feature explanation, and safety disclaimer. Every sentence delivers unique value; no redundancy with schema titles.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex 35-parameter ingestion tool despite lacking output schema. Explains domain-specific concepts (biometric fusion, trigger memory, confidence scoring) and cross-tool relationships. Only gap is absence of error handling or validation behavior documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. Provides units and ranges for key parameters (heart_rate 30-220 bpm, rmssd in ms, sentiment -1.0 to 1.0), enum values not present in schema (tone: calm|tense|anxious|hostile), and semantic purpose for subject_id (hashed cross-session tracking) and source_device (confidence scoring).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (send biometric signals) and outcome (get unified state) with clear resource identification. Distinguishes from sibling get_human_state by noting it returns the same fields plus additional data, establishing the write-vs-read relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists required parameters (session_id, timestamp, at least one signal) and identifies when to use optional subject_id (for cross-session trigger memory). References sibling get_human_state to explain return values, though could more explicitly state when to use the sibling instead (retrieval without ingestion).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_api_key: S

Request a free Nefesh API key. No existing API key needed for this call.

IMPORTANT: You MUST ask the user for their real email address before
calling this tool. Do NOT invent, guess, or generate an email address.
The user will receive a verification link they must click to activate
the key. Without clicking that link, no API key will be issued.
Disposable or temporary email services are blocked.

Example prompt to the user: "What is your email address? You will
receive a verification link to activate your free API key."

Flow: call this with the user's real email, then poll
check_api_key_status every 10 seconds until status is 'ready'.

Free tier: 1,000 calls/month, all signal types, 10 req/min. No credit card.
Parameters (JSON Schema):
- email (required)
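The documented flow (request, then poll every 10 seconds until 'ready', with a 15-minute expiry) can be sketched end to end. As throughout, `call_tool` is a hypothetical MCP invocation helper, and the `request_id` and `api_key` response field names are assumptions.

```python
import time


def provision_api_key(call_tool, email, interval=10, timeout=15 * 60):
    """Request a free key for a user-provided email, then poll until it is ready.

    The email must come from the user -- never invent, guess, or generate one.
    call_tool is a hypothetical MCP invocation helper: call_tool(name, args) -> dict.
    """
    resp = call_tool("request_api_key", {"email": email})
    request_id = resp["request_id"]  # assumed field name
    deadline = time.monotonic() + timeout  # requests expire after 15 minutes
    while time.monotonic() < deadline:
        status = call_tool("check_api_key_status", {"request_id": request_id})
        if status.get("status") == "ready":
            return status["api_key"]  # returned once only -- store immediately
        time.sleep(interval)  # poll every 10 seconds per the description
    raise TimeoutError("verification link was not clicked before expiry")
```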
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden successfully: it reveals the asynchronous nature (email link + polling), specific polling interval (10 seconds), rate limits (10 req/min), quota (1,000 calls/month), and that no credit card is required. It also discloses the return value (request_id) despite no output schema existing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: purpose/prerequisites, flow diagram using arrows, and tier limits. Information is front-loaded and structured logically for immediate comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Exceptionally complete for an async provisioning tool with no output schema: covers the full lifecycle from request through polling completion, including rate limiting and quota constraints that would typically appear in annotations or external documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (the email parameter is undocumented). While the flow description implies the email receives the verification link, the description does not explicitly document the parameter's semantics, format requirements, or purpose. Partial compensation through workflow context keeps the parameter minimally usable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Request a free Nefesh API key' (verb + resource) and distinguishes itself from sibling tool check_api_key_status by establishing this as the entry point in the workflow versus the polling mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite condition 'No existing API key needed for this call' and provides a clear step-by-step flow explicitly naming the sibling tool check_api_key_status as the required next step for polling, eliminating ambiguity about when to use each tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

