
Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 4/5

    Tools occupy distinct functional roles: ingest is the sole write path for biometric data, while get_human_state (current), get_session_history (temporal), and get_trigger_memory (cross-session psychological) provide clearly differentiated read paths. Minor overlap exists since ingest returns the same state fields as get_human_state, but input requirements prevent misselection.

    Naming Consistency: 3/5

    Three tools follow a consistent get_<resource> pattern (get_human_state, get_session_history, get_trigger_memory), but ingest breaks convention by using a bare verb without a noun object and lacking the get_ prefix. This creates an asymmetry between read operations (get_*) and the single write operation.

    Tool Count: 4/5

    Four tools is well-suited for this narrow biometric monitoring domain, covering data ingestion, current state retrieval, historical trend analysis, and persistent trigger profiling without unnecessary bloat. The surface is minimal but sufficient for the stated conversational adaptation purpose.

    Completeness: 3/5

    The core monitoring loop is functional (ingest → read states), but notable gaps exist in lifecycle management: no tool to manually mark triggers as resolved/unresolved (despite the API tracking this distinction), delete session data, or explicitly initialize sessions. Agents can observe psychological states but cannot administratively manage the underlying profiles.

  • Average 4.4/5 across 4 of 4 tools scored. Lowest: 3.9/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v4.0.1

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 4 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • get_session_history

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Strong disclosure given zero annotations: describes exact return structure (timestamped datapoints with stress_score/state/heart_rate), trend values (rising/falling/stable), constraints (default: 5, max: 60), and critical disclaimer ('Not a medical device'). Carries full burden effectively.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structure: purpose → return values → parameter usage → use case → disclaimer. Every sentence delivers distinct information without redundancy. Appropriate length for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no output schema and no annotations, the description adequately compensates by detailing return structure, data fields, and behavioral constraints. Missing only minor details like rate limits or pagination, which are optional for this complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description compensates well by explaining 'minutes' semantics (lookback window, default 5, max 60). 'session_id' is not explicitly described but is intuitively linked to 'session' in the first sentence and is self-documenting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb-resource combination ('Get state history') with temporal scope ('over time'). However, it doesn't explicitly differentiate from sibling 'get_human_state' (likely current state vs. historical), which could cause agent confusion when selecting between them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear use case ('detecting stress patterns during a conversation') and parameter guidance for the lookback window. Lacks explicit when-not-to-use guidance or comparison to siblings like 'get_human_state' or 'get_trigger_memory'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
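The main parameter guidance for get_session_history quoted above is the lookback window in minutes (default: 5, max: 60). A minimal client-side sketch of that constraint — the function name and the clamp-instead-of-reject behavior are assumptions, not part of the server's actual API:

```python
# Hypothetical helper for get_session_history's documented 'minutes'
# parameter (default: 5, max: 60). Clamping is an assumption; the real
# server may instead reject out-of-range values.
def normalize_lookback(minutes=None, default=5, maximum=60):
    """Return a lookback window within the documented bounds."""
    if minutes is None:
        return default
    return max(1, min(minutes, maximum))
```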

  • get_trigger_memory

    Behavior: 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden and excels: it explains the psychological model (active vs. resolved triggers), discloses the data structure returned (observation_count, avg_score, peak_score, last_seen), notes the dependency chain, and includes a medical disclaimer ('Not a medical device').

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structure: opening purpose statement, bulleted definitions of key concepts, data field enumeration, prerequisites, and disclaimer. Every sentence provides unique value. No redundancy or fluff.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive despite no output schema and no annotations. Explains the return structure in detail (trigger types and their metrics), covers domain-specific safety considerations, and documents operational dependencies. Complete for this complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% (subject_id lacks description), so the description must compensate. It mentions subject_id in the prerequisite context ('Requires prior ingest calls with the same subject_id'), implying it must match previous calls, but does not explicitly define what subject_id represents or its format. Adequate but minimal compensation for the schema gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Retrieve', 'Returns') and clearly identifies the resource as a 'psychological trigger profile'. It distinguishes from sibling 'ingest' by being retrieval-focused rather than input-focused, and differentiates from 'get_human_state' by focusing on historical trigger patterns rather than current state.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states the prerequisite 'Requires prior ingest calls with the same subject_id', establishing when the tool is usable. Provides guidance on interpreting results ('Tread carefully' vs 'Safe to explore deeper'). Could be improved by explicitly contrasting with 'get_session_history' for when to use each.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
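The get_trigger_memory review above enumerates the per-trigger fields (observation_count, avg_score, peak_score, last_seen) and the active/resolved distinction, but the tool publishes no output schema. An illustrative shape assembled from those fields — the record layout, the topic key, and the risk heuristic are assumptions:

```python
from dataclasses import dataclass

# Illustrative shape of one trigger entry, assembled from the fields the
# description enumerates. The exact wire format is an assumption -- the
# tool publishes no output schema.
@dataclass
class TriggerEntry:
    topic: str              # assumed key; not named in the description
    observation_count: int
    avg_score: float
    peak_score: float
    last_seen: str          # assumed ISO 8601, matching ingest timestamps
    resolved: bool

def is_high_risk(entry: TriggerEntry, threshold: float = 0.8) -> bool:
    """Hypothetical heuristic: unresolved triggers with a high peak score."""
    return not entry.resolved and entry.peak_score >= threshold
```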

  • ingest

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses fusion behavior ('API fuses all signals into one state'), return value composition (mirrors get_human_state plus extras), confidence scoring dependency on source_device, and critical safety disclaimer ('Not a medical device'). Minor gap on rate limits or error handling.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellently structured with clear hierarchy: core purpose → requirements → signal taxonomy (bullet points with units/enums) → trigger memory context → return values → safety note. No wasted sentences; front-loaded with essential action statement.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex 37-parameter biometric tool with no output schema, description adequately explains return structure by referencing sibling tool fields plus additions. Covers required vs optional fields, units, enums, and safety scope. Would need error handling or rate limit notes for a 5.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage (titles only). Description richly compensates by documenting: timestamp format (ISO 8601), heart_rate units/range (bpm, 30-220), tone enum values (calm|tense|anxious|hostile), sentiment range (-1.0 to 1.0), subject_id hashing requirement, and source_device impact on confidence scoring.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb-resource pair ('Send biometric signals... get unified state'). Distinguishes from sibling get_human_state by noting it 'Returns same fields as get_human_state plus signals_received list and topics_detected', clarifying this is the write/ingest path versus the read path.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear contextual guidance on when to include optional parameters ('For trigger memory... Include subject_id... user_message + ai_response'). Implicitly contrasts with get_human_state via return value description, though lacks explicit 'when not to use' exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
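The ingest review above quotes concrete constraints: ISO 8601 timestamps, heart_rate in bpm (30–220), tone as one of calm|tense|anxious|hostile, sentiment in [-1.0, 1.0]. A sketch of a payload honoring those constraints — field presence rules, the session_id format, and the validator itself are assumptions for illustration only:

```python
# Client-side sketch of an ingest payload using only the constraints the
# description documents. The validator and example identifiers are
# hypothetical; the server's real validation rules may differ.
VALID_TONES = {"calm", "tense", "anxious", "hostile"}

def validate_signal(payload: dict) -> list[str]:
    """Return a list of constraint violations (empty if none)."""
    errors = []
    hr = payload.get("heart_rate")
    if hr is not None and not 30 <= hr <= 220:
        errors.append("heart_rate outside 30-220 bpm")
    tone = payload.get("tone")
    if tone is not None and tone not in VALID_TONES:
        errors.append(f"unknown tone: {tone}")
    sentiment = payload.get("sentiment")
    if sentiment is not None and not -1.0 <= sentiment <= 1.0:
        errors.append("sentiment outside [-1.0, 1.0]")
    return errors

example = {
    "session_id": "demo-session",          # assumed identifier format
    "timestamp": "2025-01-01T12:00:00Z",   # ISO 8601, per the description
    "heart_rate": 72,
    "tone": "calm",
    "sentiment": 0.4,
}
```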

  • get_human_state

    Behavior: 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full behavioral disclosure burden. It explains state meanings, confidence calculation methodology ('based on signal quality and device type'), conditional return fields ('on 2nd+ call'), and self-improvement loops ('Use this to self-improve'). No contradictions with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Perfectly structured with front-loaded purpose, followed by Returns bullet points, usage adaptation rules, and prerequisites. Every sentence provides unique value—no repetition of schema structure or tautology. Dense but scannable.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, so the description comprehensively documents all return fields (state, stress_score, confidence, suggested_action, adaptation_effectiveness) including their formats and semantic meanings. Covers behavioral adaptation, medical disclaimers, and data prerequisites—fully complete for this complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage for the single 'session_id' parameter. While the description mentions 'for a session' and implies the parameter through context, it does not explicitly document what the session_id represents or where to obtain it (beyond the ingest prerequisite). Partial compensation for the schema gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Get') and resource ('unified human state') scoped to 'a session'. It clearly differentiates from sibling tool 'ingest' by stating it 'Requires a prior ingest call to have data', establishing the dependency chain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use ('Call this before generating important responses'), prerequisites ('Requires a prior ingest call'), and detailed response adaptation rules ('calm/relaxed = full complexity...acute_stress = one grounding sentence only'). Also includes necessary disclaimers ('Not a medical device').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
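The get_human_state review above quotes two endpoints of the tool's adaptation rules: 'calm/relaxed = full complexity' and 'acute_stress = one grounding sentence only'. A sketch of that mapping — only the two quoted rules are grounded; intermediate states and the fallback are assumptions:

```python
# Sketch of the response-adaptation rules the description quotes.
# Intermediate states are not quoted in the review, so only the two
# documented endpoints appear here; the fallback is an assumption.
ADAPTATION_RULES = {
    "calm": "full complexity",
    "relaxed": "full complexity",
    "acute_stress": "one grounding sentence only",
}

def adaptation_for(state: str) -> str:
    """Look up the documented rule; fall back conservatively."""
    return ADAPTATION_RULES.get(state, "unknown state: simplify and check in")
```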

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge


Copy to your README.md:

Score Badge


Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) of 1–5, a weighted mean of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
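The formula can be traced with this page's own numbers: the dimension scores in the tool breakdowns above, the stated mean TDQS of 4.4 and minimum of 3.9, and coherence scores of 4, 3, 4, and 3. The function names below are illustrative, not part of any Glama API:

```python
# Worked trace of the published scoring formula using this page's own
# numbers. Function names are illustrative, not a Glama API.

def tdqs(purpose, usage, behavior, params, concise, complete):
    """Tool Definition Quality Score: weighted mean of six dimensions."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * params + 0.10 * concise + 0.10 * complete)

def overall_score(tool_scores, coherence_scores):
    """70% definition quality (60% mean + 40% min TDQS) + 30% coherence."""
    mean_tdqs = sum(tool_scores) / len(tool_scores)
    definition_quality = 0.6 * mean_tdqs + 0.4 * min(tool_scores)
    coherence = sum(coherence_scores) / len(coherence_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    for threshold, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return grade
    return "F"

# The lowest-scoring tool's dimensions from the breakdown above -> 3.9,
# matching the "Lowest: 3.9/5" figure on this page.
lowest = tdqs(purpose=4, usage=3, behavior=4, params=4, concise=5, complete=4)

# Per-tool TDQS values derived the same way, plus coherence (4, 3, 4, 3),
# give roughly 3.99 overall -> tier A.
overall = overall_score([3.9, 4.5, 4.5, 4.7], [4, 3, 4, 3])
```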


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nefesh-ai/nefesh-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.