
decisionnode/DecisionNode (Official)

Server Quality Checklist

Profile completion: 100%. A complete profile improves this server's visibility in search results.

  • Server Coherence

    Disambiguation: 5/5

    Each tool has a clearly distinct purpose: add creates new decisions, get retrieves a specific record by ID, list provides a complete overview, and search enables filtered queries. There is no functional overlap between the four operations.

    Naming Consistency: 5/5

    All tools follow a consistent snake_case verb_noun pattern (add_decision, get_decision, list_decisions, search_decisions). The singular/plural distinction in the noun aligns logically with whether the tool handles individual records or collections.

    Tool Count: 4/5

    Four tools is appropriate for the narrow domain of decision logging, covering creation and retrieval patterns. While functional, the set is minimal and could accommodate additional lifecycle-management tools without becoming excessive.

    Completeness: 4/5

    The surface covers the core read and create operations necessary for a decision log, including mandatory search-before-action workflows. However, it lacks update or delete capabilities, which limits the ability to correct errors or mark decisions as superseded.

  • Average 4.2/5 across 4 of 4 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.4

  • Tools from this server were used 6 times in the last 30 days.

  • This repository includes a glama.json configuration file.

  • This server provides 9 tools.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

Tool Scores

  • list_decisions

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, placing the full disclosure burden on the description. While 'List' implies read-only access, the description fails to specify return format, pagination behavior, sorting order, or the performance implications of retrieving 'all' decisions. This leaves significant behavioral gaps for a data-retrieval tool. (A sketch of what such annotations could look like follows this list.)

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-structured sentences: first declares purpose, second declares usage context. No redundancy or filler. Information density is high with zero waste, and the description is appropriately front-loaded with the action verb.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 2-parameter tool with simple flat schema and no output schema, the description adequately covers intent and usage scenarios. However, given the absence of annotations and output schema, it should ideally describe what the tool returns (e.g., full records vs summaries, data structure) and confirm read-only safety, which it omits.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, with both 'project' and 'scope' parameters fully documented in the schema. The description mentions 'for the project' (aligning with the required parameter) but does not elaborate on the 'scope' filter capability or provide syntax examples beyond what the schema defines. Baseline 3 is appropriate given complete schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('List') and resource ('all recorded decisions') with specific scope ('for the project'). The verb 'List' and the qualifier 'all' effectively distinguish this from its siblings: 'get_decision' (single retrieval), 'search_decisions' (filtered query), and 'add_decision' (creation).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit positive guidance ('Use this when you need a complete overview... or when starting work on a new feature area'), giving clear context for appropriate use. Lacks explicit negative guidance or named alternatives (e.g., when to prefer 'search_decisions' over this), preventing a score of 5.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_decision

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses specific content returned ('rationale and constraints') which hints at output structure, but lacks explicit statements about read-only safety, error handling for invalid IDs, or authentication requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficiently structured sentences with zero waste: the first states the core operation, the second provides the workflow context. Information is front-loaded and every word serves a specific purpose.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema, the description partially compensates by mentioning specific return content ('rationale and constraints'). It adequately covers the 2 parameters with 100% schema coverage and clarifies sibling relationships, though explicit error behavior would strengthen completeness.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 100% description coverage ('Decision ID', 'workspace folder name'), and the description references the ID parameter ('by ID') but does not add syntax details, format examples, or semantic context beyond what the schema already provides. Baseline 3 is appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb-resource pattern ('Get full details of a specific decision') and explicitly distinguishes from sibling tools by referencing search_decisions in the usage workflow, clarifying this is for ID-based retrieval versus searching.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use guidance ('Use this after search_decisions returns relevant results') establishing the tool's position in a two-step workflow (search then get details), effectively guiding the agent away from using it as a primary discovery tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • add_decision

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It mentions persistence ('future context') and implies conflict detection exists (via the 'force' parameter's reference to conflicts), but fails to explain the conflict-resolution flow, idempotency, or what the tool returns when conflicts are detected. Adequate but incomplete for a write operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with bold front-loading ('Call this IMMEDIATELY'), followed by trigger phrases, numbered scenarios, and timing guidance. Every sentence serves a distinct purpose (triggers, scenarios, timing, content). Slightly verbose but no waste given the complexity of the triggering conditions.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 7-parameter schema with 100% coverage and no output schema, the description provides strong contextual completeness for when/why to invoke. Minor gap: could briefly acknowledge the conflict detection behavior implied by the 'force' parameter, though this is partially covered in the schema itself.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While schema coverage is 100%, the description adds semantic value by mapping usage concepts to parameters: 'Focus on WHY' guides the rationale parameter, 'Clear statement' guides the decision parameter, and the trigger scenarios guide the scope parameter. This helps the agent understand not just what parameters exist, but how to populate them based on conversation context.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the tool captures/creates decisions when users express preferences or make technical choices, using specific trigger phrases ('Let's use...', 'Always do...'). It clearly distinguishes this creation tool from siblings (get_decision, list_decisions, search_decisions) which are retrieval operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Excellent explicit guidance: lists specific trigger phrases, enumerates five concrete scenarios (design patterns, architectural choices, coding standards, UI/UX conventions, technology stack), specifies timing ('DURING the conversation, not after'), and content focus ('Focus on WHY, not just WHAT').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_decisions

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden; it successfully conveys the mandatory workflow-gate behavior, the consequences of skipping ('inconsistency and wasted rework'), and conditional logic based on results. Minor gap: it does not describe the return value structure despite no output schema being present.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well front-loaded with critical 'MANDATORY' warning. Structure flows logically: imperative -> triggers -> examples -> result handling. Slightly heavy on ALL CAPS emphasis and repetition ('ANY' appears multiple times), though justified given the tool's gatekeeper role in preventing rework.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a 3-parameter tool with no output schema. Covers workflow integration, cross-tool dependencies (list_projects), and result handling. Would benefit from brief mention of what the return value contains (e.g., 'returns matching decision records') to fully compensate for missing output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema description coverage (baseline 3), the description adds substantial value: concrete query examples ('button styling', 'API design'), extraction logic for the 'project' parameter ('Extract this from the user's active file path'), and a cross-reference to list_projects. It significantly exceeds the baseline.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the tool checks for 'existing conventions' and 'decisions' before code changes. It clearly distinguishes itself from siblings (add_decision, get_decision, list_decisions) by emphasizing its search/query nature and its mandatory pre-flight role in the workflow.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use with exhaustive trigger list ('add a feature, modify code, fix a bug...'), mandatory ordering ('Call this FIRST'), and clear alternative actions ('If no decisions exist, proceed freely; if decisions exist, FOLLOW them'). Also correctly references sibling tool list_projects as prerequisite when project is uncertain.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
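
Several of the Behavior scores above note that no tool annotations are provided. As a point of reference, here is a hedged sketch of what an annotated definition for list_decisions might look like, written as a plain Python dict. The annotation fields (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from the MCP specification's ToolAnnotations; the description text and schema wording are illustrative assumptions, not the server's actual definition.

# Hypothetical annotated definition for list_decisions (illustrative only;
# the MCP ToolAnnotations fields are real, the content is our assumption).
list_decisions_tool = {
    "name": "list_decisions",
    "description": (
        "List all recorded decisions for the project as a JSON array of "
        "full records, sorted newest first. Read-only and safe to repeat. "
        "Use this for a complete overview or when starting work on a new "
        "feature area; prefer search_decisions for targeted lookups."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "project": {"type": "string", "description": "Workspace folder name."},
            "scope": {"type": "string", "description": "Optional scope filter."},
        },
        "required": ["project"],
    },
    "annotations": {
        "readOnlyHint": True,      # no side effects on the decision log
        "destructiveHint": False,  # never deletes or overwrites records
        "idempotentHint": True,    # repeated calls return the same data
        "openWorldHint": False,    # touches only the local decision store
    },
}

Annotations like these would let an agent confirm read-only safety without parsing prose, directly addressing the gaps the Behavior reviews identify.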

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

An embeddable card badge ("DecisionNode MCP server") with a snippet you can copy into your README.md.

Score Badge

An embeddable score badge ("DecisionNode MCP server") with a snippet you can copy into your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
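
To make the weighting concrete, below is a minimal sketch of the calculation in Python. The dimension weights, the 60/40 and 70/30 splits, and the tier cutoffs are taken from this page; the function names and structure are our own, not Glama's actual implementation.

# Minimal sketch of the quality-score calculation described above.
# Weights and tier cutoffs come from this page; everything else is illustrative.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores):
    """Tool Definition Quality Score: weighted mean of six 1-5 dimension scores."""
    return sum(DIMENSION_WEIGHTS[dim] * scores[dim] for dim in DIMENSION_WEIGHTS)

def overall_score(per_tool_scores, coherence):
    """70% definition quality (60% mean TDQS + 40% min TDQS) + 30% coherence."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to a letter tier (B and above is passing)."""
    for cutoff, grade in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return grade
    return "F"

Note how the 40% minimum-TDQS term means a single poorly described tool drags the whole server's definition quality down, exactly as the text above warns.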

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/decisionnode/DecisionNode'
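
The same endpoint can also be called from code. Below is a minimal Python sketch using only the standard library; the endpoint URL is the one shown above, but the response shape is not documented on this page, so the example simply pretty-prints whatever JSON comes back.

# Fetch this server's directory entry from the Glama MCP API and print it.
# The response structure is undocumented here, so we just dump the JSON.
import json
from urllib.request import urlopen

URL = "https://glama.ai/api/mcp/v1/servers/decisionnode/DecisionNode"

with urlopen(URL) as response:
    data = json.load(response)

print(json.dumps(data, indent=2))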

If you have feedback or need assistance with the MCP directory API, please join our Discord server.