Glama

Server Details

Deploy production REST APIs from JSON schemas in seconds. Manage projects, schemas, and deployments.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: rationalbloks/rationalbloks-mcp
GitHub Stars: 1

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

44 tools
bulk_create_graph_nodes: Bulk Create Graph Nodes (Grade A)

Create multiple nodes at once (up to 500 per call). Uses Neo4j UNWIND for high performance.

Essential for knowledge graph population — create hundreds of entities from a single book chapter or article.

Each node needs: entity_id (unique string) and data (properties dict).

Example:

  entity_type: "concept"
  nodes: [
    {"entity_id": "quantum-mechanics-001", "data": {"name": "Quantum Mechanics", "field": "Physics"}},
    {"entity_id": "wave-function-001", "data": {"name": "Wave Function", "field": "Physics"}},
    {"entity_id": "superposition-001", "data": {"name": "Superposition", "field": "Physics"}}
  ]
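A minimal Python sketch of batching around the 500-node limit described above. The project UUID is a placeholder, and `client.call_tool` is a hypothetical stand-in for however your MCP client actually invokes tools:

```python
def batch_nodes(nodes, batch_size=500):
    """Split a node list into chunks that respect the per-call limit."""
    return [nodes[i:i + batch_size] for i in range(0, len(nodes), batch_size)]

# Build 1200 example nodes, more than one call can carry.
nodes = [
    {"entity_id": f"concept-{i:04d}", "data": {"name": f"Concept {i}", "field": "Physics"}}
    for i in range(1200)
]

batches = batch_nodes(nodes)
assert [len(b) for b in batches] == [500, 500, 200]

for batch in batches:
    payload = {
        "project_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
        "entity_type": "concept",
        "environment": "staging",  # optional; staging is the default
        "nodes": batch,
    }
    # client.call_tool("bulk_create_graph_nodes", payload)  # hypothetical client call
```

The description does not say how partial failures are handled, so smaller batches may make retries cheaper.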

Parameters (JSON Schema):
  nodes (required): List of nodes. Each: {entity_id: string, data: {properties}}
  project_id (required): Project ID (UUID)
  entity_type (required): Entity key for all nodes
  environment (optional): staging or production (default: staging)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral constraints beyond annotations: the 500-node limit and UNWIND implementation detail (performance characteristics). Annotations cover idempotentHint=false and readOnlyHint=false, so description doesn't need to restate safety profile. Could improve by mentioning partial failure behavior or collision handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured: opens with limits/capacity (500), follows with use case context, then parameter requirements, closes with concrete example. Every sentence serves distinct purpose. No redundancy with schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high schema coverage and good annotations, description successfully covers batch constraints, performance characteristics, and provides working example. Only gap is lack of output description (success confirmation format, partial failure indicators) since no output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (baseline 3), description adds meaningful constraints: entity_id must be 'unique string' and data contains 'properties dict'—semantics not explicit in raw schema. Concrete JSON example provides syntax guidance beyond schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states 'Create multiple nodes at once (up to 500 per call)' with specific verb, resource, and scope constraints. Distinguishes from sibling 'create_graph_node' through 'bulk' language, quantity limit, and mention of 'Neo4j UNWIND for high performance.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear contextual guidance with 'Essential for knowledge graph population — create hundreds of entities from a single book chapter or article,' indicating the bulk use case. However, lacks explicit contrast with single-node creation (e.g., 'use when creating more than X nodes') or warnings about non-idempotent behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bulk_create_graph_relationships: Bulk Create Graph Relationships (Grade A)

Create multiple relationships at once (up to 500 per call). Uses Neo4j UNWIND for high performance.

Essential for connecting knowledge — link hundreds of concepts, people, and events in one operation.

Each relationship needs: from_id, to_id, and optional data (properties).

Example:

  rel_type: "related_to"
  relationships: [
    {"from_id": "quantum-mechanics-001", "to_id": "wave-function-001", "data": {"strength": "strong"}},
    {"from_id": "quantum-mechanics-001", "to_id": "superposition-001", "data": {"strength": "strong"}}
  ]
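A short Python sketch of pre-validating a payload against the constraint that from_id/to_id must reference existing nodes. `known_ids` is an assumed bookkeeping set you would populate from earlier node creation:

```python
# IDs of nodes we have already created (assumed bookkeeping, not a tool feature).
known_ids = {"quantum-mechanics-001", "wave-function-001", "superposition-001"}

relationships = [
    {"from_id": "quantum-mechanics-001", "to_id": "wave-function-001", "data": {"strength": "strong"}},
    {"from_id": "quantum-mechanics-001", "to_id": "superposition-001"},  # data is optional
]

# Collect any endpoint IDs that do not correspond to a known node.
missing = [
    r[key]
    for r in relationships
    for key in ("from_id", "to_id")
    if r[key] not in known_ids
]
assert not missing, f"unknown node ids: {missing}"
assert len(relationships) <= 500  # per-call limit from the tool description
```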

Parameters (JSON Schema):
  rel_type (required): Relationship key for all relationships
  project_id (required): Project ID (UUID)
  environment (optional): staging or production (default: staging)
  relationships (required): List of relationships. Each: {from_id, to_id, data?}

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations: specifies the 500-item limit, discloses implementation method ('Uses Neo4j UNWIND'), and characterizes performance ('high performance'). Annotations indicate it's a non-destructive write (destructiveHint: false, readOnlyHint: false), which the description's 'Create' verb aligns with; no contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: opens with capacity limit and performance, follows with use case value proposition, then parameter requirements, then concrete example. Every sentence earns its place; no redundancy with schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for input requirements given the complexity (nested array parameter). Specifies the critical 500-item constraint. No output schema exists; while return value description would be nice, the input documentation is complete enough for successful invocation. Does not mention partial failure behavior, which prevents a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds meaning via the concrete JSON example showing exact structure of the `relationships` array items, and clarifies that `data` contains 'properties' and is optional. This example bridges the gap between schema definition and actual invocation format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Create') + resource ('relationships') + scope ('multiple at once', 'up to 500'). Explicitly distinguishes from sibling `create_graph_relationship` via bulk terminology and volume limits, and from `bulk_create_graph_nodes` by specifying relationships not nodes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Indicates clear context for use ('Essential for connecting knowledge', 'link hundreds of concepts... in one operation' and 'up to 500 per call'). Lacks explicit 'when-not-to-use' contrast with singular `create_graph_relationship`, but the volume constraint implicitly guides selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_graph_node: Create Graph Node (Grade A)

Create a single node in a deployed graph project.

REQUIRES: Project must be deployed (use deploy_graph_staging first).

The entity_type must match an entity key from the project schema. Use get_graph_data_schema to see available entity types and their fields.

Example:

  entity_type: "person"
  entity_id: "alan-turing-001"
  data: {"name": "Alan Turing", "birth_year": 1912, "field": "Computer Science"}

The entity_id is your unique identifier — use meaningful IDs for knowledge graphs.
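The example above can be assembled as a payload dict. A hedged sketch in Python; the UUID is a placeholder, and the actual invocation depends on your MCP client:

```python
payload = {
    "project_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "entity_type": "person",         # must match an entity key in the schema
    "entity_id": "alan-turing-001",  # meaningful, unique identifier
    "data": {"name": "Alan Turing", "birth_year": 1912, "field": "Computer Science"},
    # "environment": "staging",      # optional; staging is the default
}

# All four required parameters are present.
required = {"project_id", "entity_type", "entity_id", "data"}
assert required <= payload.keys()
```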

Parameters (JSON Schema):
  data (required): Node properties matching the entity schema
  entity_id (required): Unique identifier for the node
  project_id (required): Project ID (UUID)
  entity_type (required): Entity key (e.g., 'person', 'concept')
  environment (optional): staging or production (default: staging)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral constraints beyond annotations: requires deployment state, enforces schema validation ('entity_type must match'), and explains ID uniqueness semantics ('entity_id is your unique identifier'). Annotations cover read/write/destructive hints; description adds deployment and schema constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient structure: purpose → prerequisites → constraints → example → ID guidance. Every sentence earns its place. REQUIRES label clearly flags dependencies. Example JSON is illustrative without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of the creation workflow (deploy → schema → create) given no output schema exists. Explains validation rules and ID strategy. Minor gap: doesn't describe success behavior or error modes (e.g., duplicate entity_id handling).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (baseline 3), description adds concrete examples (entity_type: 'person', entity_id: 'alan-turing-001') and semantic guidance ('use meaningful IDs for knowledge graphs'). Explains that data properties must match entity schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Create' + resource 'single node' + context 'in a deployed graph project'. Explicitly contrasts with sibling tools by emphasizing 'single' (vs bulk_create_graph_nodes) and requiring deployment (vs create_graph_project).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite 'Project must be deployed (use deploy_graph_staging first)' and dependency 'Use get_graph_data_schema to see available entity types'. Clear workflow guidance. Minor gap: doesn't explicitly recommend bulk_create_graph_nodes for multiple nodes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_graph_project: Create Graph Project (Grade A)

Create a new Neo4j graph database project from a hierarchical JSON schema.

⚠️ GRAPH SCHEMA FORMAT — READ BEFORE CREATING:

Graph schemas define nodes (entities) and relationships, NOT flat database tables. Each field is a dict with "type" and optional "required": true (defaults to false).

SCHEMA STRUCTURE:

  {
    "nodes": {
      "EntityName": {
        "description": "What this entity represents",
        "flat_labels": ["AdditionalLabel"],
        "schema": {
          "field_name": {"type": "string", "required": true},
          "other_field": {"type": "integer"}
        }
      }
    },
    "relationships": {
      "RELATIONSHIP_TYPE": {
        "from": "EntityName",
        "to": "OtherEntity",
        "cardinality": "MANY_TO_MANY",
        "data_schema": {
          "field_name": {"type": "date"}
        }
      }
    }
  }

FIELD TYPES: string, integer, float, boolean, date, json

CARDINALITY OPTIONS: ONE_TO_ONE, ONE_TO_MANY, MANY_TO_ONE, MANY_TO_MANY

HIERARCHICAL NODES: Nest entities inside parent entities to create type hierarchies. Child entities inherit parent labels automatically.

Example: { "nodes": { "Animal": { "description": "Base animal entity", "flat_labels": ["LivingThing"], "schema": { "name": {"type": "string", "required": true}, "habitat": {"type": "string"} }, "Dog": { "description": "A dog (inherits Animal labels)", "flat_labels": ["Pet"], "schema": { "breed": {"type": "string", "required": true}, "trained": {"type": "boolean"} } } } }, "relationships": { "OWNS": { "from": "Person", "to": "Animal", "cardinality": "ONE_TO_MANY" } } }

RULES:

  1. "nodes" key is REQUIRED — must contain at least one entity

  2. Each entity needs "description" and "schema" with field definitions

  3. Each field is {"type": "...", "required": true/false} — required defaults to false

  4. Relationship "from"/"to" must reference defined node names

  5. Relationship types should be UPPER_SNAKE_CASE

  6. Entity names should be PascalCase

  7. Automatic fields (id, created_at, updated_at) are NOT needed

  8. Use get_graph_template_schemas FIRST to see valid examples

WORKFLOW:

  1. Use get_graph_template_schemas to see valid examples

  2. Create schema following the rules above

  3. Call this tool

  4. Monitor with get_job_status (2-5 min deployment)

After creation, use get_job_status with returned job_id to monitor deployment.
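The schema rules above can be sanity-checked before calling the tool. A minimal Python sketch; the entity and relationship names (Person, Paper, AUTHORED) are illustrative choices, not part of the tool's API:

```python
schema = {
    "nodes": {
        "Person": {  # PascalCase entity name (rule 6)
            "description": "A person in the knowledge graph",
            "schema": {
                "name": {"type": "string", "required": True},
                "birth_year": {"type": "integer"},  # required defaults to false (rule 3)
            },
        },
        "Paper": {
            "description": "A published paper",
            "schema": {"title": {"type": "string", "required": True}},
        },
    },
    "relationships": {
        "AUTHORED": {  # UPPER_SNAKE_CASE relationship type (rule 5)
            "from": "Person",  # must reference a defined node name (rule 4)
            "to": "Paper",
            "cardinality": "ONE_TO_MANY",
            "data_schema": {"year": {"type": "integer"}},
        },
    },
}

# Checks mirroring rules 1, 2, and 4 above.
assert schema["nodes"], '"nodes" is required and must not be empty'
for name, entity in schema["nodes"].items():
    assert name[0].isupper() and "description" in entity and "schema" in entity
for rel in schema["relationships"].values():
    assert rel["from"] in schema["nodes"] and rel["to"] in schema["nodes"]
```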

Parameters (JSON Schema):
  name (required): Project name
  schema (required): Graph schema with 'nodes' and optionally 'relationships' keys. Use get_graph_template_schemas to see valid examples.
  description (optional): Optional project description

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains the 2-5 minute deployment time, the monitoring requirement via get_job_status, and provides detailed schema format rules. Annotations already indicate this is a non-destructive, non-idempotent, open-world write operation, but the description enriches this with practical deployment details and schema constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core purpose, but it's quite lengthy with extensive schema documentation that might be better placed in a separate guide. While all content is relevant, it could be more concise by focusing on usage rather than duplicating schema specification details that might belong in parameter documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of creating a graph database project, the description provides complete guidance including prerequisites, schema format rules, examples, workflow steps, and post-creation monitoring. It compensates for the lack of output schema by explaining what happens after creation (returns job_id for monitoring). The combination of detailed schema explanation and workflow guidance makes it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds substantial value by explaining the complex 'schema' parameter structure in detail, including field types, cardinality options, hierarchical nodes, and providing a comprehensive example. However, it doesn't add specific meaning to the 'name' or 'description' parameters beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a new Neo4j graph database project from a hierarchical JSON schema, specifying both the verb ('create') and resource ('Neo4j graph database project'). It distinguishes from siblings like 'create_project' by specifying it's for graph databases, and from 'create_graph_node' by creating entire projects rather than individual nodes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance: 'Use get_graph_template_schemas FIRST to see valid examples' and 'After creation, use get_job_status with returned job_id to monitor deployment.' It also distinguishes when to use this tool versus alternatives by specifying it's for creating projects from schemas, not for creating individual nodes/relationships or deploying projects.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_graph_relationship: Create Graph Relationship (Grade A)

Create a relationship between two nodes in a deployed graph project.

The rel_type must match a relationship key from the project schema. Use get_graph_data_schema to see available relationship types.

Example:

  rel_type: "authored"
  from_id: "alan-turing-001"
  to_id: "on-computable-numbers-001"
  data: {"year": 1936}

The from_id and to_id must be entity_ids of existing nodes.
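As a sketch, the example's payload shape in Python (values taken from the example above; how you actually send it depends on your MCP client):

```python
payload = {
    "project_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "rel_type": "authored",                  # must exist in the project schema
    "from_id": "alan-turing-001",            # both nodes must already exist
    "to_id": "on-computable-numbers-001",
    "data": {"year": 1936},                  # optional relationship properties
    # "environment": "staging",              # optional; staging is the default
}

assert {"project_id", "rel_type", "from_id", "to_id"} <= payload.keys()
```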

Parameters (JSON Schema):
  data (optional): Relationship properties
  to_id (required): Target node entity_id
  from_id (required): Source node entity_id
  rel_type (required): Relationship key (e.g., 'authored', 'related_to')
  project_id (required): Project ID (UUID)
  environment (optional): staging or production (default: staging)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide the safety profile (readOnly=false, destructive=false), while the description adds critical behavioral context: the graph must be 'deployed' and target nodes must exist (validation constraints). It implies write-once semantics consistent with idempotentHint=false without explicitly stating it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured: purpose first, schema constraint second, helper tool reference third, concrete example fourth, validation rule last. Every sentence delivers unique actionable information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 6-parameter creation tool with no output schema, the description adequately covers input prerequisites and validation rules. Minor gap: doesn't describe the return value or error conditions when validation fails, though this is partially mitigated by the openWorldHint annotation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), the description adds semantic value through the concrete example showing realistic ID formats and relationship types, plus crucial constraints (rel_type schema validation, existing entity_id requirement) that explain parameter interdependencies beyond isolated field definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb-resource combination ('Create a relationship between two nodes') and specifies the context ('in a deployed graph project'), clearly distinguishing it from sibling tools like create_graph_node or bulk_create_graph_relationships.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite constraint that 'rel_type must match a relationship key from the project schema' and directly references the sibling tool get_graph_data_schema for lookup. Also clarifies that from_id/to_id must be existing entity_ids, effectively defining when-not-to-use (when nodes don't exist).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_project: Create Project (Grade A)

Create a new RationalBloks project from a JSON schema.

⚠️ CRITICAL RULES - READ BEFORE CREATING SCHEMA:

  1. FLAT FORMAT (REQUIRED):
     ✅ CORRECT: {users: {email: {type: "string", max_length: 255}}}
     ❌ WRONG: {users: {fields: {email: {type: "string"}}}}
     DO NOT nest under a 'fields' key!

  2. FIELD TYPE REQUIREMENTS:
     • string: MUST have "max_length" (e.g., max_length: 255)
     • decimal: MUST have "precision" and "scale" (e.g., precision: 10, scale: 2)
     • datetime: Use "datetime" NOT "timestamp"
     • ALL fields: MUST have a "type" property

  3. AUTOMATIC FIELDS (DON'T define):
     • id (uuid, primary key)
     • created_at (datetime)
     • updated_at (datetime)

  4. USER AUTHENTICATION:
     ❌ NEVER create "users", "customers", "employees" tables with email/password
     ✅ USE the built-in app_users table

     Example:
       {
         "employee_profiles": {
           "user_id": {type: "uuid", foreign_key: "app_users.id", required: true},
           "department": {type: "string", max_length: 100}
         }
       }

  5. AUTHORIZATION: Add a user_id → app_users.id foreign key to enable "only see your own data" access rules.

     Example:
       {
         "orders": {
           "user_id": {type: "uuid", foreign_key: "app_users.id"},
           "total": {type: "decimal", precision: 10, scale: 2}
         }
       }

  6. FIELD OPTIONS:
     • required: true/false
     • unique: true/false
     • default: any value
     • enum: ["val1", "val2"]
     • foreign_key: "table.id"

AVAILABLE TYPES: string, text, integer, decimal, boolean, uuid, date, datetime, json, uuid_array, integer_array, text_array, float_array

Array types store PostgreSQL native arrays with automatic GIN indexing:
  • uuid_array: UUID[] — for sets of references (e.g., tensor coordinates)
  • integer_array: BIGINT[] — for dimension indices, integer sets
  • text_array: TEXT[] — for tags, categories, label sets
  • float_array: DOUBLE PRECISION[] — for weight vectors, scores

GIN-indexed operators: @> (contains), <@ (contained_by), && (overlaps)

BACKEND ENGINE:
  • python (default): FastAPI backend — mature, full-featured
  • rust: Axum backend — faster cold starts, lower memory, high performance

WORKFLOW:

  1. Use get_template_schemas FIRST to see valid examples

  2. Create schema following ALL rules above

  3. Call this tool (optionally choose backend_type: "python" or "rust")

  4. Monitor with get_job_status (2-5 min deployment)

After creation, use get_job_status with returned job_id to monitor deployment.
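The flat-format rules above can be checked mechanically before submission. A minimal Python sketch with a hypothetical orders table:

```python
schema = {
    "orders": {
        "user_id": {"type": "uuid", "foreign_key": "app_users.id", "required": True},
        "status": {"type": "string", "max_length": 50, "enum": ["pending", "paid", "shipped"]},
        "total": {"type": "decimal", "precision": 10, "scale": 2},
        "placed_at": {"type": "datetime"},  # "datetime", never "timestamp"
        "tags": {"type": "text_array"},     # PostgreSQL TEXT[] with GIN indexing
    },
    # id, created_at, updated_at are automatic and deliberately omitted.
}

# Checks mirroring rules 1 and 2 above.
for table, fields in schema.items():
    assert "fields" not in fields, "flat format: never nest under a 'fields' key"
    for props in fields.values():
        assert "type" in props
        if props["type"] == "string":
            assert "max_length" in props
        if props["type"] == "decimal":
            assert {"precision", "scale"} <= props.keys()
```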

Parameters (JSON Schema):
  name (required): Project name
  schema (required): JSON schema in FLAT format (table_name → field_name → properties). Every field MUST have a 'type' property. Use get_template_schemas to see valid examples.
  description (optional): Optional project description
  backend_type (optional): Backend engine: 'python' (FastAPI, default) or 'rust' (Axum, faster). Default: python

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it details critical rules (e.g., flat format, field type requirements, automatic fields), warns against creating certain tables, explains authorization needs (user_id linking), describes backend options (python/rust), and notes deployment time (2-5 minutes). Annotations cover basic hints (e.g., not read-only), but the description enriches this with practical constraints and outcomes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the purpose but becomes lengthy due to extensive rules and examples. While all content is relevant, it could be more streamlined; for instance, some details (e.g., array type explanations) might be condensed. However, it avoids redundancy and each section serves a clear purpose in guiding usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (creating projects with JSON schemas), no output schema, and rich annotations, the description is highly complete. It covers prerequisites (use get_template_schemas), detailed input requirements, behavioral rules, backend options, workflow steps, and post-creation actions (monitoring with get_job_status), leaving no significant gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by elaborating on the 'schema' parameter with detailed rules (e.g., flat format, field types, automatic fields) and referencing get_template_schemas for examples. It also clarifies 'backend_type' options and default, though it doesn't deeply explain 'name' or 'description' beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Create a new RationalBloks project from a JSON schema.' It specifies the verb ('Create'), resource ('RationalBloks project'), and input source ('JSON schema'), distinguishing it from siblings like create_graph_project or update_schema by focusing on project creation from schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: it instructs to 'Use get_template_schemas FIRST to see valid examples' before calling this tool, specifies a workflow (steps 1-4), mentions monitoring with get_job_status, and implicitly distinguishes from alternatives by detailing schema rules unique to this tool, such as avoiding nested 'fields' keys.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_graph_node: Delete Graph Node (Grade A, Destructive)

Delete a node and all its relationships from a deployed graph project. ⚠️ This also removes all relationships connected to this node (DETACH DELETE).

Parameters (JSON Schema):
  entity_id (required): The node's entity_id
  project_id (required): Project ID (UUID)
  entity_type (required): Entity key (e.g., 'person', 'concept')
  environment (optional): staging or production (default: staging)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral detail beyond annotations: explicitly clarifies the 'DETACH DELETE' cascade behavior (removing all connected relationships) which elaborates on destructiveHint=true. Mentions 'deployed' context hinting at openWorldHint. Could improve by noting idempotent behavior (calling twice fails) or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Optimal structure: two sentences with zero waste. Front-loaded with action and target; second sentence provides essential warning with visual emoji indicator. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a destructive mutation tool with cascade effects. Covers the essential cascade warning that output schema omission would require. Could marginally improve by noting error behavior on duplicate deletion (idempotent=false) or specific environment impacts.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds minimal param semantics beyond schema ('deployed graph project' contextualizes project_id), but does not explain parameter syntax, valid entity_type values, or environment implications beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states the verb (Delete), resource (node), scope (all its relationships), and context (deployed graph project). The cascade behavior distinguishes it from sibling `delete_graph_relationship` (which only removes connections) and `update_graph_node`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context that this operates on 'deployed graph project' and includes the ⚠️ warning implying destructive, irreversible usage. Lacks explicit 'when to use X instead' sibling comparisons, but the cascade warning implies the tool's specific scope.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_graph_project: Delete Graph Project (Grade A)
Destructive

Delete a graph project (removes GitHub repo, K8s deployments, Neo4j database, and credentials)

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true, readOnlyHint=false, and idempotentHint=false. The description adds valuable context beyond annotations by specifying what components get destroyed (GitHub repo, K8s deployments, Neo4j database, credentials), which helps the agent understand the scope of destruction. It doesn't mention rate limits, auth needs, or confirmation prompts, but adds meaningful behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and provides essential context about what gets removed. Every word earns its place with no wasted verbiage or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no output schema, the description provides good context about what gets removed, complementing the annotations. It could be more complete by mentioning irreversible consequences or confirmation requirements, but it's substantially helpful given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with one parameter (project_id) fully documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Delete') and resource ('graph project'), and distinguishes it from siblings by specifying what gets removed (GitHub repo, K8s deployments, Neo4j database, credentials). It goes beyond just restating the name/title.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by listing what gets removed, suggesting this is a comprehensive cleanup operation. However, it doesn't explicitly state when to use this vs. alternatives like 'delete_project' or 'rollback_graph_project', nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_graph_relationship: Delete Graph Relationship (Grade A)
Destructive

Delete a specific relationship by its internal ID. Use get_node_relationships to find relationship IDs.
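The prerequisite chain the description names (list relationships, then delete by internal id) can be sketched as follows. Both stores here are stand-ins for the deployed graph; the function names mirror the tool names, but the bodies are hypothetical.

```python
# Hedged sketch of the lookup-then-delete workflow: obtain internal rel ids
# via a get_node_relationships-style call, then delete one by id.

rels = {101: {"type": "CITES", "from": "a", "to": "b"},
        102: {"type": "CITES", "from": "a", "to": "c"}}

def get_node_relationships(entity_id):
    # Return every relationship touching the node, tagged with its internal id.
    return [{"rel_id": rid, **r} for rid, r in rels.items()
            if entity_id in (r["from"], r["to"])]

def delete_graph_relationship(rel_id):
    if rel_id not in rels:            # not idempotent: a second call fails
        raise KeyError(rel_id)
    del rels[rel_id]

target = get_node_relationships("c")[0]["rel_id"]
delete_graph_relationship(target)
remaining = sorted(rels)
```

Because the id is internal (not user-assigned), agents cannot guess it; the lookup step is mandatory, which is exactly why the description names the sibling tool.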

Parameters (JSON Schema)
Name | Required | Description | Default
rel_id | Yes | Internal relationship ID (from get_node_relationships) | -
rel_type | Yes | Relationship key | -
project_id | Yes | Project ID (UUID) | -
environment | No | Environment: staging or production | staging
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true, readOnlyHint=false, and idempotentHint=false. Description adds valuable workflow context (IDs must come from get_node_relationships) but doesn't elaborate on mutation effects (e.g., impact on connected nodes, permanent vs. soft delete) beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes the operation, second establishes the prerequisite. Front-loaded with essential action and immediately actionable guidance. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Sufficient for a destructive single-resource deletion tool. Mentions critical prerequisite workflow. Annotations cover safety profile (destructive, open-world). No output schema present, but none required for a delete operation. Could benefit from mention of idempotency behavior or error cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description reinforces that rel_id is an 'internal ID' from get_node_relationships, adding workflow context, but doesn't add semantic detail beyond the comprehensive schema descriptions already present.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (Delete) + resource (relationship) + scope (by internal ID). Clearly distinguishes from siblings like delete_graph_node or delete_project by specifying this operates on relationship entities using their internal identifiers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite workflow: 'Use get_node_relationships to find relationship IDs.' This establishes a clear dependency chain. Lacks explicit 'when-not-to-use' guidance or alternative deletion methods (e.g., bulk deletion), but names the specific prerequisite tool effectively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_project: Delete Project (Grade A)
Destructive

Delete a project (removes GitHub repo, K8s deployments, and database)

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint=true, readOnlyHint=false, etc., but the description adds valuable context by specifying what gets destroyed (GitHub repo, K8s deployments, database). It doesn't contradict annotations and provides concrete behavioral details beyond the structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and details the scope of destruction. Every word earns its place with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (destructive operation) and lack of output schema, the description is somewhat complete but could improve by addressing return values or error conditions. Annotations cover safety aspects, but more behavioral context (e.g., irreversible nature, permissions needed) would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_id' fully documented in the schema. The description doesn't add any parameter-specific semantics beyond what the schema provides, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete' and resource 'project', specifying it removes GitHub repo, K8s deployments, and database. This distinguishes it from sibling tools like 'delete_graph_project' or 'rename_project' by detailing the comprehensive destruction involved.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. While the description implies it's for permanent deletion, it doesn't mention prerequisites, confirmations, or compare to tools like 'rollback_project' or 'delete_graph_project' for partial removals.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deploy_graph_production: Deploy Graph to Production (Grade A)

Promote graph staging to production. Creates a separate production Neo4j instance with its own credentials and database. Requires paid plan.

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-readOnly, non-destructive, non-idempotent operation with openWorldHint. The description adds valuable context beyond annotations by specifying that it 'Creates a separate production Neo4j instance with its own credentials and database', which clarifies the creation behavior and resource implications not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action in the first sentence, followed by essential details in the second, with no wasted words. Every sentence earns its place by providing critical information efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (deployment operation with resource creation), annotations cover safety aspects, but there is no output schema. The description adequately explains the action and prerequisites, though it could benefit from mentioning potential outcomes or errors to enhance completeness for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'project_id', which is well-documented in the schema. The description does not add any additional meaning or details about the parameter beyond what the schema provides, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Promote graph staging to production') and resource ('graph'), distinguishing it from siblings like 'deploy_graph_staging' or 'deploy_production' by specifying it creates a separate Neo4j instance with credentials and database.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear context by stating 'Requires paid plan' as a prerequisite, which helps guide when to use it. However, it does not explicitly mention when not to use it or name alternatives among siblings, such as 'deploy_graph_staging' for non-production deployments.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deploy_graph_staging: Deploy Graph to Staging (Grade A)

Deploy a graph project to the staging environment. This triggers: (1) Schema validation, (2) Neo4j entity code generation, (3) Docker image build, (4) GitHub commit, (5) Kubernetes deployment with Neo4j instance. The operation is ASYNCHRONOUS — returns immediately with a job_id. Use get_job_status to monitor progress. Deployment typically takes 2-5 minutes. Use get_graph_project_info to verify deployment succeeded.
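The asynchronous contract above (return a job_id immediately, poll get_job_status until done) can be sketched as a client-side loop. `call_tool` below is a hypothetical stand-in for a real MCP client and is stubbed so the loop is runnable; only the tool names come from the description.

```python
import time

# Hedged sketch of the async deployment pattern: deploy returns a job_id
# immediately; the caller polls get_job_status until a terminal state.

_states = iter(["pending", "running", "running", "succeeded"])

def call_tool(name, **params):
    # Stub standing in for an MCP client call; real calls go to the server.
    if name == "deploy_graph_staging":
        return {"job_id": "job-123"}
    if name == "get_job_status":
        return {"status": next(_states)}
    raise ValueError(f"unknown tool: {name}")

def deploy_and_wait(project_id, poll_seconds=0, max_polls=100):
    job = call_tool("deploy_graph_staging", project_id=project_id)
    for _ in range(max_polls):        # real deployments take ~2-5 minutes
        status = call_tool("get_job_status", job_id=job["job_id"])["status"]
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_seconds)
    return "timeout"

result = deploy_and_wait("00000000-0000-0000-0000-000000000000")
```

In practice `poll_seconds` would be tens of seconds given the stated 2-5 minute deployment window; it is zero here only to keep the stubbed loop fast.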

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies that the operation is asynchronous, returns a job_id, and typically takes 2-5 minutes. Annotations cover read/write and idempotency hints, but the description enriches this with practical execution details. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific triggers and usage guidelines in a structured list. Every sentence adds value, with no wasted words, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the deployment process, the description is largely complete: it explains the asynchronous nature, monitoring, and verification steps. However, with no output schema, it could benefit from detailing the return format (e.g., job_id structure), but the guidance to use 'get_job_status' mitigates this gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_id' well-documented in the schema. The description does not add meaning beyond the schema, as it doesn't explain parameter usage or constraints. Baseline score of 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Deploy a graph project to the staging environment') and distinguishes it from sibling tools like 'deploy_graph_production' and 'deploy_production' by specifying the staging environment. It provides a detailed breakdown of what the deployment triggers, making the purpose explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Deploy a graph project to the staging environment') and provides clear alternatives for monitoring progress ('Use get_job_status to monitor progress') and verification ('Use get_graph_project_info to verify deployment succeeded'). It also distinguishes from production deployment tools among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deploy_production: Deploy to Production (Grade B)

Promote staging to production (requires paid plan)

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (readOnlyHint=false, destructiveHint=false, etc.), and the description adds the paid plan requirement, which is useful context not in annotations. However, it doesn't elaborate on behavioral aspects like what 'promote' entails (e.g., downtime, rollback options) or rate limits, leaving room for more detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and includes a key constraint, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a production deployment), lack of output schema, and rich annotations, the description is minimal but adequate. It covers the main action and a constraint, but could benefit from more detail on outcomes or error handling to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the single parameter (project_id). The description doesn't add any parameter-specific details beyond what's in the schema, meeting the baseline for high coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Promote staging to production') and resource (implied project), making the purpose understandable. However, it doesn't explicitly distinguish this tool from its sibling 'deploy_graph_production' or 'deploy_staging', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a prerequisite ('requires paid plan'), which provides some context, but offers no guidance on when to use this tool versus alternatives like 'deploy_staging' or 'deploy_graph_production'. There's no explicit when/when-not or comparison to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

deploy_staging: Deploy to Staging (Grade A)

Deploy a project to the staging environment. This triggers: (1) Schema validation, (2) Docker image build, (3) GitHub commit, (4) Kubernetes deployment, (5) Database migrations. The operation is ASYNCHRONOUS - it returns immediately with a job_id. Use get_job_status with the job_id to monitor progress. Deployment typically takes 2-5 minutes depending on schema complexity. If deployment fails, check: (1) Schema format is FLAT (no 'fields' nesting), (2) Every field has a 'type' property, (3) Foreign keys reference existing tables, (4) No PostgreSQL reserved words in table/field names. Use get_project_info to see if the deployment succeeded.
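The four failure checks listed above lend themselves to a client-side pre-flight validator. The schema shape assumed here (table name mapping to a dict of field specs with a "type" key) is an illustration inferred from the checks, not the service's documented format, and the reserved-word list is a small sample.

```python
# Hedged sketch: pre-flight checks mirroring the four failure causes the
# description lists: flat format, every field typed, FKs resolve, and no
# PostgreSQL reserved words as table/field names.

PG_RESERVED = {"user", "table", "select", "order", "group"}  # small sample

def preflight(schema: dict) -> list:
    errors = []
    for table, fields in schema.items():
        if table.lower() in PG_RESERVED:
            errors.append(f"reserved word used as table name: {table}")
        for name, spec in fields.items():
            if name.lower() in PG_RESERVED:
                errors.append(f"reserved word used as field name: {name}")
            if not isinstance(spec, dict) or "fields" in spec:
                errors.append(f"{table}.{name}: schema must be FLAT")
            elif "type" not in spec:
                errors.append(f"{table}.{name}: missing 'type'")
            elif spec.get("references") and spec["references"] not in schema:
                errors.append(f"{table}.{name}: FK to unknown table")
    return errors

schema = {
    "books": {"title": {"type": "string"},
              "author_id": {"type": "uuid", "references": "authors"}},
}
errs = preflight(schema)   # the "authors" table is missing, so the FK fails
```

Running checks like these before calling deploy_staging turns a 2-5 minute failed deployment into an instant local error.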

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it discloses that the operation is asynchronous (returns a job_id), typical duration (2-5 minutes), failure conditions (schema validation rules), and monitoring requirements. Annotations cover basic hints (e.g., not read-only), but the description enriches this with practical operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: it starts with the core action, lists triggered steps, explains asynchronous behavior and monitoring, provides timing, and outlines failure checks. Every sentence adds value without redundancy, making it front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (deployment with multiple steps) and lack of output schema, the description comprehensively covers behavior, usage, monitoring, and failure conditions. It compensates for missing structured output details by explaining the asynchronous response and how to track results, making it complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'project_id' fully documented in the schema as a UUID. The description does not add any additional meaning or context about this parameter beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Deploy a project to the staging environment') and distinguishes it from siblings like 'deploy_production' and 'deploy_graph_staging' by specifying the target environment. It lists the exact sequence of operations triggered, making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (for staging deployments) and when to use alternatives: it names 'get_job_status' for monitoring progress and 'get_project_info' to check success. It also lists prerequisites for successful deployment (e.g., schema format rules), offering clear context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fulltext_search_graph: Full-Text Search Graph (Grade A)
Read-only, Idempotent

Search across ALL string properties of ALL nodes in a deployed graph using free-text queries.

Unlike search_graph_nodes (which filters by specific property), this searches every text field at once. Perfect for finding knowledge when you don't know which property contains the answer.

Example: query "quantum" searches name, description, summary, notes, and all other string fields. Returns nodes with _match_fields showing which properties matched.

Optionally filter by entity_type to narrow results.
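The contract described above (case-insensitive scan over every string property, hits annotated with _match_fields) can be mirrored with a small in-memory sketch. The real tool runs server-side against Neo4j; this version only illustrates the return shape, and the node structure is an assumption.

```python
# Hedged sketch of the described search behavior: match the query against
# every string property, and report which fields matched via _match_fields.

def fulltext_search(nodes, query, entity_type=None):
    q = query.lower()
    hits = []
    for node in nodes:
        if entity_type and node.get("entity_type") != entity_type:
            continue
        matched = [k for k, v in node["data"].items()
                   if isinstance(v, str) and q in v.lower()]
        if matched:
            hits.append({**node, "_match_fields": matched})
    return hits

nodes = [
    {"entity_type": "concept",
     "data": {"name": "Quantum Mechanics", "notes": "wave functions"}},
    {"entity_type": "person",
     "data": {"name": "Niels Bohr", "summary": "quantum pioneer"}},
]
hits = fulltext_search(nodes, "quantum")
# Both nodes match, each through a different property.
```

The _match_fields annotation is what makes this tool useful when the agent does not know which property holds the answer: it reveals where the hit came from.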

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results (max: 500) | 50
query | Yes | Search text (case-insensitive, min 2 chars) | -
offset | No | Pagination offset | 0
project_id | Yes | Project ID (UUID) | -
entity_type | No | Entity key to filter by (optional; omit to search all types) | -
environment | No | Environment: staging or production | staging
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and destructiveHint=false; description adds valuable behavioral context beyond these hints by disclosing return structure ('Returns nodes with _match_fields showing which properties matched') and clarifying the exhaustive search scope ('every text field at once'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with zero waste: opens with core action, immediately contrasts with sibling, states use case, provides concrete example, and closes with optional filter note. Every sentence earns its place; no redundancy with structured schema/annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 simple parameters (100% schema coverage) and rich annotations, the description achieves completeness by compensating for missing output schema with explicit mention of the '_match_fields' return behavior. Could enhance further by noting result ranking behavior, but adequately covers tool behavior for selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, satisfying baseline requirements. Description adds semantic meaning beyond schema by explaining how the query parameter behaves (free-text search across all fields simultaneously) and contextualizing the entity_type filter purpose ('to narrow results'), enhancing agent understanding of parameter interactions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Search' targeting clear resource ('ALL string properties of ALL nodes' in a 'deployed graph'). Explicitly distinguishes from sibling 'search_graph_nodes' by contrasting scope ('filters by specific property' vs 'searches every text field at once'), leaving no ambiguity about which tool to select.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Perfect for finding knowledge when you don't know which property contains the answer') and identifies the specific sibling alternative by name. Concrete example ('query "quantum" searches name, description...') clarifies invocation pattern.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_graph_data_schema: Get Graph Data Schema (Grade A)
Read-only, Idempotent

Get the runtime schema of a DEPLOYED graph project — shows the actual entity types and relationship types available for data operations.

Returns: Available entity keys (for create_graph_node, list_graph_nodes, etc.) and relationship keys (for create_graph_relationship, etc.).

⭐ USE THIS FIRST before creating nodes/relationships to know what entity_type and rel_type values are valid.
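The "use this first" workflow amounts to validating entity_type values against the deployed schema before any create call. The return shape assumed below ({"entity_types": [...], "relationship_types": [...]}) is hypothetical; the description only says the tool returns entity keys and relationship keys.

```python
# Hedged sketch of the schema-first workflow: fetch the runtime schema once,
# then reject entity types it does not list before attempting a create.

def validate_entity_type(data_schema: dict, entity_type: str) -> bool:
    valid = data_schema["entity_types"]
    if entity_type not in valid:
        raise ValueError(
            f"unknown entity_type {entity_type!r}; valid keys: {valid}")
    return True

# Stand-in for a get_graph_data_schema response (assumed shape).
schema = {"entity_types": ["person", "concept"],
          "relationship_types": ["relates_to"]}

ok = validate_entity_type(schema, "concept")
try:
    validate_entity_type(schema, "idea")
    bad_accepted = True
except ValueError:
    bad_accepted = False
```

Catching an invalid key locally is cheaper than a failed create_graph_node round trip, which is why the description front-loads this step.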

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
environment | No | Environment: staging or production | staging
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/destructive properties. Description adds valuable behavioral context: specifies exactly what gets returned (entity keys and relationship keys), clarifies this is for 'runtime' introspection (not compile-time), and maps return values to specific sibling tool parameters (entity_type and rel_type). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: Sentence 1 defines purpose, Sentence 2 specifies return values, Sentence 3 gives critical usage guidance front-loaded with ⭐ for visibility. Zero redundant words; every sentence earns its place by conveying distinct information not found in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description appropriately explains return content (available keys) and their purpose. Given the complexity of graph schema systems and numerous sibling schema tools (get_graph_schema, get_template_schemas, etc.), description provides sufficient positioning. Could marginally improve by mentioning error behavior (e.g., if project not deployed).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters fully described in schema). Description implies 'DEPLOYED' constraint for project_id and links environment concept to deployment state, but does not add substantial syntax, format, or validation details beyond what the schema already provides. Baseline 3 appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves the 'runtime schema of a DEPLOYED graph project' using specific verb 'Get' + resource 'runtime schema' + scope 'DEPLOYED'. It effectively distinguishes from siblings like get_graph_schema or get_template_schemas by emphasizing 'runtime' and 'DEPLOYED' (vs static/template schemas).

Usage Guidelines: 5/5

Explicitly instructs 'USE THIS FIRST before creating nodes/relationships' — clear when-to-use with sequential guidance. Implicitly identifies alternatives by naming the downstream tools (create_graph_node, create_graph_relationship) that consume the returned keys, establishing clear workflow ordering.

get_graph_node · Get Graph Node · Grade: A
Read-only · Idempotent

Get a specific node by its entity_id from a deployed graph project. Returns all node properties including created_at and updated_at timestamps.

Parameters

Name | Required | Description | Default
entity_id | Yes | The node's entity_id | –
project_id | Yes | Project ID (UUID) | –
entity_type | Yes | Entity key (e.g., 'person', 'concept') | –
environment | No | Environment: staging or production | staging
Behavior: 4/5

Annotations already declare readOnly/idempotent/safe behavior. Description adds valuable context about return payload ('all node properties including created_at and updated_at timestamps') and source scope ('deployed graph project'), supplementing structured data without contradiction.

Conciseness: 5/5

Two tightly constructed sentences with zero waste. Front-loaded with the primary action and target, followed by return value specification. Every clause earns its place.

Completeness: 4/5

Compensates well for missing output schema by describing return values (all properties, timestamps). Adequately scoped for a 4-parameter read operation. Minor gap: doesn't describe error behavior (e.g., node not found) or authentication requirements.

Parameters: 3/5

Input schema has 100% description coverage for all 4 parameters. Description mentions 'entity_id' operationally but doesn't add semantic details (formats, validation rules) beyond what the schema already documents. Baseline 3 appropriate given schema completeness.

Purpose: 4/5

States specific action (Get) and resource (node from deployed graph project) with clear scope identifier (by its entity_id). Distinguishes from list/search siblings through specificity of 'specific node' and 'entity_id', though it doesn't explicitly name alternative tools.

Usage Guidelines: 3/5

Implies usage context by specifying 'by its entity_id', suggesting use when the exact identifier is known. However, lacks explicit guidance on when to prefer search_graph_nodes or list_graph_nodes instead, or what to do if the ID is unknown.

get_graph_project_info · Get Graph Project Info · Grade: A
Read-only · Idempotent

Get detailed graph project information including Kubernetes deployment status, Neo4j database health, pod status, and resource usage. Use this after deployment to verify the graph project is running correctly.

Parameters

Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | –
Behavior: 4/5

The description adds valuable behavioral context beyond what annotations provide. While annotations already indicate this is a safe, read-only, idempotent operation, the description specifies that it returns deployment verification information and should be used post-deployment. This provides practical usage context that annotations alone don't convey.

Conciseness: 5/5

The description is perfectly concise with two sentences that each earn their place. The first sentence clearly states the purpose and enumerates what information is returned. The second sentence provides valuable usage guidance. There's no wasted language or unnecessary elaboration.

Completeness: 4/5

For a read-only tool with comprehensive annotations and a simple single-parameter schema, the description provides good contextual completeness. It explains what information is returned and when to use it. The main gap is the lack of output schema, but the description compensates somewhat by enumerating the types of information returned.

Parameters: 3/5

With 100% schema description coverage, the input schema already fully documents the single required 'project_id' parameter. The description doesn't add any additional parameter semantics beyond what the schema provides, so it meets the baseline expectation without adding extra value.

Purpose: 5/5

The description clearly states the specific action ('Get detailed graph project information') and enumerates the exact resources returned (Kubernetes deployment status, Neo4j database health, pod status, resource usage). It distinguishes this tool from siblings like 'get_project_info' by specifying it's for 'graph project' information with technical deployment details.

Usage Guidelines: 4/5

The description provides clear context for when to use the tool ('after deployment to verify the graph project is running correctly'), which helps differentiate it from other read operations. However, it doesn't explicitly state when NOT to use it or mention specific alternatives among the many sibling tools.

get_graph_schema · Get Graph Schema · Grade: A
Read-only · Idempotent

Get the graph schema definition of a project. Returns the hierarchical schema with nodes (entities) and relationships. Graph schemas define entity hierarchies and typed relationships — a different format than relational flat-table schemas.

Parameters

Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | –
Behavior: 3/5

Annotations provide readOnlyHint=true, destructiveHint=false, etc., covering safety and idempotency. The description adds context about the return format (hierarchical with nodes/relationships) and clarifies it's graph-based, which is useful beyond annotations but doesn't detail rate limits or auth needs.

Conciseness: 5/5

The description is front-loaded with the core purpose, uses two efficient sentences without waste, and each part (e.g., clarifying graph vs. relational) adds necessary context, making it appropriately sized and structured.

Completeness: 4/5

Given the tool's simplicity (1 parameter, no output schema) and rich annotations, the description is mostly complete, covering purpose and format. However, it could slightly improve by mentioning the absence of versioning (vs. 'get_graph_schema_at_version') for full contextual coverage.

Parameters: 3/5

Schema description coverage is 100%, with the single parameter 'project_id' well-documented in the schema. The description doesn't add extra parameter details, so it meets the baseline of 3 for high schema coverage without compensating value.

Purpose: 5/5

The description clearly states the verb 'Get' and resource 'graph schema definition of a project', specifying it returns hierarchical schema with nodes and relationships. It distinguishes from siblings like 'get_graph_data_schema' by emphasizing the graph format versus relational schemas.

Usage Guidelines: 4/5

The description implies usage for retrieving graph schemas, with context from 'different format than relational flat-table schemas' suggesting when to use this over relational schema tools. However, it lacks explicit when-not or alternative guidance compared to siblings like 'get_graph_schema_at_version'.

get_graph_schema_at_version · Get Graph Schema at Version · Grade: A
Read-only · Idempotent

Get the graph schema as it existed at a specific version/commit. Use get_graph_version_history to find commit SHAs. Useful for comparing schemas across versions or auditing changes.
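The history-then-fetch workflow the description implies can be sketched with a small helper. The entry fields (`sha`, `version`) are assumptions about the get_graph_version_history response shape, shown here with stubbed data:

```python
# Hypothetical audit helper: pick the commit SHA of the previous version
# from a version-history-style response (newest first), then pass it as
# the `version` parameter of get_graph_schema_at_version.

def previous_version_sha(history: list[dict]) -> str:
    """Return the SHA of the second-newest entry, for schema comparison."""
    if len(history) < 2:
        raise ValueError("no previous version to compare against")
    return history[1]["sha"]

# Stubbed history, for illustration only:
history = [
    {"sha": "a1b2c3", "version": 3},
    {"sha": "d4e5f6", "version": 2},
]
sha = previous_version_sha(history)  # "d4e5f6"
```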

Parameters

Name | Required | Description | Default
version | Yes | Commit SHA of the version to retrieve | –
project_id | Yes | Project ID (UUID) | –
Behavior: 4/5

Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed world assumption. The description adds valuable context by specifying that it retrieves historical data ('as it existed at a specific version'), which isn't covered by annotations. It doesn't contradict annotations and provides additional behavioral insight about version-specific retrieval.

Conciseness: 5/5

The description is perfectly concise with two sentences that each serve distinct purposes: the first states the core functionality, the second provides usage guidance and context. There's no wasted language, and information is front-loaded with the primary purpose stated immediately.

Completeness: 4/5

Given the tool's moderate complexity (historical schema retrieval), rich annotations covering safety and behavior, and 100% schema coverage, the description is largely complete. It explains the purpose, usage context, and references related tools. The main gap is lack of output schema documentation, but the description compensates reasonably by specifying what's retrieved (graph schema at version).

Parameters: 3/5

Schema description coverage is 100%, with both parameters clearly documented in the schema. The description adds minimal parameter semantics beyond the schema by mentioning commit SHAs in the context of get_graph_version_history, but doesn't provide additional meaning for project_id or version format details. This meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the specific action ('Get the graph schema as it existed at a specific version/commit') and distinguishes it from siblings by specifying it retrieves historical schema versions, unlike get_graph_schema which likely gets the current schema. It explicitly names a related tool (get_graph_version_history) for finding commit SHAs, further differentiating its purpose.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool ('Useful for comparing schemas across versions or auditing changes') and when to use an alternative ('Use get_graph_version_history to find commit SHAs'). It clearly defines the context for retrieving historical schema data versus current schema or version history lookup.

get_graph_statistics · Get Graph Statistics · Grade: A
Read-only · Idempotent

Get statistics about a deployed graph: total node count, total relationship count, counts per entity type, counts per relationship type. Essential for understanding the current state of a knowledge graph before adding more data.

Parameters

Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | –
environment | No | Environment: staging or production | staging
Behavior: 4/5

Annotations cover safety (readOnly, idempotent, non-destructive). The description adds valuable behavioral context: specific statistics returned (counts by entity/relationship type) and that it operates on 'deployed' graphs (aligning with staging/production deployment tools). Does not mention error conditions or pagination, but significantly enriches expected return content.

Conciseness: 5/5

Two sentences with zero waste. First sentence front-loads specific functionality (the four statistics types). Second sentence provides usage rationale. No redundant text or tautology.

Completeness: 4/5

No output schema exists, but the description enumerates the specific statistics content (node/relationship counts), fulfilling the essential return value documentation. Combined with rich annotations (readOnly, idempotent) and complete input schema, this is sufficient for agent invocation decisions, though structural output details remain unspecified.

Parameters: 3/5

Schema coverage is 100% with clear descriptions for both parameters. The description mentions 'deployed graph' which contextually supports the 'environment' parameter but does not add parameter-specific syntax, validation rules, or format details beyond the schema. Baseline 3 appropriate given schema completeness.

Purpose: 5/5

The description uses specific verbs ('Get') and clearly identifies the resource ('statistics about a deployed graph'), listing exact metrics returned (node count, relationship count, per-type counts). It distinguishes from siblings by emphasizing this is for 'understanding current state' vs mutation operations like bulk_create or delete.

Usage Guidelines: 4/5

Provides clear usage context ('Essential for understanding the current state... before adding more data'), implicitly contrasting with creation/update siblings. However, it lacks explicit 'when not to use' or named alternative tools (e.g., doesn't mention when to use get_graph_data_schema instead).

get_graph_template_schemas · Get Graph Template Schemas · Grade: A
Read-only · Idempotent

Get pre-built graph template schemas for common use cases. ⭐ USE THIS FIRST when creating a new graph project! Templates show the CORRECT graph schema format with: proper node definitions (description, flat_labels, schema with flat field definitions), relationship configurations (from, to, cardinality, data_schema), and hierarchical entity nesting. Available templates: Social Network (users, posts, follows), Knowledge Graph (topics, articles, authors), Product Catalog (products, categories, suppliers). You can use these templates directly with create_graph_project or modify them for your needs. TIP: Study these templates to understand the correct graph schema format before creating custom schemas.
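The schema shape the description names (node definitions with description, flat_labels, and a flat field schema; relationships with from, to, cardinality, and data_schema) can be illustrated as a dict. All field values below are invented examples for illustration, not a real template returned by the tool:

```python
# Illustrative-only sketch of the template shape, assembled from the field
# names the description mentions. Values are invented, not server output.

template = {
    "nodes": {
        "user": {
            "description": "A person with an account",
            "flat_labels": ["User"],
            "schema": {"name": "string", "email": "string"},  # flat fields, no nesting
        },
        "post": {
            "description": "Content created by a user",
            "flat_labels": ["Post"],
            "schema": {"title": "string", "body": "string"},
        },
    },
    "relationships": {
        "AUTHORED": {
            "from": "user",
            "to": "post",
            "cardinality": "one_to_many",
            "data_schema": {"created_at": "string"},
        },
    },
}
```

A dict in this shape could then be passed, possibly after modification, to create_graph_project as the description suggests.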

Parameters

No parameters

Behavior: 4/5

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable context about what the tool returns (templates with specific components like node definitions, relationship configurations) and practical advice about studying templates. It doesn't contradict annotations and provides useful behavioral information beyond them.

Conciseness: 5/5

Well-structured with front-loaded key information ('Get pre-built graph template schemas for common use cases'), followed by specific usage guidance, template examples, and practical tips. Every sentence adds value with zero wasted words, making it efficient and easy to parse.

Completeness: 5/5

Given the tool has 0 parameters, rich annotations covering safety and behavior, and no output schema, the description provides excellent completeness. It explains what templates contain, gives concrete examples, provides usage sequencing advice, and connects to related operations (create_graph_project), making it fully self-contained for the agent's needs.

Parameters: 4/5

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since there are none, and instead focuses on what the tool returns (available templates with examples like Social Network, Knowledge Graph, Product Catalog), which adds semantic value about the output.

Purpose: 5/5

The description clearly states the verb 'Get' and resource 'pre-built graph template schemas', specifying they are for 'common use cases'. It distinguishes from siblings by focusing on templates rather than actual graph operations like creation or deletion, and explicitly mentions it's for understanding schema format before custom work.

Usage Guidelines: 5/5

Explicitly states 'USE THIS FIRST when creating a new graph project!' and provides clear alternatives: 'use these templates directly with create_graph_project or modify them for your needs'. It gives specific context about studying templates before creating custom schemas, making it very clear when to use this tool.

get_graph_version_history · Get Graph Version History · Grade: A
Read-only · Idempotent

Get the deployment and version history for a graph project. Shows all schema changes with commit SHAs, timestamps, version numbers, and messages. Use this to find a specific version for rollback operations.

Parameters

Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | –
Behavior: 4/5

Annotations already indicate read-only, non-destructive, and idempotent behavior, but the description adds valuable context by specifying the type of data returned (schema changes with commit SHAs, timestamps, version numbers, messages) and its purpose for rollback, enhancing understanding beyond the annotations.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by specific details and usage guidance in two efficient sentences, with no wasted words or redundancy, making it highly concise and well-structured.

Completeness: 4/5

Given the tool's moderate complexity (single parameter, no output schema), the description provides sufficient context by detailing the returned data and usage scenario. However, it could slightly improve by mentioning any limitations (e.g., pagination or date ranges), but it's largely complete for a read-only history tool.

Parameters: 3/5

With 100% schema description coverage, the input schema fully documents the single required parameter (project_id as UUID). The description does not add further parameter details, so it meets the baseline of 3 without compensating for any gaps.

Purpose: 5/5

The description clearly states the specific action ('Get the deployment and version history') and resource ('for a graph project'), distinguishing it from siblings like 'get_graph_schema_at_version' or 'rollback_graph_project' by focusing on comprehensive historical data rather than specific versions or rollback actions.

Usage Guidelines: 5/5

It explicitly states when to use this tool ('to find a specific version for rollback operations'), providing clear context and linking to an alternative ('rollback_graph_project'), which helps differentiate it from other history-related tools like 'get_version_history'.

get_job_status · Get Job Status · Grade: A
Read-only · Idempotent

Check the status of a deployment job. STATUS VALUES: pending (job queued), running (deployment in progress), completed (success), failed (deployment failed). TIMELINE: Typical deployment takes 2-5 minutes. If status is 'running' for >10 minutes, check get_project_info for detailed pod status. If status is 'failed', use get_project_info to see deployment errors and check schema format (must be FLAT, no 'fields' nesting).
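The polling discipline the description lays out (status values, the 2-5 minute window, and the fallbacks) can be sketched as a decision function. `next_action` is a hypothetical helper and the returned strings are illustrative; a real agent would replace them with actual get_job_status and get_project_info calls:

```python
# Sketch of the polling logic implied by the description. Maps a job
# status plus elapsed time to the suggested follow-up step.

def next_action(status: str, elapsed_minutes: float) -> str:
    """Decide the next step after a get_job_status check."""
    if status == "completed":
        return "done"
    if status == "failed":
        return "call get_project_info for errors; re-check schema is FLAT"
    if status == "running" and elapsed_minutes > 10:
        return "call get_project_info for pod status"
    return "keep polling"  # pending, or running inside the 2-5 min window

assert next_action("running", 12).startswith("call get_project_info")
```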

Parameters

Name | Required | Description | Default
job_id | Yes | Job ID returned from deployment operations | –
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=false, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations, such as typical deployment timeline (2-5 minutes) and specific actions for edge cases (e.g., checking 'get_project_info' for prolonged 'running' status).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with key information (status values and timeline). Each sentence adds value, such as explaining status values and guiding next steps, though it could be slightly more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation) and rich annotations (covering safety and idempotency), the description is complete. It explains status values, timeline, and fallback actions, providing all necessary context without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'job_id' well-documented in the schema. The description does not add further parameter details beyond what the schema provides, but it implies the parameter's role in checking status, aligning with the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Check') and resource ('status of a deployment job'), making the purpose specific. It distinguishes from siblings like 'get_project_info' by focusing solely on job status rather than project details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool (e.g., to check job status) and when to use alternatives (e.g., use 'get_project_info' if status is 'running' for >10 minutes or 'failed' for detailed errors). It provides clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_node_relationships - Get Node Relationships (Grade: A)
Read-only | Idempotent
Inspect

Get all relationships connected to a specific node. Supports direction filtering (incoming, outgoing, both) and relationship type filtering.
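As a sketch of how the filters combine, here are two hypothetical argument payloads; the project ID, entity names, and relationship type are invented for illustration and only the parameter names come from the documented schema.

```python
# Shared required arguments for get_node_relationships (values made up).
base_args = {
    "project_id": "123e4567-e89b-12d3-a456-426614174000",
    "entity_type": "concept",
    "entity_id": "quantum-mechanics-001",
}

# Default behavior: all relationships touching the node
# (direction falls back to "both", environment to "staging").
all_relationships = dict(base_args)

# Narrowed query: only outgoing RELATES_TO edges, read from production.
outgoing_related = {
    **base_args,
    "direction": "outgoing",
    "rel_type_filter": "RELATES_TO",  # UPPER_SNAKE_CASE, per the parameter docs
    "environment": "production",
}
```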

Parameters (JSON Schema)
Name | Required | Description | Default
direction | No | Filter: incoming, outgoing, or both | both
entity_id | Yes | The node's entity_id | -
project_id | Yes | Project ID (UUID) | -
entity_type | Yes | Entity key of the node | -
environment | No | Environment: staging or production | staging
rel_type_filter | No | Filter by relationship type (UPPER_SNAKE_CASE) | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly=true, destructive=false, and idempotent=true, establishing the safe read-only nature. The description adds specific behavioral details about filtering options (incoming/outgoing/both) and relationship type filtering. However, it omits error behavior (e.g., non-existent node), pagination limits, or result structure details that would help the agent handle responses.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. The first sentence establishes the core purpose immediately, while the second adds filtering details. Every word earns its place without repetition of the tool name or title.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and presence of safety annotations, the description covers the essential input requirements. However, lacking an output schema, it should ideally describe return value structure (e.g., list of relationships) or error cases (e.g., 404 for missing node), which are absent for a graph query tool of moderate complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description loosely maps filtering concepts to parameters but adds minimal semantic value beyond the schema, which already documents enum values for direction and the UPPER_SNAKE_CASE format for relationship types. No additional parameter constraints or interdependencies are explained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action (Get) and resource (relationships connected to a specific node), including scope details like direction and type filtering. However, it lacks explicit differentiation from the sibling tool 'traverse_graph', which could cause confusion about when to use direct adjacency lookup versus traversal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by documenting filtering capabilities (direction and relationship type), suggesting when to apply specific filter values. However, it lacks explicit guidance on prerequisites (e.g., node must exist), when to prefer this over 'traverse_graph', or when not to use the tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_project - Get Project Details (Grade: B)
Read-only | Idempotent
Inspect

Get detailed information about a specific project

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide clear safety and idempotency hints (readOnlyHint=true, destructiveHint=false, idempotentHint=true). The description adds minimal behavioral context beyond this, as it doesn't disclose rate limits, authentication needs, or specific return format details. However, it doesn't contradict the annotations, so it meets the baseline for tools with good annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with good annotations and a single, well-documented parameter, the description is minimally adequate. However, without an output schema, it doesn't explain what 'detailed information' includes, and it fails to differentiate from sibling tools, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'project_id' parameter fully documented as a UUID. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. Given the high schema coverage, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed information about a specific project'), making the purpose unambiguous. However, it doesn't distinguish this tool from sibling tools like 'get_project_info' or 'list_projects', which appear to serve related but potentially different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_project_info' or 'list_projects'. It lacks context about prerequisites, appropriate scenarios, or exclusions, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_project_info - Get Project Info (Grade: A)
Read-only | Idempotent
Inspect

Get detailed project info including deployment status and resource usage. DEPLOYMENT STATUS: Running (healthy), Pending (starting), CrashLoopBackOff (init container failed - usually schema format error), ImagePullBackOff (image build failed). TROUBLESHOOTING: If status is CrashLoopBackOff, the schema is likely in wrong format (nested 'fields' key or missing 'type' properties). Use get_schema to review current schema. If replicas show 0/2, the init container (migration runner) is failing. This is almost always a schema format issue.
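The triage flow this description walks through can be sketched as a lookup; the status strings mirror the documented Kubernetes-style values, while the function itself and its return messages are illustrative, not part of the tool.

```python
# Sketch of the documented troubleshooting guidance (messages invented).
def suggest_action(status, ready_replicas=None, desired_replicas=None):
    if status == "Running":
        return "healthy; no action needed"
    if status == "Pending":
        return "starting; wait and re-check"
    if status == "CrashLoopBackOff":
        return "likely schema format error; call get_schema and verify FLAT format"
    if status == "ImagePullBackOff":
        return "image build failed; retry the deployment"
    # e.g. replicas showing 0/2: the init container (migration runner)
    # is failing, which is almost always a schema format issue.
    if ready_replicas == 0 and desired_replicas:
        return "init container (migration runner) failing; check schema format"
    return "unknown status; inspect raw project info"
```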

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, non-destructive, and idempotent behavior, which the description doesn't repeat. However, it adds valuable context beyond annotations: detailed status interpretations (e.g., 'CrashLoopBackOff' means init container failed), troubleshooting steps, and common failure causes. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by status details and troubleshooting. While informative, the troubleshooting section is lengthy; some details (such as the nested 'fields' key example) could be trimmed for conciseness, but overall it remains well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (returns detailed deployment/resource info) and lack of output schema, the description thoroughly explains what information is returned (status types, troubleshooting insights). With annotations covering safety and idempotency, and schema covering parameters, the description fills all necessary gaps for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with 'project_id' clearly documented as a UUID. The description adds no parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get detailed project info including deployment status and resource usage.' It specifies the verb ('Get'), resource ('project info'), and key details returned, distinguishing it from siblings like 'get_project' (likely basic info) or 'get_project_usage' (likely just usage metrics).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool for troubleshooting: 'If status is CrashLoopBackOff, the schema is likely in wrong format... Use get_schema to review current schema.' It names an alternative tool ('get_schema') and specifies scenarios (e.g., replicas showing 0/2) where this info is critical.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_project_usage - Get Project Usage (Grade: B)
Read-only | Idempotent
Inspect

Get resource usage metrics (CPU, memory) for a project

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description doesn't contradict. It adds minimal context by specifying the metrics (CPU, memory), but doesn't detail rate limits, authentication needs, or response format, leaving gaps despite good annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It front-loads the key action and resource, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with good annotations and no output schema, the description is adequate but minimal. It specifies the metrics (CPU, memory), which adds value, but doesn't cover potential complexities like time ranges, aggregation, or error cases, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter project_id fully documented in the schema. The description doesn't add any parameter details beyond what's in the schema, such as format examples or constraints, so it meets the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('resource usage metrics for a project'), specifying the metrics as CPU and memory. It distinguishes itself from siblings like get_project or get_project_info by focusing on usage metrics, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like get_project_info or get_graph_statistics. The description only states what it does, without context about prerequisites, timing, or comparisons to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_schema - Get Project Schema (Grade: A)
Read-only | Idempotent
Inspect

Get the JSON schema definition of a project in FLAT format. Returns the schema structure where each table name maps directly to field definitions. This is the same format required for create_project and update_schema. USE CASES: Review current schema before making updates, copy schema as template for new projects, verify schema structure after deployment, learn the correct schema format by example. The returned schema will be in FLAT format: {table_name: {field_name: {type, properties}}}
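To make the FLAT shape concrete, here is a made-up schema instantiating the documented {table_name: {field_name: {type, properties}}} structure, plus a small heuristic check. The table names, field names, and extra properties are invented for illustration; only the FLAT convention itself comes from the description.

```python
# Illustrative project schema in FLAT format: table names map directly
# to field definitions, and every field carries a 'type'.
flat_schema = {
    "products": {
        "name": {"type": "string"},
        "price": {"type": "number"},
    },
    "orders": {
        "product_id": {"type": "uuid"},
        "quantity": {"type": "integer"},
    },
}

def looks_flat(schema):
    """Heuristic check for the FLAT convention: no nested 'fields' key,
    and every field definition is a dict with a 'type' property."""
    for fields in schema.values():
        if "fields" in fields:
            return False
        if not all(isinstance(f, dict) and "type" in f for f in fields.values()):
            return False
    return True
```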

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description does not contradict. The description adds valuable context by specifying the exact return format ('FLAT format: {table_name: {field_name: {type, properties}}}') and its use in other operations, enhancing transparency beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with a clear purpose statement followed by usage cases and format details. Every sentence adds value without redundancy, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input, rich annotations covering safety and behavior, and no output schema, the description is complete. It explains the return format, usage scenarios, and compatibility with other tools, providing all necessary context for an agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the single required 'project_id' parameter. The description does not add any parameter-specific details beyond what the schema provides, so it meets the baseline score of 3 for adequate but no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the JSON schema definition'), resource ('of a project'), and format ('in FLAT format'). It distinguishes from sibling tools like 'get_graph_schema' by specifying the project context and flat format, making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines with a 'USE CASES' section listing scenarios like reviewing before updates, copying as a template, verifying after deployment, and learning by example. It also mentions compatibility with 'create_project' and 'update_schema', guiding when to use this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_schema_at_version - Get Schema at Version (Grade: B)
Read-only | Idempotent
Inspect

Get the schema as it was at a specific version/commit

Parameters (JSON Schema)
Name | Required | Description | Default
version | Yes | Commit SHA of the version | -
project_id | Yes | Project ID (UUID) | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description adds minimal value. It implies versioned retrieval but does not disclose additional context like error handling, rate limits, or authentication needs. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It front-loads the core purpose ('Get the schema') and adds necessary constraint ('at a specific version/commit'), making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations provide safety and idempotency info, and the schema fully documents parameters, the description is adequate for a read-only tool. However, without an output schema, it lacks details on return values or format, and it does not address sibling tool differentiation, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter documentation in the schema. The description adds no extra meaning about parameters beyond implying version specificity, which is already covered. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('schema'), and specifies the version constraint ('as it was at a specific version/commit'). However, it does not explicitly differentiate from sibling tools like 'get_schema' or 'get_graph_schema_at_version', which reduces clarity in a crowded toolset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'get_schema' (current version) or 'get_graph_schema_at_version' (graph-specific). It lacks context on prerequisites, typical use cases, or exclusions, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_subscription_status - Get Subscription Status (Grade: B)
Read-only | Idempotent
Inspect

Get your subscription tier, limits, and usage

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description adds value by specifying what information is retrieved (tier, limits, usage), but does not disclose additional behavioral aspects like rate limits, authentication needs, or response format, which would be helpful given no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It is appropriately sized for a simple tool with no parameters, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is minimally adequate. It covers the purpose but lacks details on usage context and behavioral nuances like error handling or return structure, which could improve completeness for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description implies no inputs are needed by stating 'your subscription', aligning with the empty schema, and adds context about the data returned, though it does not detail semantics beyond what the schema absence suggests.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with 'Get your subscription tier, limits, and usage', specifying the verb 'Get' and the resource 'subscription' details. It distinguishes from siblings like 'get_project_usage' by focusing on subscription rather than project usage, though not explicitly contrasting them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention context, prerequisites, or exclusions, leaving it unclear if this is for general status checks or specific scenarios among the many sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_template_schemas - Get Template Schemas (Grade: A)
Read-only | Idempotent
Inspect

Get pre-built template schemas for common use cases. ⭐ USE THIS FIRST when creating a new project! Templates show the CORRECT schema format with: proper FLAT structure (no 'fields' nesting), every field has a 'type' property, foreign key relationships configured correctly, best practices for field naming and types. Available templates: E-commerce (products, orders, customers), Team collaboration (projects, tasks, users), General purpose templates. You can use these templates directly with create_project or modify them for your needs. TIP: Study these templates to understand the correct schema format before creating custom schemas.
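The nesting mistake the description warns against can be shown side by side with the correct FLAT form. All table and field names here are invented, and the exact shape of returned templates is an assumption based on the description; only the "no 'fields' nesting" rule comes from the source.

```python
# Correct FLAT form: fields map directly to their definitions.
correct_template = {
    "tasks": {
        "title": {"type": "string"},
        "done": {"type": "boolean"},
    },
}

# The common mistake: columns wrapped in a nested 'fields' key.
wrong_shape = {
    "tasks": {
        "fields": {                        # rejected format
            "title": {"type": "string"},
        },
    },
}

# A template can be modified before passing it to create_project,
# e.g. adding a field while keeping the FLAT structure intact:
customized = {
    "tasks": {**correct_template["tasks"], "due_date": {"type": "string"}},
}
```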

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a safe, read-only, idempotent operation (readOnlyHint=true, destructiveHint=false, idempotentHint=true). The description adds valuable context beyond annotations: it explains the tool's role in onboarding ('Study these templates to understand the correct schema format'), lists available template categories, and provides practical tips. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidance, template details, and a tip. Every sentence adds value: the first states the purpose, the second emphasizes priority usage, the third explains template benefits, the fourth lists categories, and the fifth provides actionable advice. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, rich annotations (covering safety and idempotency), and no output schema, the description is highly complete. It explains what the tool returns (template schemas with specific characteristics), when to use it, and how to apply the results. The only minor gap is lack of explicit output format details, but annotations and context mitigate this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on the tool's purpose and usage without redundant parameter details, earning a high score for adding semantic value where parameters are absent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get pre-built template schemas') and resources ('for common use cases'), distinguishing it from siblings like 'get_schema' or 'get_graph_schema' by focusing on templates rather than actual project schemas. It explicitly mentions what templates provide (correct schema format, flat structure, field types, foreign keys, best practices).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'USE THIS FIRST when creating a new project!' and suggests alternatives ('use these templates directly with create_project or modify them for your needs'). It also advises studying templates before creating custom schemas, clearly differentiating from other schema-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_user_info - Get User Info (Grade: B)
Read-only | Idempotent
Inspect

Get information about the authenticated user

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world). The description adds minimal value by implying authentication is required ('authenticated user'), but doesn't elaborate on rate limits, error conditions, or response format. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point with no wasted words. It's appropriately sized for a simple tool with no parameters, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with good annotations and no parameters, the description is minimally adequate. However, without an output schema, it fails to describe what information is returned (e.g., user details, permissions, metadata), leaving a gap in understanding the tool's full behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description doesn't need to explain parameters, and it correctly implies no inputs are required beyond authentication context, adding appropriate semantic clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get information') and target ('about the authenticated user'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'get_project_info' or 'get_graph_project_info' that follow similar patterns, missing explicit distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools for getting various types of information (project, graph, schema, etc.), there's no indication of context, prerequisites, or exclusions for this specific user-focused tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_version_history (Get Version History): Grade A
Read-only, Idempotent

Get the deployment and version history (git commits) for a project. Shows all schema changes with commit SHA, timestamp, and message. USE CASES: Review what changed between deployments, find the last working version before issues started, get commit SHA for rollback_project.
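The "find the last working version" use case above can be sketched in a few lines. A minimal sketch, assuming each history entry is a dict with sha, timestamp, and message keys and that entries come back newest-first; the actual response shape is not documented on this page and may differ:

```python
# Hypothetical history entries; real field names may differ.
history = [
    {"sha": "c3d4e5f", "timestamp": "2024-06-03T10:00:00Z", "message": "add orders table"},
    {"sha": "b2c3d4e", "timestamp": "2024-06-02T10:00:00Z", "message": "rename price column"},
    {"sha": "a1b2c3d", "timestamp": "2024-06-01T10:00:00Z", "message": "initial schema"},
]

def last_version_before(history, bad_sha):
    """Return the entry just before bad_sha; history is newest-first,
    so the next index is the older commit."""
    index = [entry["sha"] for entry in history].index(bad_sha)
    return history[index + 1]

# The sha of this entry is what rollback_project expects as `version`.
good = last_version_before(history, "c3d4e5f")
```

The returned sha then feeds straight into rollback_project's version parameter.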

Parameters (JSON Schema)
Name | Required | Description | Default
project_id | Yes | Project ID (UUID) | -

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, etc., covering safety. The description adds context about what data is returned (commit SHA, timestamp, message) and implies it's for historical review, which is useful beyond annotations. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

It is front-loaded with the core purpose, followed by specific use cases in a bullet-like format. Every sentence adds value with no wasted words, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, 100% schema coverage, annotations provided), the description is complete. It explains purpose, usage, and output details adequately, and no output schema is needed here as the description covers return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_id' well-documented in the schema. The description does not add further parameter details, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'deployment and version history (git commits) for a project,' specifying it shows schema changes with commit SHA, timestamp, and message. It distinguishes from siblings like get_project (general info) or get_schema_at_version (specific version).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly lists USE CASES: 'Review what changed between deployments, find the last working version before issues started, get commit SHA for rollback_project,' providing clear when-to-use guidance and linking to the rollback_project sibling tool as an alternative/next step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_graph_nodes (List Graph Nodes): Grade A
Read-only, Idempotent

List nodes of a specific entity type from a deployed graph project. Supports pagination with limit/offset. Returns nodes ordered by creation date (newest first).
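The limit/offset pagination can be driven with a small loop. A sketch under assumptions: `call_tool` is a placeholder for whatever MCP client invocation is in use, and the tool is assumed to return a bare list of nodes (the real response may be wrapped in an envelope):

```python
def fetch_all_nodes(call_tool, project_id, entity_type, page_size=100):
    """Page through list_graph_nodes until a short page signals the end."""
    nodes, offset = [], 0
    while True:
        page = call_tool("list_graph_nodes", {
            "project_id": project_id,
            "entity_type": entity_type,
            "limit": page_size,   # schema allows up to 1000
            "offset": offset,
        })
        nodes.extend(page)
        if len(page) < page_size:  # short (or empty) page: done
            break
        offset += page_size
    return nodes
```

Because results are ordered newest-first, nodes created mid-pagination can shift pages; for a stable snapshot, run the loop quickly or filter duplicates by entity_id.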

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results (max: 1000) | 100
offset | No | Pagination offset | 0
project_id | Yes | Project ID (UUID) | -
entity_type | Yes | Entity key (e.g., 'person', 'concept') | -
environment | No | Environment: staging or production | staging

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations: specifies 'creation date (newest first)' ordering and confirms pagination capabilities. Annotations already establish read-only, idempotent, safe operation. Does not disclose error handling (e.g., invalid entity_type) or rate limits, but provides meaningful behavioral additions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose (sentence 1), capabilities (sentence 2), return behavior (sentence 3). Front-loaded with the core action. Every element earns its place; no tautology or redundancy with title/name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Reasonably complete for a listing tool with rich annotations, but lacks return value description (no output schema exists to compensate). Should specify what node fields/structure are returned since output schema is absent. 5 parameters and required fields are well-covered by schema and description working together.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with individual param descriptions. Description adds conceptual value by grouping 'limit/offset' as pagination and contextualizing 'entity_type' as 'specific entity type' and 'project_id' as 'deployed graph project'. This semantic grouping aids agent comprehension beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'List' with resource 'nodes' and scope 'specific entity type from a deployed graph project'. Distinguishes from 'get_graph_node' (single node) by implying bulk retrieval and from 'search_graph_nodes' by lacking filter/search semantics. However, it does not explicitly state when an agent should choose listing over its 'search_graph_nodes' or 'fulltext_search_graph' siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through pagination support ('Supports pagination with limit/offset') and return ordering. However, lacks explicit guidance on when to select this over 'search_graph_nodes' for filtered queries or 'get_graph_node' for single-node retrieval, and does not mention prerequisites like project deployment status.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_projects (List Projects): Grade B
Read-only, Idempotent

List all your RationalBloks projects with their status and URLs

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, and idempotent, so the description doesn't need to repeat safety aspects. It adds context by specifying the output includes 'status and URLs', which hints at the return format. However, it doesn't disclose behavioral traits like pagination, rate limits, or authentication needs, leaving some gaps despite the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('List all your RationalBloks projects') and adds useful detail ('with their status and URLs'). There's no wasted text, and it's structured to immediately convey the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity is low (0 parameters, annotations cover safety), the description is adequate but has gaps. It doesn't explain the return format in detail (e.g., structure of the list), and with no output schema, more context on output behavior would help. It's complete enough for a simple list tool but could be improved with additional behavioral details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description doesn't need to explain parameters, and it appropriately doesn't mention any. It adds value by implying the tool returns a list with specific fields (status and URLs), which compensates for the lack of an output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('RationalBloks projects'), specifying what information is included ('status and URLs'). It distinguishes this from other list operations like 'list_graph_nodes' by focusing on projects. However, it doesn't explicitly differentiate from 'get_project' or 'get_project_info', which might retrieve single projects, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools like 'get_project', 'get_project_info', and 'search_graph_nodes', there's no indication that this is for retrieving all projects versus filtered or single-project queries. It lacks explicit when/when-not statements or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rename_project (Rename Project): Grade A
Idempotent

Rename a project (changes display name, not project_code)

Parameters (JSON Schema)
Name | Required | Description | Default
name | Yes | New display name for the project | -
project_id | Yes | Project ID (UUID) | -

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a non-destructive, idempotent mutation (readOnlyHint: false, destructiveHint: false, idempotentHint: true). The description adds the specific behavioral detail that only the display name changes (not project_code), which is valuable context beyond annotations. However, it doesn't mention permissions, rate limits, or response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence with zero waste. Every word earns its place: 'Rename a project' establishes purpose, and '(changes display name, not project_code)' adds crucial differentiation. No fluff or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with good annotations (safety profile covered) but no output schema, the description is minimally adequate. It clarifies the scope of change (display name only), which addresses a key ambiguity. However, it lacks information about return values, error conditions, or side effects that would be helpful given the mutation nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add any parameter details beyond what the schema provides (e.g., format constraints for 'name' or validation rules). Baseline 3 is appropriate when the schema carries the full parameter documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Rename') and resource ('project'), specifies what changes ('display name'), and distinguishes from siblings by noting it doesn't change 'project_code'. This differentiates it from other project-related tools like update_schema or create_project.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives is provided. The description doesn't mention prerequisites, when renaming is appropriate, or how it differs from other update operations like update_graph_node or update_schema. Usage context is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rollback_graph_project (Rollback Graph Project): Grade A
Destructive

Rollback a graph project to a previous version. ⚠️ WARNING: This reverts schema AND code to the specified commit. Neo4j data is NOT rolled back. Use get_graph_version_history to find the commit SHA of the version you want to rollback to. After rollback, the graph API will be redeployed with the old schema.

Parameters (JSON Schema)
Name | Required | Description | Default
version | Yes | Commit SHA to rollback to | -
project_id | Yes | Project ID (UUID) | -
environment | No | Environment: staging or production | staging

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint=true and readOnlyHint=false, but the description adds crucial context: it specifies what gets reverted (schema AND code), what doesn't (Neo4j data), and the deployment consequence (API redeployed with old schema). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first states purpose, second gives critical warnings and exclusions, third provides prerequisite and consequence. Front-loaded with the core action and immediate warning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no output schema, the description provides essential context: what changes, what doesn't, prerequisites, and consequences. Could mention error conditions or rollback limitations, but covers the critical aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add parameter-specific semantics beyond what's in the schema, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('rollback'), target resource ('graph project'), and scope ('to a previous version'). It distinguishes from siblings like 'rollback_project' by specifying it's for graph projects, not general projects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Use get_graph_version_history to find the commit SHA') and when not to use ('Neo4j data is NOT rolled back'). It names the specific alternative tool for finding version information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rollback_project (Rollback Project): Grade A
Destructive

Rollback a project to a previous version. ⚠️ WARNING: This reverts schema AND code to the specified commit. Database data is NOT rolled back. Use get_version_history to find the commit SHA of the version you want to rollback to. After rollback, use get_job_status to monitor the redeployment. Rollback is useful when a schema change breaks deployment.
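The workflow in the description (find a SHA via get_version_history, roll back, monitor via get_job_status) can be sketched as follows. `call_tool` is a stand-in for an MCP client; the rollback_project payload fields match this tool's documented parameters, but the get_job_status argument name is a guess and should be checked against that tool's schema:

```python
def rollback_to(call_tool, project_id, version, environment="staging"):
    """Roll a project back to `version` (a commit SHA found via
    get_version_history), then start monitoring the redeploy."""
    if environment not in ("staging", "production"):
        raise ValueError("environment must be 'staging' or 'production'")
    result = call_tool("rollback_project", {
        "project_id": project_id,
        "version": version,
        "environment": environment,
    })
    # Redeployment is asynchronous; poll get_job_status afterwards.
    # (This argument name is an assumption, not documented here.)
    status = call_tool("get_job_status", {"project_id": project_id})
    return result, status
```

Defaulting to staging mirrors the schema default and keeps accidental production rollbacks behind an explicit opt-in.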

Parameters (JSON Schema)
Name | Required | Description | Default
version | Yes | Commit SHA or version to rollback to | -
project_id | Yes | Project ID (UUID) | -
environment | No | Environment: staging or production | staging

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: the warning clarifies what gets reverted (schema and code) and what doesn't (database data), mentions the need to monitor redeployment via get_job_status, and explains the typical use case. Annotations provide safety profile (destructive, not idempotent), but description adds practical implementation details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with the core purpose and critical warning, followed by prerequisite and follow-up steps, then use case context. Every sentence serves a distinct purpose: warning, prerequisites, monitoring, and rationale. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive, non-idempotent operation with no output schema, the description provides excellent context: what changes, what doesn't, prerequisites, monitoring steps, and use case. It compensates for the lack of output schema by explaining the monitoring process. Given the complexity and risk level, this is comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by explaining how to obtain the 'version' parameter (via get_version_history) and implies the rollback target environment context, though it doesn't explicitly discuss the optional 'environment' parameter's default behavior beyond what the schema states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('rollback'), target resource ('project'), and scope ('to a previous version'). It distinguishes from siblings by specifying it reverts both schema AND code (unlike rollback_graph_project which likely handles only graph projects). The warning about database data not being rolled back further clarifies the specific behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: use get_version_history to find the commit SHA, use get_job_status to monitor redeployment, and states when rollback is useful ('when a schema change breaks deployment'). It also distinguishes from rollback_graph_project by specifying this is for general projects, not just graph projects.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_graph_nodes (Search Graph Nodes): Grade A
Read-only, Idempotent

Search for nodes by property values in a deployed graph project.

Supports exact match and contains search (prefix value with ~ for contains).

Examples:
Exact: filters: {"name": "Alan Turing"}
Contains: filters: {"name": "~turing"} (case-insensitive)
Combined: entity_type: "person", filters: {"field": "~physics"}

Without entity_type, searches ALL node types.
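The ~ convention is easy to get wrong when composing filters programmatically; a tiny helper makes the intent explicit. Purely illustrative, and the project_id value is a placeholder:

```python
def contains(value):
    """Mark a filter value for case-insensitive contains matching
    by adding the ~ prefix search_graph_nodes expects."""
    return "~" + value

# Exact match on one property, contains match on another.
search_args = {
    "project_id": "project-uuid",       # placeholder UUID
    "entity_type": "person",            # omit to search ALL node types
    "filters": {
        "name": "Alan Turing",          # exact match
        "field": contains("physics"),   # becomes "~physics"
    },
}
```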

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results (max: 1000) | 100
offset | No | Pagination offset | 0
filters | Yes | Property filters. Prefix value with ~ for contains search. | -
project_id | Yes | Project ID (UUID) | -
entity_type | No | Entity key to filter by (optional; omit to search all types) | -
environment | No | Environment: staging or production | staging

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations establish readOnly/idempotent safety, the description adds valuable behavioral context: case-insensitive contains search, the ~ prefix operator syntax, and implicit AND logic between filters. Would benefit from noting result ordering or max result limits behavior beyond the schema default.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual separation between description and examples. Front-loaded core purpose, followed by operator explanation, then tiered examples (simple → complex). Minor redundancy: 'Search for nodes' restates title, but operator explanation is essential. Examples earn their space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive input coverage: search syntax, filtering operators, entity_type scoping, and cross-parameter interactions (combined filters) are all documented. Only gap is output description — but with 6 parameters and complex nested filters fully explained, and readOnly annotations present, this is acceptable without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds significant value by explaining the ~ prefix operator for contains search, which the schema only mentions abstractly ('Prefix value with ~'). The examples provide concrete syntax for the filters object structure that raw schema doesn't convey.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Search for nodes by property values' clearly states the verb (search), resource (nodes), and method (property values). The deployed graph project scoping distinguishes it from list_graph_nodes which likely lists without filtering. The ~prefix syntax explanation is precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Strong contextual guidance through concrete examples showing exact vs contains search patterns and combined filtering. However, lacks explicit comparison to sibling fulltext_search_graph — when should I use property search vs fulltext? The 'Without entity_type, searches ALL' guidance is helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

traverse_graph (Traverse Graph): Grade A
Read-only, Idempotent

Walk the graph from a starting node, discovering connected knowledge.

Returns all nodes reachable within max_depth hops, with their distance from the start. Essential for exploring knowledge graphs — find related concepts, trace connections, discover clusters.

Example: Start from "Alan Turing", traverse outgoing relationships up to 3 hops deep:
start_entity_type: "person"
start_entity_id: "alan-turing-001"
max_depth: 3
direction: "outgoing"

Supports filtering by relationship types and direction.
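Since results carry a distance from the start node, a common post-processing step is to bucket them by hop count. A sketch assuming each returned node is a dict with entity_id and distance keys; the actual field names are not documented on this page:

```python
from collections import defaultdict

def by_distance(nodes):
    """Group traversal results into {distance: [entity_id, ...]}."""
    buckets = defaultdict(list)
    for node in nodes:
        buckets[node["distance"]].append(node["entity_id"])
    return dict(buckets)

# Hypothetical traverse_graph output for the Alan Turing example.
reachable = [
    {"entity_id": "turing-machine-001", "distance": 1},
    {"entity_id": "computability-001", "distance": 1},
    {"entity_id": "halting-problem-001", "distance": 2},
]
```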

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results (max: 1000) | 100
direction | No | Direction: outgoing, incoming, or both | both
max_depth | No | Maximum traversal depth (max: 10) | 3
project_id | Yes | Project ID (UUID) | -
environment | No | Environment: staging or production | staging
start_entity_id | Yes | Entity ID of the starting node | -
start_entity_type | Yes | Entity key of the starting node | -
relationship_types | No | Filter by relationship types (UPPER_SNAKE_CASE). Omit for all types. | -

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish readOnly/idempotent safety, while the description adds crucial behavioral context: it returns nodes 'with their distance from the start' and supports 'filtering by relationship types and direction.' This explains the output structure and filtering capabilities not evident in the annotations. It does not mention performance characteristics or pagination behavior for deep traversals, preventing a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The structure follows an efficient progression: purpose statement → return value → use cases → concrete example → filtering capabilities. Every sentence delivers unique information. The YAML-style example is appropriately formatted and the total length is economical for an 8-parameter tool with complex traversal logic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description appropriately explains the return format (reachable nodes with distance). It covers the required parameters and key optional behaviors (filtering, direction). It could be improved by addressing how the 'limit' parameter interacts with depth-first versus breadth-first traversal or result set handling for large graphs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description elevates this by providing a concrete, multi-line example ('Start from "Alan Turing"...') that demonstrates parameter interaction (start_entity_type, start_entity_id, max_depth, direction) and valid value formats, adding practical semantic understanding beyond the schema's type definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Walk the graph from a starting node' and clarifies the scope as 'discovering connected knowledge' and finding 'all nodes reachable within max_depth hops.' This clearly distinguishes it from sibling CRUD tools (create_graph_node, delete_graph_relationship) and search tools (search_graph_nodes, fulltext_search_graph) by emphasizing multi-hop traversal versus single-node operations or text matching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies clear use cases: 'Essential for exploring knowledge graphs — find related concepts, trace connections, discover clusters.' However, it lacks explicit contrast with siblings like get_node_relationships (which likely returns immediate neighbors only) or guidance on when traversal depth makes this tool inappropriate compared to other discovery methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_graph_node (Update Graph Node)
Idempotent

Update properties of an existing node in a deployed graph project. Only send the fields you want to change — unspecified fields remain unchanged.
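The partial-update contract can be illustrated with a short sketch. The payload shape below is an assumption based on the parameter table (the UUID and entity values are placeholders, not real identifiers):

```python
# Hypothetical update_graph_node payload: only the properties being changed
# go in "data"; every other stored property is left untouched.
payload = {
    "project_id": "00000000-0000-4000-8000-000000000000",  # placeholder UUID
    "entity_type": "person",
    "entity_id": "alan-turing-001",
    "environment": "staging",  # optional; defaults to staging
    "data": {
        "birth_year": 1912,  # only this field is updated
        # "name" is omitted, so the existing name is preserved
    },
}

# The partial-update semantics amount to merging "data" over the stored
# properties without dropping any key the caller did not mention.
stored = {"name": "Alan Turing", "birth_year": 1911}
updated = {**stored, **payload["data"]}
print(updated)  # {'name': 'Alan Turing', 'birth_year': 1912}
```

Sending only the changed fields keeps updates safe to retry, which matches the tool's idempotent annotation.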

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| data | Yes | Properties to update (partial update) | |
| entity_id | Yes | The node's entity_id | |
| project_id | Yes | Project ID (UUID) | |
| entity_type | Yes | Entity key (e.g., 'person', 'concept') | |
| environment | No | Environment: staging or production | staging |

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral context beyond annotations: specifies partial update semantics ('unspecified fields remain unchanged') and prerequisite state ('deployed graph project'). Complements annotations (idempotentHint, destructiveHint) without contradiction.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence establishes purpose with scope, second provides essential partial-update guidance. Front-loaded and appropriately sized.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete given rich annotations and 100% schema coverage. Covers operation purpose, partial merge behavior, and deployment context. Absence of output schema means return values need not be explained.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all parameters including the partial update nature of 'data'. Description reinforces partial update concept but doesn't add significant semantic detail beyond what schema provides.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Update' with resource 'node', scope 'properties', and context 'deployed graph project' clearly distinguishes from siblings like create_graph_node, delete_graph_node, and get_graph_node.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explanation of partial update semantics ('Only send the fields you want to change') provides clear usage guidance for the data parameter. Implicitly targets 'existing' nodes distinguishing from create operations, though lacks explicit alternative naming.

update_graph_schema (Update Graph Schema)
Idempotent

Update a graph project's schema (saves to database, does NOT deploy).

⚠️ Follow ALL rules from create_graph_project: • Must have "nodes" key with at least one entity • Each entity needs "description" and "schema" with field definitions • Each field is {"type": "...", "required": true/false} — required defaults to false • Relationships need "from", "to", and "cardinality" • Field types: string, integer, float, boolean, date, json • Relationship types should be UPPER_SNAKE_CASE • Entity names should be PascalCase
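The rules above can be spot-checked before calling the tool. The schema below is an illustrative sketch: the nesting and the "many_to_many" cardinality value are assumptions inferred from the bullet list, not a confirmed server contract.

```python
# A minimal schema following the stated rules: PascalCase entity names,
# UPPER_SNAKE_CASE relationship types, each field a {"type", "required"} pair.
schema = {
    "nodes": {
        "Person": {
            "description": "A person in the knowledge graph",
            "schema": {
                "name": {"type": "string", "required": True},
                "birth_year": {"type": "integer", "required": False},
            },
        },
        "Concept": {
            "description": "An abstract concept",
            "schema": {"name": {"type": "string", "required": True}},
        },
    },
    "relationships": {
        # "many_to_many" is an assumed cardinality value
        "CONTRIBUTED_TO": {"from": "Person", "to": "Concept", "cardinality": "many_to_many"},
    },
}

ALLOWED_TYPES = {"string", "integer", "float", "boolean", "date", "json"}

# Spot-check each documented rule before calling update_graph_schema.
assert schema["nodes"], 'must have a "nodes" key with at least one entity'
for name, entity in schema["nodes"].items():
    assert name[0].isupper() and "_" not in name          # PascalCase entity name
    assert "description" in entity and "schema" in entity  # required entity keys
    for field in entity["schema"].values():
        assert field["type"] in ALLOWED_TYPES              # allowed field types only
for rel_type, rel in schema.get("relationships", {}).items():
    assert rel_type == rel_type.upper()                    # UPPER_SNAKE_CASE
    assert {"from", "to", "cardinality"} <= rel.keys()     # required relationship keys
print("schema passes the documented rules")
```

Running checks like these locally catches rule violations before the save-then-deploy cycle begins.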

WORKFLOW:

  1. Use get_graph_schema to see current schema

  2. Modify following all rules

  3. Call update_graph_schema (saves only)

  4. Call deploy_graph_staging to apply changes

  5. Monitor with get_job_status

NOTE: This only saves the schema. You MUST call deploy_graph_staging afterwards to deploy.
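The five-step workflow above can be sketched as a call sequence. Here `call_tool()` is a stand-in for a real MCP client, and the `job_id` argument to get_job_status is an assumption; only the tool names come from the documentation.

```python
# Stubbed transport: records the call order and returns placeholder responses.
calls = []

def call_tool(name, **args):
    calls.append(name)
    return {"status": "ok", "job_id": "job-123"}  # placeholder fields

project_id = "00000000-0000-4000-8000-000000000000"  # placeholder UUID

current = call_tool("get_graph_schema", project_id=project_id)   # 1. see current schema
new_schema = {"nodes": {"Person": {                              # 2. modify per the rules
    "description": "A person",
    "schema": {"name": {"type": "string", "required": True}},
}}}
call_tool("update_graph_schema", project_id=project_id, schema=new_schema)  # 3. save only
job = call_tool("deploy_graph_staging", project_id=project_id)   # 4. deploy the change
call_tool("get_job_status", job_id=job["job_id"])                # 5. monitor the job

assert calls == ["get_graph_schema", "update_graph_schema",
                 "deploy_graph_staging", "get_job_status"]
```

The key design point the workflow encodes: saving and deploying are separate steps, so forgetting step 4 leaves the live graph unchanged.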

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| schema | Yes | New graph schema with 'nodes' and optionally 'relationships' keys. | |
| project_id | Yes | Project ID (UUID) | |

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (mutable, open-world, idempotent, non-destructive), but the description adds valuable context: it specifies that changes are saved but not deployed, outlines a multi-step workflow, and references rules from 'create_graph_project' for schema structure. No contradictions with annotations are present.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (⚠️ rules, WORKFLOW, NOTE), but it includes some redundancy (e.g., repeating deployment notes) and could be more front-loaded. Most sentences earn their place by providing critical workflow and constraint information.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with nested objects, no output schema), the description is highly complete. It covers purpose, usage workflow, schema rules, and integration with siblings, providing all necessary context for an agent to invoke it correctly without needing output details.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds significant meaning by detailing the schema structure requirements (e.g., 'nodes' key, entity rules, field types, relationship formats), which clarifies the 'schema' parameter beyond the schema's generic description. This compensates for the lack of output schema.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Update a graph project's schema') and resource ('graph project'), and explicitly distinguishes it from deployment operations by noting 'saves to database, does NOT deploy.' It differentiates from siblings like 'deploy_graph_staging' by specifying this is a save-only step in a workflow.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow instructions (steps 1-5), including when to use this tool (step 3 after modifying schema) and when not to use it (requires subsequent deployment). It names alternatives like 'get_graph_schema' for viewing and 'deploy_graph_staging' for applying changes, offering clear contextual guidance.

update_schema (Update Schema)
Idempotent

Update a project's schema (saves to database, does NOT deploy).

⚠️ CRITICAL: Follow ALL rules from create_project: • FLAT format (no 'fields' nesting) • string: MUST have max_length • decimal: MUST have precision + scale • Use "datetime" NOT "timestamp" • DON'T define: id, created_at, updated_at • NEVER create users/customers/employees tables (use app_users)

⚠️ MIGRATION RULES: • New fields MUST be "required": false OR have "default" value • Cannot add required field without default to existing tables • Safe: {new_field: {type: "string", max_length: 100, required: false}}
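The migration rule above can be expressed as a small predicate. The helper is illustrative, not the server's actual validation code, and treating an omitted `required` as optional is an assumption:

```python
# A new field is migration-safe only if it is optional or carries a default.
def is_safe_new_field(field: dict) -> bool:
    return not field.get("required", False) or "default" in field

safe = {"type": "string", "max_length": 100, "required": False}
also_safe = {"type": "integer", "required": True, "default": 0}
unsafe = {"type": "string", "max_length": 50, "required": True}  # rejected on existing tables

print([is_safe_new_field(f) for f in (safe, also_safe, unsafe)])  # [True, True, False]
```

The rationale: existing rows have no value for a newly added column, so a required field without a default cannot be backfilled and the migration would fail.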

WORKFLOW:

  1. Use get_schema to see current schema

  2. Modify following ALL rules

  3. Call update_schema (saves only)

  4. Call deploy_staging to apply changes

  5. Monitor with get_job_status

NOTE: This only saves the schema. You MUST call deploy_staging afterwards to apply changes.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| schema | Yes | New JSON schema in FLAT format (table_name → field_name → properties). Every field MUST have a 'type' property. | |
| project_id | Yes | Project ID (UUID) | |

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations, such as critical rules for schema format, migration constraints, and workflow dependencies. Annotations indicate it's not read-only, open-world, idempotent, and non-destructive, but the description elaborates on specific behaviors like saving without deploying and migration safety rules, enhancing transparency.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (CRITICAL, MIGRATION RULES, WORKFLOW, NOTE) and uses bullet points for clarity. It is front-loaded with key information but includes some redundancy (e.g., repeating deployment steps), slightly reducing efficiency. Overall, it's appropriately sized and organized.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, lack of output schema, and rich annotations, the description is highly complete. It covers purpose, usage rules, behavioral details, parameter guidance, and workflow integration, providing all necessary context for an agent to use the tool effectively without needing additional explanation.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3, but the description adds value by detailing the 'schema' parameter's format requirements (e.g., FLAT format, field type rules) and constraints beyond the schema's generic description. This provides practical guidance, though it doesn't fully explain all parameter nuances, warranting a score above baseline.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's action ('Update a project's schema') and clarifies that it 'saves to database, does NOT deploy', distinguishing it from deployment tools like deploy_staging. It uses specific verbs ('update', 'saves') and a concrete resource ('project's schema'), making the purpose clear and distinct from siblings.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines, including prerequisites ('Use get_schema to see current schema'), workflow steps (1-5), and when to use alternatives ('You MUST call deploy_staging afterwards to apply changes'). It clearly outlines the tool's role in a multi-step process, distinguishing it from related tools.
