Glama

Server Details

Create guides as MCP servers that instruct coding agents how to use your software (library, API, etc.).

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.9/5 across 13 of 13 tools scored. Lowest: 2.3/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no ambiguity. Tools are organized around specific resources (projects, data sources, tools/steps) and actions (create, delete, update, list, get, toggle, reorder), making it easy for an agent to select the correct one. For example, 'add_data_source' and 'delete_data_source' are clearly complementary operations on data sources, with no overlap with project or tool management tools.

Naming Consistency: 5/5

Tool names follow a highly consistent verb_noun pattern throughout. All tools use snake_case with clear verbs like add, create, delete, get, list, reorder, toggle, and update paired with specific nouns (e.g., data_source, project, tool). There are no deviations in style or convention, making the set predictable and readable.

Tool Count: 5/5

With 13 tools, the count is well-scoped for managing projects, data sources, and tools/steps in an anonymous session. Each tool earns its place by covering essential CRUD operations and lifecycle management (e.g., create, update, delete, list, get, toggle, reorder), without being excessive or thin for the domain.

Completeness: 5/5

The tool surface provides complete CRUD/lifecycle coverage for the domain of project management with data sources and tools. It includes creation (create_project, create_session, add_data_source, add_tool), retrieval (get_project_details, list_projects), update (update_project, update_tool, toggle_tool, reorder_steps), and deletion (delete_project, delete_data_source, delete_tool), with no obvious gaps that would cause agent failures.
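
The CRUD coverage described above can be summarized as a small mapping. The tool names are taken from this page; the grouping itself is only an illustration, not part of the server's API.

```python
# The 13 tools, grouped by lifecycle stage as described above.
TOOLS_BY_OPERATION = {
    "create": ["create_session", "create_project", "add_data_source", "add_tool"],
    "read": ["list_projects", "get_project_details"],
    "update": ["update_project", "update_tool", "toggle_tool", "reorder_steps"],
    "delete": ["delete_project", "delete_data_source", "delete_tool"],
}

# 4 + 2 + 4 + 3 = 13 tools in total
total = sum(len(names) for names in TOOLS_BY_OPERATION.values())
```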

Available Tools

13 tools
add_data_source: C

Add a new data source to a project

Parameters (JSON Schema):
  url (required): The URL of the data source
  type (required): The type of data source
  session_id (required): The anonymous session ID
  project_slug (required): The slug of the project to add the data source to
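
As a sketch, the schema above translates into a JSON-RPC 2.0 'tools/call' request like the following, per the framing MCP uses. The 'session_id', slug, and 'type' values are placeholders; the schema excerpt does not list valid 'type' values.

```python
import json

# Minimal MCP "tools/call" request for add_data_source. A real session_id
# would come from an earlier create_session call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_data_source",
        "arguments": {
            "session_id": "sess-placeholder",
            "project_slug": "my-project",
            "url": "https://example.com/docs",
            "type": "documentation",  # placeholder: valid values undocumented
        },
    },
}

body = json.dumps(request)
```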
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is an 'Add' operation (implying mutation/creation) but doesn't cover permissions needed, whether it's idempotent, rate limits, error conditions, or what happens on success (e.g., returns a data source ID). For a mutation tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without redundancy. It's front-loaded with the core action and resource, making it easy to parse. Every word earns its place with no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 4 required parameters and no annotations or output schema, the description is insufficient. It doesn't explain what 'adding a data source' entails operationally, what values are valid for the 'type' enum beyond what's in the schema, or what the tool returns. The agent lacks context for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema. The description adds no additional parameter semantics beyond implying 'project_slug' identifies the target project and 'url'/'type' define the data source. This meets the baseline for high schema coverage but doesn't enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Add') and resource ('data source to a project'), making the purpose immediately understandable. It distinguishes from siblings like 'delete_data_source' by specifying creation rather than removal. However, it doesn't fully differentiate from 'create_project' or 'update_project' in terms of project modification scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing project), exclusions, or relationships with sibling tools like 'delete_data_source' or 'get_project_details'. The agent must infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

add_tool: C

Add a new tool/step to a project

Parameters (JSON Schema):
  name (required): The unique name of the tool (used as identifier)
  tool_type (optional, default "step"): The type of tool (step, builtin, tip)
  is_enabled (optional): Whether the tool is enabled
  session_id (required): The anonymous session ID
  step_order (optional): The order of this step in the project workflow
  description (optional): Description of what the tool does
  tool_schema (optional): JSON schema for tool parameters
  display_name (required): The human-readable display name of the tool
  project_slug (required): The slug of the project to add the tool to
  response_template (optional): Markdown template for tool response
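
A minimal sketch of assembling 'add_tool' arguments, assuming the required/optional split shown in the table. The helper name and validation are illustrative, not part of the server.

```python
def build_add_tool_args(name, display_name, session_id, project_slug, **optional):
    """Build an add_tool arguments dict; only the four required keys are fixed."""
    allowed = {"tool_type", "is_enabled", "step_order",
               "description", "tool_schema", "response_template"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unknown add_tool arguments: {sorted(unknown)}")
    args = {
        "name": name,
        "display_name": display_name,
        "session_id": session_id,
        "project_slug": project_slug,
    }
    # Omitted optional keys fall back to server defaults (e.g. tool_type "step").
    args.update(optional)
    return args

minimal = build_add_tool_args("fetch_docs", "Fetch Docs",
                              "sess-placeholder", "my-project")
```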
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'add' implies a creation/mutation operation, the description doesn't address permissions needed, whether this is idempotent, what happens on conflicts, or what the response contains. For a tool with 10 parameters and no annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without unnecessary words. It's appropriately sized for what it communicates, though what it communicates is limited in scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 10 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what happens after the tool is added, what validation occurs, whether there are constraints on tool names, or what the response looks like. The context demands more completeness than this minimal description provides.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so each parameter is documented in the schema itself. The description adds no additional parameter information beyond the basic action. This meets the baseline of 3 for high schema coverage, but doesn't provide any extra semantic context about how parameters interact or their significance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('add') and the resource ('new tool/step to a project'), making the purpose understandable. However, it doesn't differentiate this tool from its siblings like 'create_project' or 'update_tool'; it just describes what it does without comparative context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'create_project', 'update_tool', 'delete_tool', and 'reorder_steps', there's no indication of when this specific 'add' operation is appropriate versus those other operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_project: C

Create a new project in the anonymous session

Parameters (JSON Schema):
  name (required): The display name of the project
  slug (required): The URL slug for the project (used in URLs)
  is_public (optional): Whether the project is public or private
  session_id (required): The anonymous session ID
  overview_content (optional): Markdown content for the project overview
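
The schema asks for both a display name and a URL slug without saying how they relate. A common convention, assumed here rather than documented by this server, is to derive the slug from the name:

```python
import re

def slugify(name: str) -> str:
    """Lowercase, replace non-alphanumeric runs with hyphens, trim hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

args = {
    "session_id": "sess-placeholder",
    "name": "My Docs Guide",
    "slug": slugify("My Docs Guide"),  # "my-docs-guide"
}
```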
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers limited behavioral insight. It mentions 'anonymous session' context but doesn't disclose permissions, rate limits, side effects (e.g., project visibility), or response format. For a creation tool with zero annotation coverage, this leaves significant gaps in understanding its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It front-loads the core action and context without unnecessary details, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a creation tool with 5 parameters. It lacks details on behavioral aspects (e.g., what happens post-creation), error handling, or return values, leaving the agent with insufficient context to use it effectively beyond basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional parameter semantics beyond implying 'session_id' ties to 'anonymous session'. It doesn't explain relationships between parameters (e.g., 'slug' vs. 'name') or usage nuances, relying entirely on the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create') and resource ('new project'), specifying it occurs 'in the anonymous session'. It distinguishes from siblings like 'update_project' or 'list_projects' by focusing on creation. However, it doesn't explicitly differentiate from 'create_session' or explain what a 'project' entails beyond the context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance, implying usage when a new project is needed in an anonymous session. It lacks explicit when-to-use rules, prerequisites (e.g., session must exist), or alternatives (e.g., vs. 'update_project' for modifications). No exclusions or comparisons to siblings are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_session: B

Create a new anonymous session for managing projects

Parameters (JSON Schema):
  No parameters
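
Since 'create_session' takes no parameters while every other tool requires a 'session_id', the implied workflow is to call it first and thread the returned id through later calls. A sketch under that assumption (no output schema is published, so the id's shape is a guess); the helper only plans the follow-up calls, it does not execute anything:

```python
def plan_setup(session_id, slug, name, source_url, source_type):
    """Return (tool_name, arguments) pairs, in call order, after create_session."""
    return [
        ("create_project",
         {"session_id": session_id, "slug": slug, "name": name}),
        ("add_data_source",
         {"session_id": session_id, "project_slug": slug,
          "url": source_url, "type": source_type}),
    ]

calls = plan_setup("sess-placeholder", "my-project", "My Project",
                   "https://example.com/docs", "documentation")
```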

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this creates something ('Create a new anonymous session') which implies a write operation, but doesn't clarify authentication requirements, session lifespan, what 'managing projects' entails, or potential side effects. The description is too vague for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's appropriately sized for a zero-parameter tool and gets straight to the point without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool that creates sessions (implying stateful operations) with no annotations and no output schema, the description is insufficient. It doesn't explain what a session provides, how it's used with other tools, what 'anonymous' means in practice, or what the expected outcome is. The agent would struggle to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, earning a baseline score of 4 for not adding unnecessary information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create') and resource ('new anonymous session for managing projects'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'create_project' or explain what distinguishes a 'session' from a 'project' in this context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'create_project' or 'list_projects'. It doesn't mention prerequisites, timing considerations, or what happens after session creation, leaving the agent with minimal usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_data_source: C

Delete a data source from a project

Parameters (JSON Schema):
  session_id (required): The anonymous session ID
  data_source_id (required): The ID of the data source to delete
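
Note that 'delete_data_source' is addressed by 'data_source_id', while most sibling tools use a project slug; the id presumably has to come from an earlier 'get_project_details' response. A sketch under that assumption (the response shape is a guess, since no output schema is published):

```python
def find_data_source_id(project_details: dict, url: str):
    """Look up a data source id by URL in an assumed get_project_details payload."""
    for source in project_details.get("data_sources", []):
        if source.get("url") == url:
            return source.get("id")
    return None
```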
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a data source, implying a destructive operation, but doesn't clarify if this is reversible, requires specific permissions, has side effects (e.g., on dependent tools), or provides confirmation feedback. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste—it directly states the tool's action and target without unnecessary words. It's appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a destructive operation with 2 parameters), lack of annotations, and no output schema, the description is incomplete. It doesn't address behavioral aspects like safety, permissions, or response format, which are critical for an agent to invoke this tool correctly and safely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (session_id and data_source_id). The description adds no additional meaning beyond what the schema provides, such as explaining the relationship between session and data source or deletion constraints. Baseline 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a data source from a project'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'delete_project' or 'delete_tool', which would require specifying what distinguishes data source deletion from other deletion operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a valid session or project), exclusions, or relationships with sibling tools like 'add_data_source' or 'delete_project', leaving the agent without contextual usage information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_project: C

Delete a project from the anonymous session

Parameters (JSON Schema):
  session_id (required): The anonymous session ID
  project_slug (required): The slug of the project to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action is 'Delete', implying a destructive mutation, but doesn't specify if this is irreversible, requires specific permissions, or has side effects like cascading deletions. This leaves significant gaps for a tool with clear destructive intent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It is front-loaded with the core action and resource, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is incomplete. It lacks critical details like whether the deletion is permanent, what happens to associated data, error conditions, or confirmation requirements, leaving the agent with insufficient context for safe operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents both parameters ('session_id' and 'project_slug') adequately. The description adds no additional meaning beyond implying these parameters are used for deletion, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a project from the anonymous session'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'delete_data_source' or 'delete_tool', but the resource specificity is adequate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'delete_data_source' or 'delete_tool', nor does it mention prerequisites such as needing an existing session or project. The description only restates the basic function without contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_tool: C

Delete a tool from a project

Parameters (JSON Schema):
  tool_id (required): The ID of the tool to delete
  session_id (required): The anonymous session ID
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('Delete') but doesn't clarify critical aspects such as whether deletion is permanent, requires specific permissions, affects related data, or has side effects. This is inadequate for a destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is insufficient. It lacks details on behavioral traits (e.g., permanence, permissions), expected outcomes, or error handling, leaving significant gaps in understanding how to use it safely and effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear documentation for both parameters ('tool_id' and 'session_id'). The description doesn't add any meaning beyond what the schema provides, such as explaining parameter relationships or usage context, but the schema's completeness justifies the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a tool from a project'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'delete_data_source' or 'delete_project', which follow similar patterns but target different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing a valid session or existing tool), exclusions, or related operations like 'toggle_tool' or 'update_tool' that might be alternatives for modifying tool states.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_project_details: C

Get detailed information about a specific project in the session

Parameters (JSON Schema):
  session_id (required): The anonymous session ID
  project_slug (required): The slug of the project to get details for
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states it 'gets' information, implying a read-only operation without confirming safety, permissions, or response behavior. It misses details like whether it requires authentication, handles errors, or returns structured data, which are critical for a tool with undocumented output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without fluff. It is appropriately sized and front-loaded, though it could be slightly more informative without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a tool that retrieves 'detailed information'. It fails to explain what details are returned, error handling, or behavioral traits, leaving significant gaps in understanding how to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the two parameters. The description adds no extra meaning beyond implying parameters identify a project in a session, aligning with the baseline for high schema coverage without enhancing parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed information about a specific project'), making the purpose understandable. However, it does not explicitly differentiate from sibling tools like 'list_projects' or 'update_project', which would require mentioning it retrieves read-only details rather than listing or modifying.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance by specifying 'in the session', but it lacks explicit when-to-use instructions, alternatives (e.g., vs. 'list_projects'), or exclusions. No prerequisites or contextual cues are given, leaving usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_projects: C

List all projects in the anonymous session

Parameters (JSON Schema):
  session_id (required): The anonymous session ID
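
A sketch of chaining 'list_projects' into per-project 'get_project_details' calls. The assumed list-response shape (a 'projects' array with 'slug' fields) is a guess, since no output schema is published.

```python
def details_requests(list_response: dict, session_id: str):
    """Build one get_project_details arguments dict per listed project."""
    return [
        {"name": "get_project_details",
         "arguments": {"session_id": session_id, "project_slug": p["slug"]}}
        for p in list_response.get("projects", [])
    ]
```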
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states the tool lists projects but doesn't disclose behavioral traits such as pagination, rate limits, authentication needs (implied by 'anonymous session' but not explicit), or what 'list all' entails (e.g., format, limits). This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with zero waste—it directly states the tool's purpose without redundancy. It's appropriately sized and front-loaded, making it efficient for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple input schema, the description is incomplete. It lacks details on return values (what 'list' outputs), error handling, or behavioral context. For a tool with minimal structured data, more guidance is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'session_id' fully described in the schema. The description adds no additional meaning beyond implying the session is anonymous, which is already covered. Baseline is 3 since the schema does the heavy lifting, and no extra param info is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('projects'), specifying the scope ('in the anonymous session'). It distinguishes from siblings like 'get_project_details' by indicating it lists all projects rather than retrieving details of one. However, it doesn't explicitly differentiate from other list-like operations that might exist, keeping it at 4.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance by implying usage when needing to list projects in an anonymous session, but it lacks explicit when-to-use instructions, alternatives (e.g., vs. 'get_project_details'), or prerequisites. No context on when not to use it is given, making it basic.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reorder_steps (C)

Reorder tools/steps in a project

Parameters (JSON Schema)
- session_id (required): The anonymous session ID
- tool_orders (required): Array of tool IDs and their new step orders
- project_slug (required): The slug of the project to reorder tools in
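The entry format of `tool_orders` is undocumented, so any concrete call involves guesswork. A hedged sketch of the arguments, assuming each entry pairs a `tool_id` with a `step_order` (that pairing is an assumption, not confirmed by the schema):

```python
# Sketch of arguments for reorder_steps. The per-entry shape of
# tool_orders ({"tool_id", "step_order"}) is an assumption; the
# schema only says "array of tool IDs and their new step orders".
arguments = {
    "session_id": "anon-session-123",  # placeholder
    "project_slug": "my-guide",        # placeholder
    "tool_orders": [
        {"tool_id": "tool-a", "step_order": 1},
        {"tool_id": "tool-b", "step_order": 2},
    ],
}
# Client-side sanity check, since the description does not say how
# duplicate step orders are resolved: keep them unique.
orders = [t["step_order"] for t in arguments["tool_orders"]]
assert len(orders) == len(set(orders)), "duplicate step orders"
```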
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Reorder' implies a mutation operation, but the description doesn't state whether this requires specific permissions, how conflicts are handled (e.g., duplicate step orders), what happens on success/failure, or if it's idempotent. For a mutation tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. There is zero wasted verbiage, and it directly addresses what the tool does without unnecessary elaboration. This makes it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a mutation tool with no annotations and no output schema, the description is insufficient. It lacks behavioral details (e.g., error handling, side effects), usage context, and output expectations. While the schema covers parameters, the overall context for safe and correct invocation is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the three parameters (session_id, project_slug, tool_orders). The description adds no additional meaning beyond what's in the schema—it doesn't explain the relationship between 'tools' and 'steps', the format of tool IDs, or how step orders are interpreted. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('reorder') and the resource ('tools/steps in a project'), making the purpose immediately understandable. It distinguishes itself from siblings like 'add_tool' or 'delete_tool' by focusing on reordering rather than adding/removing. However, it doesn't specify whether 'tools/steps' refers to a specific workflow or sequence, leaving some ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., existing tools/steps to reorder), exclusions, or how it relates to siblings like 'update_tool' or 'toggle_tool'. Without context, the agent must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

toggle_tool (C)

Toggle a tool's enabled/disabled state

Parameters (JSON Schema)
- tool_id (required): The ID of the tool to toggle
- session_id (required): The anonymous session ID
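Toggle semantics matter to an agent: the name suggests the tool flips state rather than setting it, so an agent that wants a specific state must first read it. A small sketch of the arguments plus a local model of the implied flip behavior (the semantics are inferred from the name, not documented):

```python
# Sketch of arguments for toggle_tool; both values are placeholders.
arguments = {
    "tool_id": "tool-a",
    "session_id": "anon-session-123",
}

def toggle(enabled: bool) -> bool:
    """Local model of the flip semantics implied by 'toggle'."""
    return not enabled

# Two toggles are a no-op under flip semantics, which is exactly
# why agents need the description to spell this out.
state = True
state = toggle(state)
state = toggle(state)
assert state is True
```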
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'toggle' implies a mutation operation, the description doesn't specify permissions required, whether the change is reversible, side effects, or response behavior. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and target, making it easy to parse quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a mutation tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., idempotency, error conditions), output expectations, or how it fits within the broader tool management context provided by sibling tools, leaving significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (tool_id and session_id). The description adds no additional meaning beyond what the schema provides, such as explaining what 'toggle' entails for the tool state or how session_id relates to the operation. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('toggle') and target ('a tool's enabled/disabled state'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling tools like 'add_tool', 'delete_tool', or 'update_tool', which also manage tools but perform different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing tool to toggle), exclusions, or comparisons to siblings like 'update_tool' that might modify tool states differently, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_project (C)

Update an existing project in the anonymous session

Parameters (JSON Schema)
- name (optional): The display name of the project
- slug (optional): The URL slug for the project (used in URLs)
- is_public (optional): Whether the project is public or private
- session_id (required): The anonymous session ID
- project_slug (required): The slug of the project to update
- overview_content (optional): Markdown content for the project overview
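Partial-versus-full update semantics are unspecified here. A sketch of arguments that sends only the fields being changed, on the (unconfirmed) assumption that omitted optional fields are left untouched:

```python
# Sketch of arguments for update_project. Whether omitted optional
# fields are left untouched (partial update) is an assumption the
# description does not confirm; this sketch sends only the fields
# being changed. All values are placeholders.
arguments = {
    "session_id": "anon-session-123",  # required
    "project_slug": "my-guide",        # required: which project
    "is_public": True,                 # optional field being changed
    "overview_content": "# My Guide\n\nUpdated overview.",
}
# Both required fields must always be present.
required = {"session_id", "project_slug"}
assert required <= arguments.keys()
```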
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the 'anonymous session' context but doesn't explain what that entails (e.g., authentication needs, session lifetime, or mutation effects). For a tool that updates data, this lack of detail on permissions, side effects, or error handling is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and context without unnecessary words. Every part of the sentence serves a purpose, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 6 parameters and no annotations or output schema, the description is inadequate. It lacks details on what happens during updates (e.g., partial vs. full updates, validation rules, or response format), leaving critical behavioral aspects unspecified for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what's already in the schema, which has 100% coverage with clear descriptions for all 6 parameters. This meets the baseline for high schema coverage, but doesn't provide additional context like default behaviors or interdependencies between parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update') and resource ('an existing project'), and specifies the context ('in the anonymous session'). However, it doesn't differentiate this from sibling tools like 'update_tool' or explain what distinguishes project updates from other operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal context about when to use this tool by mentioning 'anonymous session,' but offers no guidance on when to choose this over alternatives like 'create_project' or 'delete_project,' nor does it mention prerequisites or constraints beyond the session context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_tool (C)

Update an existing tool

Parameters (JSON Schema)
- name (optional): The unique name of the tool (used as identifier)
- tool_id (required): The ID of the tool to update
- tool_type (optional): The type of tool (step, builtin, tip)
- is_enabled (optional): Whether the tool is enabled
- session_id (required): The anonymous session ID
- step_order (optional): The order of this step in the project workflow
- description (optional): Description of what the tool does
- tool_schema (optional): JSON schema for tool parameters
- display_name (optional): The human-readable display name of the tool
- response_template (optional): Markdown template for tool response
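A sketch of an update_tool call. The three `tool_type` values come from the schema above; every other value is a placeholder:

```python
# Sketch of arguments for update_tool. The tool_type enum (step,
# builtin, tip) is taken from the schema; other values are
# placeholders. Only tool_id and session_id are required.
VALID_TOOL_TYPES = {"step", "builtin", "tip"}

arguments = {
    "tool_id": "tool-a",               # required
    "session_id": "anon-session-123",  # required
    "tool_type": "step",
    "display_name": "Install dependencies",
    "step_order": 1,
}
# Client-side guard against an invalid enum value, since the
# description does not say how the server reports validation errors.
assert arguments["tool_type"] in VALID_TOOL_TYPES
```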
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. 'Update an existing tool' implies a mutation operation, but the description fails to mention what permissions are required, whether changes are reversible, what happens to unspecified fields, or any rate limits or side effects. For a mutation tool with 10 parameters and no annotation coverage, this is a significant gap in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence with zero wasted words. It is appropriately sized for what it communicates (though what it communicates is minimal). There is no unnecessary elaboration or structural issues.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, mutation operation), lack of annotations, and absence of an output schema, the description is severely incomplete. It doesn't explain what fields can be updated, what the response looks like, or any behavioral context. For a tool that modifies existing resources with multiple parameters, this description provides inadequate context for proper agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all 10 parameters are documented in the input schema. The description adds no additional parameter information beyond what's already in the structured schema. According to the scoring rules, when schema coverage is high (>80%), the baseline score is 3 even with no parameter information in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Update an existing tool' is a tautology that merely restates the tool name 'update_tool'. It provides no specific information about what aspects of a tool can be updated, what resources are involved, or how this differs from sibling tools like 'toggle_tool' or 'add_tool'. The description lacks any meaningful elaboration beyond the obvious.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
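As an illustration of the fix this dimension asks for, a hypothetical rewrite of the description; the wording below is a sketch, not the server's actual text:

```python
# Hypothetical rewrite of the update_tool description; illustrative
# only, not the server's actual text.
current = "Update an existing tool"
proposed = (
    "Update fields of an existing tool/step (name, type, schema, "
    "step order, response template) identified by tool_id. "
    "Prefer toggle_tool to flip only the enabled state, and "
    "reorder_steps to change several step orders at once."
)
# The rewrite names the resource, the updatable fields, and the
# sibling tools it should be distinguished from.
for sibling in ("toggle_tool", "reorder_steps"):
    assert sibling in proposed
```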

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides absolutely no guidance about when to use this tool versus alternatives. There is no mention of prerequisites (e.g., needing an existing tool ID), when this should be used instead of 'toggle_tool' or 'delete_tool', or any context about appropriate scenarios. The agent receives zero usage direction from this description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
