Ownership verified

Server Details

Markdown-based note-taking with a hosted MCP server. Your notes serve you and your AI.

Status: Healthy
Transport: Streamable HTTP
Repository: hjarni/hjarni-mcp
GitHub Stars: 0
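
Because the server is exposed over Streamable HTTP, any MCP client can connect to it directly. A minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the endpoint URL is a placeholder for the one listed on this page, and the client name/version are arbitrary:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint; substitute the server URL from this page.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "hjarni-example-client", version: "0.1.0" });

await client.connect(transport);

// Should enumerate the 24 tools reviewed below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```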

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 24 of 24 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but some potential confusion exists between 'notes-list' and 'search' (both retrieve notes with filtering) and 'files-attach' vs 'files-attach_from_url' vs 'files-create_upload_url' (all for file attachment with overlapping functionality). Descriptions help clarify, but the boundaries could be sharper.

Naming Consistency: 5/5

Tool names follow a highly consistent resource-action pattern (e.g., containers-create, notes-get, files-remove), with snake_case used uniformly within multi-word action names (e.g., files-attach_from_url). The prefixing (e.g., containers-, notes-, files-) clearly groups related tools, making the set predictable and easy to navigate.

Tool Count: 4/5

With 24 tools, the count is on the higher side but reasonable for a comprehensive 'Second Brain' note-taking system covering containers, notes, files, tags, teams, and instructions. It feels slightly heavy but not excessive, as each tool serves a specific operational need in the domain.

Completeness: 5/5

The toolset provides complete CRUD/lifecycle coverage for all core resources (containers, notes, files, tags, teams, instructions), including search, linking, and dashboard overviews. No obvious gaps exist; agents can perform all essential operations without dead ends in the note-management workflow.

Available Tools

24 tools
containers-create (A)

Create a new container (folder) for organizing notes. Required: name (string). Optional: description (string), parent_id (integer) for nesting inside another container. After creating, consider setting up LLM instructions with instructions-update.

Parameters (JSON Schema):
- name (required): Container name
- parent_id: Parent container ID for nesting
- description: Container description
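
Assuming the client from the connection sketch above, a first call might look like this; the container name and description are illustrative, not taken from the server's documentation.

```typescript
// Reuses `client` from the connection sketch; all values are made up.
const result = await client.callTool({
  name: "containers-create",
  arguments: {
    name: "Projects",                    // required
    description: "Active work projects", // optional
    // parent_id omitted: creates a root-level container
  },
});
console.log(result.content);
```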
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish this is a non-destructive write operation. The description adds the domain context (note organization) but does not disclose additional behavioral traits like nesting constraints, uniqueness requirements, or success responses.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. The parenthetical '(folder)' efficiently clarifies the container concept, and 'organizing notes' immediately establishes domain relevance without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for the tool's complexity (simple create with 3 params, no output schema). Leverages complete annotations and schema effectively, though it could explicitly mention hierarchical capabilities implied by the parent_id parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all parameters. The description adds the conceptual metaphor ('folder') which supports understanding the parent_id nesting parameter, but does not add syntax or constraint details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create'), resource ('container (folder)'), and domain context ('for organizing notes'), effectively distinguishing it from sibling tools like containers-update or notes-create.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear domain context ('organizing notes') that helps identify when to use this tool versus content creation tools, though it lacks explicit contrasts with containers-update (modification) or specific prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

containers-get (A)
Read-only, Idempotent

Get a single container by ID, including notes_count, children_count, and LLM instructions if set. Optional: include_tree (boolean) to also get ancestor chain and children. Required: id (integer).

Parameters (JSON Schema):
- id (required): Container ID
- include_tree: Include ancestors and children arrays (default: false)
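
A hypothetical argument object for this tool, with include_tree enabled to pull the ancestor chain and children alongside the container itself; the id is illustrative.

```typescript
// Arguments for a containers-get call; id 5 is a made-up example.
const getContainerArgs = {
  id: 5,
  include_tree: true, // also return ancestors and children arrays
};
```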
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With annotations already declaring readOnlyHint=true, the description adds valuable behavioral context: it discloses that LLM instructions are included only 'if set' (conditional presence) and clarifies that include_tree returns 'ancestors and children' (structural relationships). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first establishes purpose and key data fields, the second provides parameter guidance. Information is front-loaded with the core action, and every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single resource retrieval), 100% schema coverage, and read-only annotations, the description adequately covers what is returned (container, LLM instructions, optional tree). It appropriately omits error handling details given the lack of output schema, though mentioning 'not found' behavior would elevate this to a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage (baseline 3), the description adds semantic value by explaining the functional purpose of include_tree ('to also get ancestors and children') rather than just restating the schema's technical description. This helps the agent understand the data structure implications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Get), resource (single container), and identification method (by ID). It distinguishes from siblings containers-list (plural/enumeration), containers-create (write), and containers-update (mutation) through the singular 'get' verb and ID-based targeting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the optional include_tree parameter ('to also get ancestors and children'). However, it lacks explicit guidance on when to choose this tool versus containers-list (e.g., 'use this when you have a container ID; use containers-list to browse').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

containers-list (A)
Read-only, Idempotent

List containers (folders) for organizing notes. Each container includes notes_count and children_count. Optional: team_id (integer) for team containers, scope ('roots' default|'all'|'archived'), page, per_page. Shared containers are automatically included when listing root-level personal containers.

Parameters (JSON Schema):
- page: Page number
- scope: Filter scope (default: roots). 'archived' only for personal containers.
- team_id: List containers in this team instead of personal containers
- per_page: Results per page
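
Sketches of the three main filtering modes, assuming (per the description) that omitting all parameters yields root-level personal containers; the team id is hypothetical.

```typescript
// Default: root-level personal containers (shared containers included).
const listRoots = {};

// Team containers instead of personal ones; team id 12 is made up.
const listTeam = { team_id: 12 };

// Archived scope is valid for personal containers only.
const listArchived = { scope: "archived", page: 1, per_page: 25 };
```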
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only safety. Description adds valuable behavioral context about default scoping (root-level only) and the personal/team context switch that isn't captured in annotations. Could add more about pagination or rate limits, but covers primary behavioral axes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes purpose; second sentence delivers critical default behavior and parameter guidance. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a list tool with 4 parameters and safety annotations. Covers primary filtering behaviors (roots vs team). Lacks explicit mention of pagination behavior or return structure, but schema covers parameter details adequately for an agent to proceed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds semantic context by linking team_id to the personal-vs-team behavioral shift and reinforcing the default state when parameters are omitted. Elevates understanding beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (List) and resource (containers/folders) with context (organizing notes). Clarifies personal vs team scope which implicitly distinguishes from sibling operations, though it doesn't explicitly contrast with containers-get for single-item retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance on default behavior (root-level personal containers) and when to use team_id parameter (for team containers). Lacks explicit 'when not to use' or direct sibling comparisons, but effectively guides the primary use case selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

containers-update (A)
Destructive, Idempotent

Update an existing container — rename, change description, move to a different parent, or set display position. Required: id (integer). Optional: name, description, parent_id (null for root), position (integer, lower = first).

Parameters (JSON Schema):
- id (required): Container ID
- name: New name
- position: Display order position (lower numbers appear first)
- parent_id: New parent container ID (null for root)
- description: New description
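
Since only id is required, partial updates are possible; a hypothetical call that moves a container to the root and pins it first, with illustrative values.

```typescript
// Partial update: fields left out keep their current values.
const moveToRootArgs = {
  id: 5,           // required: which container to update
  parent_id: null, // null re-parents the container to the root level
  position: 1,     // lower numbers appear first
};
```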
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=false and destructiveHint=false. The description adds the specific mutable fields/behaviors beyond the generic 'update' label, but does not disclose behavioral details like partial update semantics (only 'id' is required), validation rules, or side effects of reparenting (e.g., circular reference handling).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the core action. The parenthetical list efficiently maps to the optional parameters without redundancy. Zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and present annotations, the description adequately covers the tool's purpose for a 5-parameter mutation operation. However, it could improve by noting the partial update capability (since only 'id' is required) or referencing error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description maps parenthetical operations to parameters (rename→name, move parent→parent_id, etc.) but adds no semantic meaning beyond what the schema already provides (e.g., no constraints on name length or position integer limits).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Update' and resource 'container'. The parenthetical enumeration of operations (rename, change description, move parent, set position) clearly distinguishes this from sibling tools like containers-create or containers-get, and clarifies the scope of modifications possible.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context by specifying 'existing container', which distinguishes it from containers-create. However, it lacks explicit when-to-use guidance (e.g., 'use this for partial modifications, not for creating new containers') or prerequisites (e.g., 'requires container ID from containers-list').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dashboard-get (A)
Read-only, Idempotent

Get an overview of the Second Brain: counts of notes, containers, tags, inbox items, and the 5 most recently updated notes. No parameters required.

Parameters (JSON Schema): none
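
With the client from the earlier connection sketch, the call needs only an empty arguments object:

```typescript
// dashboard-get takes no parameters; reuses `client` from the connection sketch.
const overview = await client.callTool({ name: "dashboard-get", arguments: {} });
console.log(overview.content); // counts plus the 5 most recently updated notes
```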

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already confirm readOnlyHint=true and destructiveHint=false. The description adds value by disclosing what specific data points are returned (counts vs recent items), but does not address caching behavior, performance characteristics, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence structure with zero redundancy. The colon effectively separates the high-level action from the specific data details. Every word earns its place; no filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameter-less read-only tool, the description adequately covers the return value semantics by listing the specific metrics included (counts and recent notes). While an output schema would be ideal, the textual description provides sufficient context for an agent to understand what data structure to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage (empty object), the baseline score applies. The description does not need to explain input parameters, but helpfully enumerates the output data categories (counts, recent notes) which aids in understanding the tool's utility despite having no inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Get an overview') and enumerates exact resources retrieved (counts of notes, containers, tags, inbox items, recent notes). It clearly distinguishes this aggregate dashboard view from sibling CRUD tools like notes-list or containers-get.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage through the term 'overview' and listing aggregate metrics, it lacks explicit guidance on when to prefer this over specific entity retrieval tools (e.g., notes-list). No 'when-not-to-use' or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

files-attach (A)

Attach a file to a note via base64-encoded data. Prefer files-create_upload_url for large files to save tokens. Required: note_id (integer), filename (string), data (base64 string). Optional: content_type (MIME type, default: application/octet-stream), description.

Parameters (JSON Schema):
- data (required): Base64-encoded file contents
- note_id (required): Note ID
- filename (required): Filename (e.g. report.pdf)
- description: Optional file description
- content_type: MIME type (e.g. application/pdf). Defaults to application/octet-stream
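
A sketch of building the base64 payload in Node.js; the note id and file path are made up, and for large files the description itself recommends files-create_upload_url instead.

```typescript
import { readFileSync } from "node:fs";

// Illustrative arguments for files-attach.
const attachArgs = {
  note_id: 42,                                           // made-up note id
  filename: "report.pdf",
  data: readFileSync("./report.pdf").toString("base64"), // base64-encode the bytes
  content_type: "application/pdf",                       // else defaults to application/octet-stream
};
```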
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=false (write operation) and destructiveHint=false. The description adds the critical behavioral detail that data must be base64-encoded, which is essential for correct invocation. However, it omits other behavioral aspects like error handling, size limits, or return values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient at two sentences. The first establishes the operation, the second specifies the encoding format—both essential. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for the input side given complete schema coverage and annotations. However, no output schema exists, and the description fails to indicate what the tool returns (e.g., attachment ID, success boolean), leaving a gap in the contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with all parameters fully documented (e.g., 'Base64-encoded file contents' for the data field). The description reinforces the base64 requirement but does not add semantic meaning beyond what the schema already provides, meriting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Attach'), resource ('file'), and target ('note'). The mention of 'base64-encoded data' implicitly distinguishes this from the sibling 'files-attach_from_url', clarifying this is for direct content upload.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use this tool (when file contents are available as base64 data) versus URL-based alternatives. While it doesn't explicitly name the sibling alternative, the encoding requirement provides strong implicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

files-attach_from_url (A)

Fetch a file from a public URL and attach it to a note. Follows one redirect. Required: note_id (integer), url (string). Optional: filename (default: derived from URL), content_type (default: from HTTP response), description.

Parameters (JSON Schema):
- url (required): URL to fetch the file from
- note_id (required): Note ID
- filename: Override filename (default: derived from URL)
- description: Optional file description
- content_type: Override MIME type (default: from HTTP response)
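
The URL-based variant needs no encoding; a hypothetical argument object, with a made-up note id and URL.

```typescript
// Arguments for files-attach_from_url; filename and content_type fall back to
// URL-derived and HTTP-response defaults when omitted.
const attachFromUrlArgs = {
  note_id: 42,
  url: "https://example.com/whitepaper.pdf",
  description: "Reference whitepaper",
};
```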
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral detail 'Follows one redirect' which is critical for network operations and not present in annotations. Annotations already cover safety profile (readOnly=false, destructive=false), so the description successfully augments with execution specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Main action is front-loaded, followed by specific behavioral constraint. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given complete schema and annotations, description adequately covers the core operation and redirect behavior. Minor gap: lacks error handling context (what happens if URL is unreachable or note ID invalid), but sufficient for basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, so parameters are fully documented in structured fields. Description does not add semantic nuances beyond the schema (e.g., no examples or format details for URL), warranting baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the specific action (fetch from URL + attach to note) and distinguishes from sibling 'files-attach' by explicitly mentioning the URL source. Verb and resource are precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage scenario through 'from a URL' but lacks explicit guidance on when to use this versus siblings like 'files-attach' (likely for existing files) or 'files-create_upload_url'. No exclusion criteria provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

files-check_upload (A)
Read-only, Idempotent

Check the status of a file upload created by files-create_upload_url. Returns status: 'pending' (not uploaded yet), 'completed' (file attached, includes file metadata), or 'expired' (link timed out). Required: token (string, from files-create_upload_url response).

Parameters (JSON Schema):
- token (required): Upload token from files-create_upload_url response
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true (safe read operation). Description adds valuable behavioral context by enumerating the three specific status values ('pending', 'completed', 'expired') and explaining what each state means in the upload lifecycle, which is not inferable from annotations alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste: first establishes purpose and sibling relationship, second documents return values (compensating for missing output schema). Information is front-loaded and dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter status-check tool with no output schema, the description is complete. It documents the three possible return states and their meanings, providing sufficient information for an agent to interpret results without requiring an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the 'token' parameter ('Upload token from files-create_upload_url response'). Description does not add parameter-specific semantics beyond what the schema already provides, which is acceptable given full schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Check' with resource 'status of a file upload link' and explicitly scopes it to links 'created by files-create_upload_url', clearly distinguishing it from sibling tools like files-create_upload_url or files-attach.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Establishes clear workflow relationship by referencing the sibling tool files-create_upload_url, indicating this is the second step in a two-step upload process. Lacks explicit 'when not to use' guidance, but the dependency implication is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

files-create_upload_url (A)

Generate a one-time upload URL for attaching a file to a note. Share this URL with the user so they can upload directly in their browser — saves tokens by avoiding base64 encoding. The link expires after 30 minutes. Use files-check_upload to verify completion. Required: note_id (integer). Optional: description.

Parameters (JSON Schema):
- note_id (required): Note ID to attach the file to
- description: Optional file description
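
The two-step browser-upload flow might look as follows, reusing the client from the connection sketch. Since no output schema is published, the exact response shape (where the URL and token appear) is an assumption to verify by inspecting the returned content.

```typescript
// Step 1: generate a one-time upload URL for a (made-up) note.
const created = await client.callTool({
  name: "files-create_upload_url",
  arguments: { note_id: 42, description: "Scanned receipt" },
});
// Share the returned URL with the user; it expires after 30 minutes.
console.log(created.content);

// Step 2: verify completion with the token from step 1 (placeholder value).
const status = await client.callTool({
  name: "files-check_upload",
  arguments: { token: "TOKEN_FROM_STEP_1_RESPONSE" },
});
// Expect 'pending', 'completed' (with file metadata), or 'expired'.
console.log(status.content);
```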
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite annotations indicating readOnlyHint=false (implying state change), the description adds critical behavioral context not present in structured fields: the URL is 'one-time', the upload happens in the user's browser rather than through the conversation, and the link carries a specific '30 minutes' TTL. These operational details are essential for correct invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences efficiently cover: (1) core function, (2) user workflow/value prop, (3) time constraint, and (4) companion tool. No redundancy; front-loaded with essential action; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately explains the return value's purpose (URL for user upload) and lifecycle (30-min expiry). However, it could briefly characterize the return structure (e.g., 'returns a URL string') for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'attaching a file to a note' which aligns with the 'note_id' parameter, but adds no additional semantic detail (format constraints, example values) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Generate') and resource ('one-time upload URL'), clearly defining the tool's function. It distinguishes itself from sibling tools like 'files-attach' by emphasizing this is for direct user upload bypassing the conversation, and explicitly references 'files-check_upload' for verification workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance ('Share this URL with the user'), explains the benefit ('saving tokens'), notes the temporal constraint ('expires after 30 minutes'), and names a specific sibling tool ('Use files-check_upload') for the next step. This creates a complete decision tree for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

files-get_download_url (A)
Read-only, Idempotent

Get a temporary download URL for a file attached to a note. Share the URL with the user to download in their browser. URL expires after a few minutes. Required: note_id (integer), file_id (integer, from notes-get response).

Parameters (JSON Schema):
- file_id (required): File ID from notes-get response
- note_id (required): Note ID
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, confirming the safe read-only nature. The description adds crucial behavioral context not present in annotations: the URL is temporary and expires after a few minutes. This discloses an important operational constraint (time-limited validity) that affects how the agent must handle the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: first states purpose, second provides usage instruction, third states operational constraints. Every sentence earns its place with zero redundancy. Information is front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool without an output schema, the description adequately explains what the tool returns (a URL), how to use the result (share with user), and critical behavioral constraints (temporary/expiring). Given the low complexity and absence of output schema, no additional context is required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Note ID' and 'File ID (from notes-get response)'), the schema fully documents parameters. The description references 'a file attached to a note' which aligns with the parameters but adds no additional semantic detail about the parameters themselves beyond what the schema provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the specific action (get a temporary download URL) and resource (file attached to a note), using clear verbs. It effectively distinguishes from sibling tools like files-attach or files-create_upload_url by specifying this generates a download (not upload) URL for existing attachments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance by instructing to 'Share the URL with the user to download in their browser' and warns that the 'URL expires after a few minutes.' While it lacks explicit 'when not to use' statements comparing against all file-related siblings, the expiration constraint provides actionable temporal guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

files-remove (A)
Destructive

Permanently remove a file attachment from a note. This action is irreversible. Required: note_id (integer), file_id (integer, from notes-get response).

Parameters (JSON Schema):
- file_id (required): File ID from notes-get response
- note_id (required): Note ID
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare destructiveHint=true and readOnlyHint=false, establishing the safety profile. The description confirms the destructive nature with 'Remove' but fails to clarify scope: whether the file is permanently deleted from storage or merely detached from the note. No additional behavioral traits (auth, rate limits, side effects) are disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A brief description with zero redundancy. Information is front-loaded and immediately actionable. The brevity is appropriate for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple two-parameter operation, complete schema coverage, and presence of safety annotations, the description is nearly sufficient. Minor gap: does not clarify whether the operation deletes the file entirely or just removes the attachment reference, which would be helpful for an agent determining data integrity implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with parameters fully documented (including the helpful hint that file_id comes from notes-get response). The description reinforces the relationship between parameters ('file attachment from a note') but does not add semantic meaning beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Remove' with clear resource 'file attachment' and context 'from a note'. It effectively distinguishes from sibling tools like files-attach (which adds files) and notes-delete (which would delete the entire note).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the action verb, but provides no explicit guidance on when to use this versus alternatives (e.g., 'use files-attach to add files') or prerequisites (e.g., 'use notes-get to retrieve file_id').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

instructions-get (A)
Read-only, Idempotent

Get LLM instructions at the specified level. Call with level 'brain' early in conversations to learn user preferences. Required: level ('brain'|'personal_root'|'container'|'team'). Optional: id (integer, required for 'container' and 'team' levels). 'container' level returns the full inheritance chain (personal root -> ancestors -> container).

Parameters (JSON Schema):
- id: Container ID or Team ID (required for 'container' and 'team' levels)
- level (required): Instruction level: 'brain' (global), 'personal_root', 'container', or 'team'
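
Hypothetical argument objects for the two most common levels; the container id is illustrative, and per the description the 'container' level returns the full inheritance chain.

```typescript
// Global user preferences; the description suggests calling this early.
const brainArgs = { level: "brain" };

// Container-level instructions (personal root -> ancestors -> container).
const containerArgs = { level: "container", id: 5 }; // id required at this level
```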
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral context beyond annotations: specifies that 'container' level 'includes inheritance chain' (revealing aggregation behavior) and provides timing guidance for 'brain' level. Consistent with readOnlyHint=true annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, perfectly front-loaded: first states purpose, second maps levels to use cases. No redundancy. Parenthetical details ('call early', 'includes inheritance chain') efficiently convey critical behavioral constraints without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple read-only operation with complete schema coverage and clear annotations, the description adequately covers all necessary context: purpose, level semantics, inheritance behavior, and usage timing. No output schema exists but return value is self-evident from 'Get' verb.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, adds semantic meaning to enum values (e.g., explaining 'brain' means 'global instructions', 'container' involves inheritance) and contextualizes when 'id' parameter is needed through the level descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with resource 'LLM instructions' and scope 'at the specified level'. Clearly distinguishes from sibling 'instructions-update' by being read-only, and differentiates from 'containers-get'/'teams-get' by focusing on instruction retrieval rather than entity metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit mapping for each level value ('brain' for global/call early, 'personal_root' for personal space, etc.) and temporal guidance ('call early in conversations'). Implicitly contrasts with update operations but does not explicitly name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

instructions-update (A)
Destructive, Idempotent

Update LLM instructions at the specified level. Required: level ('brain'|'personal_root'|'container'|'team'), instructions (string). Optional: id (integer, required for 'container' and 'team'), mode ('replace' default|'append'). In 'replace' mode (default), the provided text overwrites existing instructions. In 'append' mode, the text is appended to existing instructions with a newline separator. Always read current instructions first before replacing to avoid losing existing content.

Parameters (JSON Schema):
- id: Container ID or Team ID (required for 'container' and 'team' levels)
- mode: Update mode: 'replace' (default) overwrites existing instructions, 'append' adds to them
- level (required): Instruction level to update
- instructions (required): The instructions text. In 'replace' mode (default), this overwrites existing instructions. In 'append' mode, this is appended to existing instructions.
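
A sketch of append mode, which adds to existing instructions rather than overwriting them; the default 'replace' mode would call for reading current instructions first, as the description warns. The instruction text is illustrative.

```typescript
// Append avoids clobbering existing instructions.
const appendArgs = {
  level: "personal_root",
  mode: "append",
  instructions: "Prefer bullet lists over long paragraphs.",
};
```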
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this is a write operation (readOnlyHint: false) and non-destructive (destructiveHint: false). The description adds semantic context for the level parameter but does not disclose behavioral details like the replacement mechanism (described in schema) or immediate effects of the update.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes the core operation, second provides the essential enum mapping. Information is front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter schema with full coverage and present annotations, the description is sufficient for invocation. It could be improved by noting that 'id' is conditionally required or referencing the 'instructions-get' sibling for read operations, but the core functionality is adequately covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, the description adds significant value by explaining what each enum value for 'level' actually represents (e.g., 'brain' means global), which the schema only lists without semantic definitions. It compensates for the schema's lack of conceptual explanation for the level options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Update) and resource (LLM instructions) with scope (at the specified level). It effectively distinguishes from sibling 'instructions-get' by stating the update operation and level-specific scoping.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides crucial mapping of enum values to their semantic meaning ('brain' = global, 'personal_root' = personal space, etc.), which guides correct level selection. However, it lacks explicit guidance on when to use this versus 'instructions-get' or prerequisites like when 'id' is required.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notes-create (A)

Create a new note. Required: title (string). Optional: body (Markdown with [[id:Note Title]] wiki-links), summary, source_url, container_id, tag_list (comma-separated), team_id (to create in a team). Example: {title: 'Meeting Notes', body: '## Agenda\n...', container_id: 5, tag_list: 'meetings, q4'}.

Parameters (JSON Schema):
- body: Note body content (Markdown with [[id:Note Title]] wiki-links)
- title (required): Note title
- summary: Short summary of the note
- team_id: Create note in this team instead of personal space
- tag_list: Comma-separated list of tags (e.g., 'ruby, rails, testing')
- source_url: Source URL reference
- container_id: Container ID to place the note in
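
A runnable variant of the example embedded in the description; the wiki-link target id (12) is made up.

```typescript
// Arguments mirroring the description's example, with an illustrative wiki-link.
const noteArgs = {
  title: "Meeting Notes",
  body: "## Agenda\n- Review [[12:Project Plan]]", // Markdown with [[id:Note Title]] links
  container_id: 5,
  tag_list: "meetings, q4", // comma-separated tags
};
```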
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish this is a non-destructive write operation. The description adds valuable behavioral context: the body supports Markdown, and specifically uses [[id:Note Title]] syntax for wiki-links. It also clarifies the team vs. personal workspace behavior not fully captured by annotations alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose. Every sentence adds value: sentence 1 defines the action, sentence 2 explains content format, sentence 3 explains workspace scoping. No redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters but 100% schema coverage and good annotations, the description adequately covers the primary usage patterns (markdown support, wiki-link syntax, team placement). It omits explicit mention of container_id, tag_list, and source_url, but these are well-documented in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description repeats the wiki-link format already present in the schema and adds minimal new semantic detail for the other 6 parameters (title, summary, tag_list, source_url, container_id).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Create a new note,' providing a specific verb and resource. It implicitly distinguishes from sibling tools (notes-get, notes-list, notes-update, notes-delete) by specifying the 'create' action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for the team_id parameter ('Use team_id to create in a team'), implying the default is personal space. However, it lacks explicit guidance on when to use this versus notes-update for existing notes, or how it relates to containers-create.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notes-delete (B)
Destructive

Permanently delete a note. This action is irreversible — the note and all its file attachments are destroyed. Required: id (integer).

Parameters (JSON Schema):
- id (required): Note ID
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While the description aligns with annotations (destructiveHint: true) by stating 'Permanently', it adds little behavioral context beyond what annotations provide. It does not disclose recovery options, error conditions, or what the tool returns on success.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise. Every word serves a purpose, with 'Permanently' front-loaded to emphasize the destructive nature. No redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter destructive operation with complete schema coverage and clear annotations, the description is minimally sufficient. However, given the destructive nature, it could improve by mentioning output behavior (void vs. confirmation) or cascading effects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the single 'id' parameter is fully documented in the schema itself. The description adds no parameter-specific context, which is acceptable given the high schema coverage, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete) and resource (note) with the critical qualifier 'permanently' indicating irreversibility. However, it doesn't explicitly differentiate from sibling tools like files-remove, though the tool name makes the resource clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., when to delete vs. update a note) or prerequisites (e.g., ownership requirements). It merely states what the tool does.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notes-get (A)
Read-only, Idempotent

Get a single note by ID, including its full Markdown body, tags, container path, linked notes (outgoing), backlinks (incoming links from other notes), file attachments, and inherited LLM instructions. Required: id (integer).

Parameters (JSON Schema):
- id (required): Note ID
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations confirm this is read-only and non-destructive, the description adds valuable behavioral context about the return payload, specifying it includes 'full body content, tags, container, linked notes, and file attachments'. This compensates for the absence of an output schema by detailing what data richness to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that front-loads the core action and efficiently enumerates all included content types without redundant phrasing. Every clause serves to clarify the retrieval scope or return value composition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single required parameter, read-only operation) and comprehensive annotations, the description adequately explains the retrieval behavior and return value structure. The enumeration of returned fields provides sufficient completeness despite the lack of a formal output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'id' parameter, the description reinforces the parameter's purpose by mentioning 'by ID' but does not add additional semantic details like ID format or discovery methods. This meets the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get' with the resource 'note' and clarifies the scope 'single note by ID'. This effectively distinguishes it from sibling tools like notes-list (plural retrieval) and notes-create/notes-update (mutation operations).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'by ID' clearly establishes that this tool requires a specific identifier, implicitly guiding users toward list or search tools for discovery first. However, it does not explicitly name these alternative tools or provide explicit 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notes-listA
Read-onlyIdempotent
Inspect

List notes with optional filtering, sorting, and pagination. Returns paginated results. Optional: team_id (integer) to list team notes, scope ('active'|'archived'|'inbox'|'favorited'), container_id (integer) with include_nested (boolean), tags (array of strings, AND logic), tag_ids (array of integers, AND logic), summary_stale (boolean, filter to notes with outdated summaries), sort ('recent'|'oldest'|'title'), page (integer, default 1), per_page (integer, max 100, default 25). Example: list ruby-tagged notes in a container: {container_id: 5, tags: ['ruby']}.

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number (default: 1)
sortNoSort order: 'recent' (updated_at desc, default), 'oldest' (updated_at asc), or 'title' (alphabetical)
tagsNoFilter to notes with ALL these tags by name (AND logic). Example: ['ruby', 'rails']
scopeNoFilter scope (default: active). 'inbox' and 'favorited' only for personal notes.
tag_idsNoFilter to notes with ALL these tags by ID (AND logic)
team_idNoList notes in this team instead of personal notes
per_pageNoResults per page, max 100 (default: 25)
container_idNoFilter by container ID
summary_staleNoFilter to notes with outdated summaries (default: not filtered)
include_nestedNoInclude notes from sub-containers when container_id is set (default: false)
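
As a sketch of the filtering and pagination described above, the helper below reuses a connected session from the earlier notes-get example; the stop condition is an assumption, since the response payload shape is not documented.

```python
from mcp import ClientSession

async def list_ruby_notes(session: ClientSession, container_id: int) -> None:
    """Page through notes tagged 'ruby' in a container, sub-containers included."""
    page = 1
    while True:
        result = await session.call_tool("notes-list", {
            "container_id": container_id,
            "include_nested": True,   # only meaningful when container_id is set
            "tags": ["ruby"],         # AND logic across all listed tags
            "sort": "recent",
            "page": page,
            "per_page": 100,          # documented maximum
        })
        print(result.content)
        if not result.content:        # assumed stop condition; no output schema
            break
        page += 1
```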
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only safety (readOnlyHint=true), so the description adds valuable behavioral context by disclosing 'Returns paginated results'—critical information given the lack of an output schema. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose and critical behavior (pagination), followed by a dense enumeration of the optional parameters and a worked example. Minimal redundancy; the long 'Optional:' run is slightly mechanical but keeps the guidance scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 10 parameters and no output schema, the description covers pagination and major filtering mechanisms adequately but does not describe the return structure (what fields the notes objects contain). Sufficient for basic invocation but leaves output handling ambiguous.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage (baseline 3), the description adds meaningful usage context by grouping related parameters (linking container_id with include_nested, distinguishing the tags and tag_ids variants) and clarifying their relationships beyond individual parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (List) and resource (notes) clearly, and mentions filtering/sorting capabilities. However, it does not explicitly differentiate from the sibling 'search' tool, which could cause confusion about when to use listing vs. searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides concrete usage patterns for key parameters (team_id for teams, container_id with include_nested for sub-containers, tag variants for filtering). However, it lacks explicit when-to-use/when-not-to-use guidance contrasting with siblings like 'notes-get' or 'search'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

notes-updateB
DestructiveIdempotent
Inspect

Update a note. Required: id (integer). Optional content: title, body (full replace), append_body (appends to existing body — mutually exclusive with body), summary, source_url. Optional organization: container_id (move note), archived (boolean, personal only), favorited (boolean). Optional tags: tag_list (full replace, comma-separated), add_tags (comma-separated tags to add), remove_tags (comma-separated tags to remove). tag_list takes precedence over add_tags/remove_tags if both provided. Example — append and add a tag: {id: 42, append_body: '\n## Update\nNew info here', add_tags: 'updated'}.

ParametersJSON Schema
NameRequiredDescriptionDefault
idYesNote ID (required)
bodyNoNew body content — full replacement (mutually exclusive with append_body)
titleNoNew title
summaryNoNew summary
add_tagsNoComma-separated tags to add to existing tags (ignored if tag_list is provided)
archivedNoArchive (true) or unarchive (false) the note. Personal notes only.
tag_listNoFull replacement comma-separated tag list (takes precedence over add_tags/remove_tags)
favoritedNoFavorite (true) or unfavorite (false) the note. Personal and team notes.
source_urlNoNew source URL
append_bodyNoContent to append to the existing body (mutually exclusive with body)
remove_tagsNoComma-separated tags to remove from existing tags (ignored if tag_list is provided)
container_idNoMove to this container
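
The mutual-exclusivity and precedence rules are easiest to see in a call. The sketch below mirrors the documented append-and-tag example, again assuming a connected session:

```python
from mcp import ClientSession

async def append_update(session: ClientSession, note_id: int) -> None:
    """Append a section to an existing note and tag it."""
    await session.call_tool("notes-update", {
        "id": note_id,
        # append_body is mutually exclusive with body (full replacement);
        # send one or the other, never both.
        "append_body": "\n## Update\nNew info here",
        # add_tags/remove_tags are ignored whenever tag_list is also provided.
        "add_tags": "updated",
    })
```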
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations flag this as a destructive, idempotent write (readOnlyHint: false, destructiveHint: true), consistent with body and tag_list performing full replacement. The description implies partial-update semantics by enumerating optional field groups, but does not clarify error behavior or what happens if the note ID doesn't exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Densely structured with zero redundancy: a one-line purpose statement, grouped field listings (content, organization, tags), a precedence rule, and a worked example. No filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich schema (12 parameters, 100% coverage) and safety annotations, the description covers the essential functionality. However, for a mutation tool, it lacks details on success/failure signals, partial update confirmation, or side effects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing a baseline of 3. The description restates the mutual-exclusivity and precedence rules already present in the schema and adds a worked example, but contributes little further syntactic detail or format constraint beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool updates existing notes and enumerates specific capabilities (archiving, favoriting, container moves) that distinguish it from sibling tools like notes-create or notes-delete. However, it does not explicitly contrast with creation or deletion workflows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists capabilities but provides no explicit guidance on when to use this versus notes-create (e.g., 'use this to modify existing notes by ID, not to create new ones'), nor does it suggest how to obtain the required note ID first (e.g., via search or notes-list).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tags-createCInspect

Create a new tag. Check tags-list first to avoid duplicates. Required: name (string). Tag names are automatically lowercased.

ParametersJSON Schema
NameRequiredDescriptionDefault
nameYesTag name (required)
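
A sketch of the advised workflow follows: consult tags-list before creating, assuming a connected session. The duplicate inspection is left manual because the list response shape is undocumented.

```python
from mcp import ClientSession

async def create_tag(session: ClientSession, name: str) -> None:
    """Create a tag after checking the existing list, as the description advises."""
    existing = await session.call_tool("tags-list", {"per_page": 100})
    print(existing.content)  # inspect for duplicates; response shape undocumented
    # Names are lowercased server-side, so 'Ruby' and 'ruby' would collide.
    await session.call_tool("tags-create", {"name": name})
```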
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish this is a non-destructive write operation (readOnlyHint=false, destructiveHint=false). Beyond noting that names are automatically lowercased, the description adds no behavioral context, failing to disclose return values (ID? full object?), idempotency, or what happens when a duplicate name is submitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief: four short sentences with no redundancy or filler. While appropriately concise for a single-parameter tool, it borders on under-specification and could state uniqueness constraints or return values without sacrificing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one required string parameter) and absence of an output schema, the description meets minimum viability by identifying the core action, the lowercasing behavior, and a duplicate-avoidance workflow. However, it omits return value documentation and what actually happens if a duplicate name is submitted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the 'name' parameter. The description restates the requirement and notes the automatic lowercasing, but adds no further semantic context such as length limits or allowed characters, so it sits at the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb (create) and resource (tag), making the basic purpose unambiguous. However, it lacks specificity about what kind of tag is being created (e.g., for files, notes, or containers) and does not differentiate from sibling operations like tags-list beyond the obvious CRUD distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Beyond advising a tags-list check to avoid duplicates, the description offers no guidance on when to use this tool versus alternatives, permission prerequisites, or what happens if a tag with the same name already exists. No 'when-not' scenarios or broader workflow context is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tags-listC
Read-onlyIdempotent
Inspect

List all tags with their notes_count. Paginated. Optional: page (integer), per_page (integer).

ParametersJSON Schema
NameRequiredDescriptionDefault
pageNoPage number
per_pageNoResults per page
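
A paginated walk might look like the sketch below; the fixed page bound is an assumption, since neither a total count nor the pagination defaults are documented.

```python
from mcp import ClientSession

async def dump_tags(session: ClientSession) -> None:
    """List tags (with notes_count) page by page."""
    for page in range(1, 4):  # assumed bound; no total count is documented
        result = await session.call_tool("tags-list", {"page": page, "per_page": 50})
        print(result.content)
```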
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing this is a safe read operation. Beyond flagging pagination, the description adds no behavioral context about page defaults or total result limits, and notes_count is the only hint at what 'tags' represent in this system's domain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The terse description is maximally efficient with zero redundancy. However, it borders on underspecification given the lack of an output schema and the unclear relationship to the broader tag system.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list operation with full schema coverage, the description meets minimum viability but, beyond the notes_count field, lacks context about return value structure (no output schema exists) or how tags relate to other entities (notes, files) in the sibling set.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the 'page' and 'per_page' parameters, the schema carries the semantic burden. The description doesn't add usage guidance for these pagination parameters (e.g., default behavior when omitted), warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (List) and resource (tags), making the tool's function immediately apparent. While it doesn't explicitly differentiate from sibling 'tags-create', the distinction is implicit in the action verb.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search' or how to handle pagination. It doesn't indicate whether clients should paginate through all results or if there are limits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

teams-getB
Read-onlyIdempotent
Inspect

Get team details including the 10 most recent notes. Required: id (integer).

ParametersJSON Schema
NameRequiredDescriptionDefault
idYesTeam ID (required)
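
For completeness, a one-call sketch, assuming a connected session:

```python
from mcp import ClientSession

async def show_team(session: ClientSession, team_id: int) -> None:
    """Fetch one team's details, including its 10 most recent notes."""
    result = await session.call_tool("teams-get", {"id": team_id})
    print(result.content)
```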
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish readOnlyHint=true and destructiveHint=false. The description adds valuable context that the 10 most recent notes are included in the response, which is not evident from annotations. However, it omits what other 'details' are returned alongside them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the action verb. There is no redundant or filler text; every word contributes to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single required parameter, read-only operation, no output schema), the description adequately covers the essential behavior. It specifies the inclusion of recent notes, providing sufficient context for an agent to understand what data is retrieved without needing to detail the response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the 'id' parameter is fully documented as 'Team ID' in the schema), the baseline is 3. The description adds no additional semantic meaning for the parameter (e.g., where to obtain the ID, format specifics) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific action ('Get') and resource ('team details'), and adds scope via 'including recent notes.' However, it does not explicitly differentiate from sibling 'teams-list' (collection vs. single resource retrieval), though the singular/plural naming provides implicit distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not clarify whether to use 'teams-list' when searching for a team without an ID, or whether 'notes-list' is preferred when only notes are needed rather than full team details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

teams-listA
Read-onlyIdempotent
Inspect

List all teams the user is a member of, including members_count, notes_count, and containers_count for each team. No parameters required.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters
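
Since the tool takes no parameters, the call reduces to an empty arguments object, as in this sketch (again assuming a connected session):

```python
from mcp import ClientSession

async def show_teams(session: ClientSession) -> None:
    """Enumerate the user's teams with their member/note/container counts."""
    result = await session.call_tool("teams-list", {})
    print(result.content)
```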

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable scope constraint ('user is a member of') defining the result set, but does not disclose pagination behavior, rate limits, or return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action and scope. No redundant words; the phrase 'the user is a member of' is essential for defining the result set boundary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only list operation with good annotations coverage and no parameters, the description is sufficient. Minor gap: no output schema exists and description does not characterize the return format (e.g., list of IDs vs full objects).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool accepts zero parameters with 100% schema coverage. Per calibration guidelines, 0 parameters establishes a baseline of 4; the description correctly implies no configuration is needed by omitting parameter discussion.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb ('List'), resource ('teams'), and scope constraint ('the user is a member of'), clearly distinguishing it from sibling 'teams-get' which likely retrieves a single team.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage context by specifying 'all teams,' suggesting it is for enumeration rather than specific retrieval, but does not explicitly contrast with 'teams-get' or state when to prefer one over the other.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
