Ownership verified

Server Details

A self-improving memory layer. Your memory, notes, tasks and goals, remembered everywhere.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

18 tools
create_task (Create Task) - quality grade: A

Create a task. For recurring tasks, provide freq and recurrence pattern fields.

One-off task fields: deadline, scheduled_date, has_scheduled_time, has_deadline_time.
Recurring task fields (require freq): scheduled_time, deadline_offset_days, deadline_time, days_of_week, days_of_month, week_position, months, interval.
Do NOT mix one-off and recurring fields (e.g. scheduled_date + scheduled_time is invalid).

Recurrence examples:

  • Every weekday at 9am: freq="weekly", days_of_week=[1,2,3,4,5], scheduled_time="09:00"

  • Every 3 days: freq="daily", interval=3

  • Monthly on the 15th: freq="monthly", days_of_month=[15]

  • First Monday of month: freq="monthly", week_position=1, days_of_week=[1]

Frequency rules:

  • daily: No modifier fields needed

  • weekly: days_of_week required [0=Sun..6=Sat]

  • monthly: Either days_of_month OR (week_position + single day_of_week)

  • yearly: months [1-12] + days_of_month required

Use scheduled_date for when to work on it, deadline for when it must be done. Time support: Both deadline and scheduled_date support full ISO datetime. Set has_scheduled_time=true when scheduling at a specific time.
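The field-mixing and freq rules above can be sketched as a small client-side validator. This is a hypothetical illustration, not part of the server: the field names follow the schema, but the check itself is an assumption based on the stated "do NOT mix" rule.

```python
# Hypothetical pre-flight check for create_task arguments, based on the
# documented rule that one-off and recurring fields must not be mixed.

ONE_OFF_FIELDS = {"deadline", "scheduled_date", "has_scheduled_time", "has_deadline_time"}
RECURRING_FIELDS = {"scheduled_time", "deadline_offset_days", "deadline_time",
                    "days_of_week", "days_of_month", "week_position", "months", "interval"}

def validate_create_task(args: dict) -> None:
    """Raise ValueError for argument sets the server would reject."""
    used_one_off = ONE_OFF_FIELDS & args.keys()
    used_recurring = RECURRING_FIELDS & args.keys()
    if used_recurring and "freq" not in args:
        raise ValueError(f"recurring fields {sorted(used_recurring)} require freq")
    if used_one_off and used_recurring:
        raise ValueError("do not mix one-off and recurring fields")

# "Every weekday at 9am" from the examples above:
weekday_standup = {
    "title": "Daily standup",
    "tags": ["work"],
    "freq": "weekly",
    "days_of_week": [1, 2, 3, 4, 5],  # 0=Sun..6=Sat, so Mon-Fri
    "scheduled_time": "09:00",
}
validate_create_task(weekday_standup)  # passes silently

# Invalid per the rules: scheduled_date (one-off) + scheduled_time (recurring)
try:
    validate_create_task({"title": "x", "tags": [], "scheduled_date": "2026-03-01",
                          "freq": "daily", "scheduled_time": "09:00"})
except ValueError as e:
    print(e)  # do not mix one-off and recurring fields
```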

Parameters

Name | Required | Description | Default
freq | No | Recurrence frequency - if provided, creates recurring task |
tags | Yes | Tags to categorize the task (max 5, each max 50 chars) |
title | Yes | Task title - the primary identifier (required, max 200 chars) |
months | No | Months [1=Jan..12=Dec] for yearly |
content | No | Optional description/notes for the task (max 2000 chars) |
deadline | No | Deadline in ISO format (for one-off tasks) |
interval | No | Repeat every N freq-periods (default: 1) |
priority | No | Priority level | medium
days_of_week | No | Days of week [0=Sun..6=Sat] for weekly/monthly-by-position |
days_of_month | No | Days of month [1-31, -1=last day] for monthly/yearly |
deadline_time | No | Time of day for deadline (HH:MM format, recurring only) |
week_position | No | Week position: 1-5 or -1 (last) for monthly by-position |
scheduled_date | No | Scheduled date/time in ISO format (for one-off tasks) |
scheduled_time | No | Time of day for each recurrence (HH:MM format) |
duration_minutes | No | Estimated duration in minutes (1-1439, max ~24h) |
has_deadline_time | No | Set to true if deadline includes a specific time |
has_scheduled_time | No | Set to true if scheduled_date includes a specific time |
deadline_offset_days | No | Days after occurrence date for deadline (recurring only) |

Output Schema

Name | Required | Description
task | No |
error | No |
success | Yes |
recurrence_id | No | Recurrence template ID (set if recurring task was created)
recurrence_pattern | No | Human-readable recurrence description
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover basic safety profile (readOnlyHint: false, destructiveHint: false). Description adds substantial domain behavior: validation rules about field exclusivity, recurrence pattern logic with concrete examples, and time handling nuances. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Length is appropriate for 18 parameters with complex mutual constraints. Well-structured with clear sections: field lists, prohibition rules, examples, and frequency rules. Each sentence earns its place by conveying validation logic or examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Thoroughly addresses the tool's complexity by covering recurrence rules, field validation, and examples. Given the presence of an output schema, the description appropriately omits return value details while fully specifying input behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds significant value by grouping parameters by task type (one-off vs. recurring), explaining field interdependencies, and providing usage examples that clarify semantic intent beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States clear verb+resource ('Create a task') and distinguishes internal modes (one-off vs. recurring). However, it does not explicitly differentiate from sibling tools like update_task or save_goal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when/when-not guidance for field combinations ('Do NOT mix one-off and recurring fields'), frequency-specific requirements, and semantic usage ('Use scheduled_date for when to work on it'). Lacks explicit naming of alternative sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_goal (Delete Goal) - quality grade: A
Annotations: Destructive, Idempotent

Permanently delete a goal.

This is a hard delete - the goal cannot be recovered. Use update_goal with status='achieved' if you want to mark a goal as achieved instead.

Parameters

Name | Required | Description | Default
goal_id | Yes | ID of the goal to delete |

Output Schema

Name | Required | Description
error | No |
success | Yes |
Behavior: 4/5

While annotations declare destructiveHint=true, the description adds critical domain-specific context explaining 'This is a hard delete - the goal cannot be recovered.' This clarifies the irreversible nature of the operation beyond the boolean flag, though it could mention idempotency explicitly given the idempotentHint=true annotation.

Conciseness: 5/5

Three sentences each serving distinct purposes: (1) action definition, (2) behavioral severity warning, (3) alternative guidance. No wasted words, front-loaded with the critical permanent nature, logically sequenced.

Completeness: 4/5

Given the destructive nature and presence of output schema, the description covers the essential behavioral traits (hard delete vs soft delete) and recovery implications. It successfully warns of consequences without needing to detail return values since output schema exists.

Parameters: 3/5

Schema coverage is 100% with goal_id fully described as 'ID of the goal to delete'. The description provides baseline adequacy by implying the target through the action 'delete a goal', matching the schema without adding redundant detail.

Purpose: 5/5

The description opens with 'Permanently delete a goal' providing a specific verb (delete) and resource (goal). It distinguishes itself from sibling tools like save_goal and update_goal by emphasizing the permanent/hard delete nature versus updating status.

Usage Guidelines: 5/5

Explicitly states when NOT to use the tool ('if you want to mark a goal as achieved instead') and names the specific alternative (update_goal with status='achieved'). This provides clear guidance for the common confusion between deletion and status updates.

delete_memory (Delete Memory) - quality grade: A
Annotations: Destructive, Idempotent

Permanently delete a memory. This cannot be undone.

Parameters

Name | Required | Description | Default
memory_id | Yes | ID of the memory to delete |

Output Schema

Name | Required | Description
error | No |
success | Yes |
Behavior: 3/5

Reinforces destructiveHint=true with 'permanently' and 'cannot be undone', adding permanence context. However, fails to disclose idempotentHint=true (safe to retry), which is a critical behavioral trait for delete operations not obvious from the description.

Conciseness: 5/5

Two sentences totaling eight words. Front-loaded with action, zero redundancy, every word earns its place.

Completeness: 4/5

Adequate for a simple single-parameter destructive operation given presence of annotations and output schema. Would benefit from mentioning idempotency behavior to be complete regarding all annotated behavioral traits.

Parameters: 3/5

Schema coverage is 100% and the parameter is well-documented in the schema. Description adds no extra parameter semantics, which is acceptable given the high schema coverage, meeting the baseline of 3.

Purpose: 5/5

Uses specific verb 'delete' with resource 'memory', and 'permanently' distinguishes from update_memory. Resource type clearly differentiates from sibling delete operations (delete_goal, delete_note, delete_task).

Usage Guidelines: 3/5

Provides irreversibility warning ('cannot be undone') implying when not to use, but lacks explicit alternatives (e.g., use update_memory for modifications) or explicit when-to-use guidance beyond the action itself.

delete_note (Delete Note) - quality grade: A
Annotations: Destructive, Idempotent

Permanently delete a note.

Use when:

  • Knowledge is no longer relevant

  • Note was created in error

  • Information is outdated and should be removed

Parameters

Name | Required | Description | Default
note_id | Yes | ID of the note to delete |

Output Schema

Name | Required | Description
error | No |
success | Yes |
Behavior: 3/5

The word 'Permanently' reinforces the destructiveHint=true annotation, adding emphasis on irreversibility. However, the description does not address behavioral aspects not covered by annotations, such as idempotency implications (idempotentHint=true), cascading deletion effects, or error handling when the UUID does not exist.

Conciseness: 5/5

The description is optimally structured with the core action stated first, followed by bullet-pointed usage scenarios. Every sentence earns its place—there is no redundancy or extraneous information despite covering action, permanence, and use cases.

Completeness: 4/5

For a single-parameter destructive operation with complete annotations and an output schema (per context signals), the description provides sufficient context. It appropriately delegates return value details to the output schema and safety properties to annotations, though it could briefly note the idempotent nature for complete behavioral clarity.

Parameters: 3/5

With 100% schema description coverage for the single note_id parameter (including format, pattern, and description), the schema fully documents the input structure. The description adds no parameter-specific details, which is acceptable given the comprehensive schema documentation, warranting the baseline score of 3.

Purpose: 5/5

The description opens with the specific action 'Permanently delete' and the resource 'note', clearly distinguishing it from sibling tools like delete_goal, delete_memory, and delete_task. The scope is immediately unambiguous.

Usage Guidelines: 4/5

The 'Use when:' section provides three explicit scenarios for invocation (irrelevant knowledge, creation error, outdated information). While this offers clear usage context comparable to a 'when-to-use' specification, it does not mention alternatives such as update_note for modifications or contrast with delete_memory for knowledge management.

delete_task (Delete Task) - quality grade: A
Annotations: Destructive, Idempotent

Delete a task. For recurring tasks, use apply_to to control scope.

  • "this" (default): For recurring tasks, skips this occurrence (prevents re-showing as virtual). For one-off tasks, permanently deletes.

  • "all_future": Deletes the recurrence template. Stops generating new instances. Already-completed instances are preserved.
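The two ID formats and apply_to scopes above can be illustrated with a small helper. This is a hypothetical sketch for clarity: the "virtual:uuid:YYYY-MM-DD" format comes from the parameter docs, but the parsing and the summary strings are assumptions, not server behavior.

```python
# Hypothetical illustration of delete_task scoping by task_id format and apply_to.

def describe_delete(task_id: str, apply_to: str = "this") -> str:
    """Summarize what a delete_task call with these arguments would do."""
    if apply_to == "all_future":
        return "delete recurrence template; completed instances preserved"
    if task_id.startswith("virtual:"):
        # virtual IDs look like "virtual:<uuid>:YYYY-MM-DD"
        _, recurrence_id, occurrence_date = task_id.split(":")
        return f"skip occurrence {occurrence_date} of recurrence {recurrence_id}"
    return "permanently delete this one-off task"

print(describe_delete("virtual:123e4567:2026-03-02"))
# skip occurrence 2026-03-02 of recurrence 123e4567
print(describe_delete("550e8400-e29b", apply_to="all_future"))
# delete recurrence template; completed instances preserved
```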

Parameters

Name | Required | Description | Default
task_id | Yes | Task ID (real UUID or virtual ID like "virtual:uuid:YYYY-MM-DD") |
apply_to | No | Scope: "this" for single task/occurrence, "all_future" to delete recurrence template | this

Output Schema

Name | Required | Description
error | No |
success | Yes |
Behavior: 4/5

Annotations establish destructiveness and idempotency; the description adds valuable behavioral context including what gets preserved (completed instances), virtual task handling, and the distinction between skipping occurrences versus deleting templates. No contradictions with annotations.

Conciseness: 4/5

Front-loaded with the core action ('Delete a task'), followed immediately by the primary complexity (recurring tasks). The bulleted enum explanations are necessary and well-structured, though the quote formatting is slightly verbose.

Completeness: 5/5

Fully complete for a deletion tool with complexity. Covers the recurrence edge case thoroughly. Output schema exists (so return values needn't be described), and annotations provide safety metadata, allowing the description to focus on business logic.

Parameters: 4/5

Despite 100% schema coverage, the description significantly enriches parameter semantics by explaining the behavioral consequences of each enum value beyond the schema's basic scope definitions, particularly the nuanced effects on recurring tasks.

Purpose: 5/5

The description explicitly states 'Delete a task' with specific verb and resource. It distinguishes from siblings (delete_goal, delete_note, etc.) by detailing task-specific recurrence behavior, virtual IDs, and occurrence handling that is unique to the task domain.

Usage Guidelines: 4/5

Provides excellent guidance on when to use each apply_to value ('this' vs 'all_future') with clear behavioral outcomes. However, it lacks explicit comparison to sibling delete_* tools (e.g., 'use delete_goal for goals instead').

get_note (Get Note) - quality grade: A
Annotations: Read-only, Idempotent

Retrieve the full content of a specific note by ID.

Use after search_notes to load the detailed content of a note you need. Returns the complete note including full content (up to 10,000 chars).

Parameters

Name | Required | Description | Default
note_id | Yes | ID of the note to retrieve |

Output Schema

Name | Required | Description
note | Yes |
error | No |
Behavior: 4/5

Adds critical behavioral details absent from annotations: content size limit (10,000 chars) and return completeness. Correctly aligns with readOnlyHint=true by describing a retrieval operation.

Conciseness: 5/5

Three sentences, zero waste. Front-loaded with purpose, followed by usage guideline, then return value constraints. Every sentence earns its place.

Completeness: 4/5

Comprehensive for a single-parameter read operation. Mentions return characteristics without duplicating output schema (which exists). Includes size limits and workflow context.

Parameters: 3/5

Schema coverage is 100% with complete description of note_id. Description implies parameter sourcing via 'Use after search_notes' but does not augment the schema's parameter definition, warranting baseline 3.

Purpose: 5/5

Clear verb 'Retrieve' + resource 'note' + scope 'full content by ID'. Distinguishes from sibling 'search_notes' by specifying 'full content' vs implied search summaries.

Usage Guidelines: 5/5

Explicit workflow guidance: 'Use after search_notes to load the detailed content'. Names the sibling tool directly and establishes the correct invocation sequence.

get_user_profile (Get User Profile) - quality grade: A
Annotations: Read-only, Idempotent

PRIMARY TOOL - Call this at the START of every conversation to load comprehensive user context.

Returns:

  • current_datetime: Current date and time in the user's timezone (ISO 8601 with offset)

  • All active facts about the user (preferences, personal info, relationships)

  • tasks_overdue: Tasks with scheduled_date OR deadline in the past

  • tasks_today: Tasks scheduled OR due today (time >= now), plus unscheduled tasks (no date set)

  • tasks_tomorrow: Tasks scheduled OR due tomorrow (includes projected recurring tasks)

  • Active goals

  • Recent moments from the last 5 days

  • Latest 15 user-facing notes (id + description). Use get_note to retrieve full content.

  • ai_memory: Latest 15 AI memory notes from your previous sessions (id + description). Use get_note to retrieve full content.

  • tasks_recently_completed: Tasks completed or skipped in the last 7 days

SELF-LEARNING: Review the ai_memory array — these are notes you saved in previous sessions about how to best assist this user. Load relevant ones with get_note. Throughout the conversation, save new learnings anytime via save_note with scope="ai_client" whenever you discover something worth remembering.

Each task includes:

  • category_reason: 'scheduled' | 'deadline' | 'both' - explains why it's in that array

  • has_scheduled_time: true if task has a specific scheduled time, false if all-day

  • has_deadline_time: true if deadline has a specific time, false if all-day

Task placement uses scheduled_date when present, otherwise deadline. Each task appears in exactly one category.
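The placement rule above (scheduled_date when present, otherwise deadline; exactly one category per task) can be sketched as a categorizer. The field names match the tool's output; the comparison logic and the "later" fallback are assumptions for illustration, not the server's exact algorithm.

```python
# Hypothetical sketch of how a task lands in one bucket, per the stated rule.
from datetime import date, timedelta

def categorize(task: dict, today: date) -> str:
    """Slot a task into exactly one category by scheduled_date, else deadline."""
    raw = task.get("scheduled_date") or task.get("deadline")
    if raw is None:
        return "tasks_today"  # unscheduled tasks surface in today's list
    d = date.fromisoformat(raw[:10])  # tolerate full ISO datetimes
    if d < today:
        return "tasks_overdue"
    if d == today:
        return "tasks_today"
    if d == today + timedelta(days=1):
        return "tasks_tomorrow"
    return "later"  # not surfaced in the three profile buckets

today = date(2026, 2, 7)
print(categorize({"deadline": "2026-02-06"}, today))                 # tasks_overdue
print(categorize({"scheduled_date": "2026-02-08T09:00:00"}, today))  # tasks_tomorrow
print(categorize({}, today))                                         # tasks_today
```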

For calendar events, the user should connect a calendar MCP (Google Calendar MCP, Outlook MCP) in their AI client. Query those MCPs alongside Anamnese for a complete daily view.

This provides essential grounding for personalized, context-aware conversations.

Parameters

No parameters

Output Schema

Name | Required | Description
facts | Yes |
goals | Yes |
notes | Yes | Latest 15 user-facing notes (description + id). Use get_note or search_notes to retrieve full content.
moments | Yes | Recent moments from the last 5 days
timezone | Yes | User's IANA timezone (e.g., "America/New_York")
ai_memory | Yes | Your own persistent memory from previous sessions — what you have learned about this user and how to help them. Review these and load relevant ones with get_note. Save new learnings anytime via save_note with scope="ai_client".
tasks_today | Yes | Tasks scheduled OR due today (time >= now)
tasks_overdue | Yes | Tasks with past scheduled_date OR past deadline
tasks_tomorrow | Yes | Tasks scheduled OR due tomorrow (includes projected recurring tasks)
current_datetime | Yes | Current date and time in user's timezone (e.g., "Saturday, February 7, 2026 14:30 (America/New_York)")
tasks_recently_completed | Yes | Tasks completed or skipped in the last 7 days. Avoid re-suggesting these.
Behavior: 4/5

While annotations establish read-only/idempotent safety, the description adds crucial behavioral context: pagination limits (15 notes, 5 days), timezone handling (ISO 8601 with offset), task categorization logic ('scheduled_date when present, otherwise deadline'), and self-learning workflows ('Review the ai_memory array'). Could disclose error conditions or rate limits for a 5.

Conciseness: 4/5

Despite length, every sentence serves distinct purposes: primary directive, output specification, cross-tool relationships, and data interpretation rules. Well-structured with clear visual hierarchy (buckets, bullet points, capitalized section headers). Minor deduction for density that could overwhelm, though justified by output complexity.

Completeness: 5/5

Exceptionally complete for a zero-parameter, high-output tool. Explains return structure in lieu of visible output schema, documents task field semantics ('category_reason', 'has_scheduled_time'), temporal filtering logic, and integration patterns with external calendar MCPs. Addresses the full conversation lifecycle from initialization through self-learning.

Parameters: 4/5

Zero parameters present, triggering baseline score of 4 per rubric. No parameters require semantic explanation, and the empty schema is inherently unambiguous.

Purpose: 5/5

The description explicitly states the tool 'loads comprehensive user context' and details specific data categories returned (datetime, tasks, goals, memories, notes). It distinguishes itself from siblings by declaring itself the 'PRIMARY TOOL' to call at the 'START of every conversation,' clearly establishing its unique role as the initialization/grounding tool versus mutation or search tools.

Usage Guidelines: 5/5

Provides explicit when-to-use guidance ('Call this at the START of every conversation'). Identifies complementary tools ('use get_note to retrieve full content') and alternatives ('Query those MCPs [calendar] alongside Anamnese'). Includes workflow instructions for self-learning ('save new learnings...via save_note').

save_goal (Save Goal) - quality grade: A

Create a new goal for the user.

Goals are aspirational objectives that represent longer-term ambitions. They can be personal or professional.

Examples:

  • Personal: "Learn Spanish to conversational fluency", "Run a marathon"

  • Professional: "Get promoted to senior engineer", "Launch my own product"
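The examples above translate directly into save_goal argument sets. The sketch below is illustrative only; the length check mirrors the schema's stated 5-500 char constraint on content and is an assumption about validation, not documented server behavior.

```python
# Hypothetical client-side sanity check for save_goal arguments.

def valid_goal(args: dict) -> bool:
    """Mirror the schema constraints: content 5-500 chars, goal_type required."""
    content = args.get("content", "")
    return isinstance(content, str) and 5 <= len(content) <= 500 and "goal_type" in args

personal = {"content": "Run a marathon", "goal_type": "personal", "tags": ["health"]}
professional = {"content": "Launch my own product", "goal_type": "professional"}

print(valid_goal(personal))                                   # True
print(valid_goal({"content": "Gym", "goal_type": "personal"}))  # False: under 5 chars
```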

Parameters

Name | Required | Description | Default
tags | No | Tags to categorize the goal (max 5, each max 50 chars) |
content | Yes | Goal description - what the user wants to achieve (5-500 chars) |
goal_type | Yes | Type of goal |

Output Schema

Name | Required | Description
goal | No |
error | No |
success | Yes |
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description confirms the write operation implied by annotations (readOnlyHint: false). It adds valuable domain context about what constitutes a goal (aspirational vs. immediate) and provides concrete examples. However, it does not disclose operational details like error handling, duplicate creation behavior (though idempotentHint: false exists), or return value structure beyond what the output schema provides.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear action statement first, followed by conceptual definitions and categorized examples. Every sentence earns its place—the conceptual distinction is necessary given the create_task sibling. The examples are formatted efficiently and helpfully.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema (not shown but indicated) and comprehensive input annotations (100% coverage), the description appropriately focuses on domain definition and differentiation. It adequately covers the tool's purpose without needing to describe return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value beyond the schema by providing concrete examples of content ('Learn Spanish to conversational fluency', 'Run a marathon') that illustrate expected parameter values, style, and length constraints for the 'content' field, helping the agent construct valid inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Create a new goal' and defines goals as 'aspirational objectives' and 'longer-term ambitions,' which implicitly distinguishes them from tasks (create_task). However, it does not explicitly reference sibling tools like update_goal or create_task to clarify when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit usage guidance by characterizing goals as 'longer-term ambitions' versus presumably shorter-term tasks, and provides clear categorization (personal/professional). However, it lacks explicit 'when to use vs. alternatives' statements, such as when to choose this over create_task or how it differs from update_goal.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_memory (Save Memory): Grade B

Save a memory about the user.

type "fact" = Stable truths (preferences, personal info, relationships). Stays true for months/years. Examples: "User is a software engineer", "User prefers TypeScript", "User's dog is named Max"

type "moment" = Time-bound events, decisions, experiences. Include occurred_at when timing matters. Examples: "User decided to prioritize X", "User had a meeting about Q1 OKRs"

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tags | No | Tags to categorize the memory (max 5, each max 50 chars) | |
| type | Yes | Memory type: "fact" for stable truths, "moment" for time-bound events | |
| content | Yes | The memory content (10-5000 chars) | |
| occurred_at | No | ISO datetime when the moment occurred, e.g. "2025-06-15T14:30:00-05:00" (defaults to now, only for moments) | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| error | No | |
| memory | No | |
| success | Yes | |
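The fact/moment split and the occurred_at rule above can be enforced before calling the tool. A minimal sketch, assuming a hypothetical `build_save_memory_args` helper; it mirrors the documented constraints and is not a real SDK.

```python
def build_save_memory_args(content, memory_type, occurred_at=None):
    """Assemble arguments for a hypothetical save-memory call.

    'fact' = stable truth, 'moment' = time-bound event; occurred_at
    applies only to moments (the server defaults it to now).
    """
    if memory_type not in ("fact", "moment"):
        raise ValueError("type must be 'fact' or 'moment'")
    if not (10 <= len(content) <= 5000):
        raise ValueError("content must be 10-5000 characters")
    args = {"type": memory_type, "content": content}
    if occurred_at is not None:
        if memory_type != "moment":
            raise ValueError("occurred_at is only valid for moments")
        args["occurred_at"] = occurred_at
    return args

# Examples drawn from the description above
fact = build_save_memory_args("User prefers TypeScript", "fact")
moment = build_save_memory_args(
    "User had a meeting about Q1 OKRs", "moment",
    occurred_at="2025-06-15T14:30:00-05:00",
)
```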
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a write operation (readOnlyHint: false) and non-destructive. The description adds valuable semantic context: facts persist months/years, moments are time-bound, and occurred_at is specifically for moments. However, given idempotentHint: false in annotations, the description should disclose that repeated calls likely create duplicates and whether the system performs deduplication, which it does not.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with a clear mission statement followed by type definitions with clear headings and illustrative examples. Every sentence serves a purpose—either defining scope, explaining taxonomy, or guiding parameter usage. No redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers the domain model thoroughly and output schema exists to document returns, it lacks critical behavioral information given the tool's non-idempotent nature (idempotentHint: false). For a persistence tool, it should clarify whether duplicate detection exists or if the agent must check 'search_memories' first, which is essential for correct usage but absent here.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents technical constraints. The description adds significant semantic value beyond the schema: concrete examples for each type ('User prefers TypeScript', 'User had a meeting'), durability semantics for facts, and guidance linking 'occurred_at' specifically to moments. This helps the agent understand appropriate parameter values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Save[s] a memory about the user' with specific verb and resource. It effectively distinguishes the two memory types (fact vs moment) with detailed examples and durability expectations. However, it fails to clarify the distinction from sibling tool 'update_memory' (whether this creates new memories only or upserts existing ones), which creates ambiguity in the presence of similar tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides excellent guidance on when to use 'fact' versus 'moment' types and when to include the 'occurred_at' parameter. However, it offers no guidance on when to use this tool versus 'update_memory' or whether to check 'search_memories' first to avoid duplicates, nor does it mention prerequisites or preconditions for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_note (Save Note): Grade A

Save a new note with learned knowledge or procedures.

Notes store knowledge you learn during conversations that might be useful later:

  • How to do something in this codebase/project

  • Procedures, configurations, or technical details

  • Solutions to problems encountered

  • Project-specific knowledge

Notes have two parts:

  • description: Short summary for searching (max 500 chars)

  • content: Detailed knowledge (max 10,000 chars)

Use notes for LEARNED KNOWLEDGE. Use facts for TRUTHS ABOUT THE USER.

Examples:

  • description: "How to deploy this Next.js project to Vercel" content: "1. Run 'vercel' command... 2. Configure environment variables..."

  • description: "Database migration process for this project" content: "Migrations are in supabase/migrations/. To apply: npx supabase db push..."

SELF-LEARNING (scope="ai_client"): Your persistent memory across conversations. Save a note whenever you learn something worth remembering — don't wait, save as you go. Examples:

  • User preferences: "User prefers concise answers, not long explanations"

  • Corrections: "User clarified: 'deploy' means push to staging, not production"

  • Interaction patterns: "User likes to review plans before I execute"

  • What works or doesn't: "Suggesting refactors unprompted frustrates this user"

The more you learn and remember, the better you become at helping this user.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| scope | No | "user" (default) for user-facing notes, "ai_client" for your own persistent memory across conversations | user |
| content | Yes | Detailed knowledge content (max 10,000 chars) | |
| description | Yes | Short summary for searching (max 500 chars) | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| note | No | |
| error | No | |
| success | Yes | |
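The description/content split, the character limits, and the scope default described above can be sketched as a small client-side validator. The `build_save_note_args` helper is hypothetical, used here only to illustrate the documented constraints.

```python
def build_save_note_args(description, content, scope="user"):
    """Validate arguments for a hypothetical save-note call.

    description: short searchable summary (max 500 chars)
    content:     detailed knowledge (max 10,000 chars)
    scope:       "user" (default) or "ai_client" for the agent's
                 own persistent memory across conversations
    """
    if scope not in ("user", "ai_client"):
        raise ValueError('scope must be "user" or "ai_client"')
    if len(description) > 500:
        raise ValueError("description is limited to 500 characters")
    if len(content) > 10_000:
        raise ValueError("content is limited to 10,000 characters")
    return {"description": description, "content": content, "scope": scope}

# A self-learning note, per the SELF-LEARNING section above
ai_note = build_save_note_args(
    "User prefers concise answers, not long explanations",
    "Keep replies short; offer detail only on request.",
    scope="ai_client",
)
```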
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds persistence model context ('persistent memory across conversations') and content behavior (max lengths, searchability) beyond annotations. Annotations indicate non-destructive write (readOnly=false, destructiveHint=false), but description could explicitly clarify that duplicate calls create duplicate notes given idempotentHint=false.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (purpose, fields, scope guidance, examples), though slightly verbose. The SELF-LEARNING section and examples earn their place by clarifying the critical ai_client scope behavior, but some sentences like 'The more you learn...' are promotional rather than instructional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a dual-purpose memory tool. Covers content types, persistence scope, field semantics, and distinguishes from sibling memory tools. Output schema exists per context signals, so return values need not be described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), the description adds rich semantic value: concrete YAML examples for both fields, explains that description is 'for searching' while content is 'detailed knowledge', and provides extensive scope='ai_client' vs scope='user' guidance with specific content examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Save a new note with learned knowledge or procedures' and distinguishes from siblings by contrasting 'LEARNED KNOWLEDGE' (notes) vs 'TRUTHS ABOUT THE USER' (facts/memory), and clarifies scope='user' vs scope='ai_client' usage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Save a note whenever you learn something worth remembering'), clear alternatives ('Use facts for TRUTHS ABOUT THE USER'), and specific triggers including user preferences, corrections, and interaction patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_goals (Search Goals): Grade A
Read-only · Idempotent

Search user goals with optional filters.

Use to find goals by keyword, type (personal/professional), status (active/achieved), or tags. Without filters, returns goals by recency.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tags | No | Filter by tags (ANY match, max 5) | |
| limit | No | Maximum number of goals to return (1-100) | |
| query | No | Keyword search in goal content (max 200 chars) | |
| status | No | Filter by status | |
| goal_type | No | Filter by goal type | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| goals | Yes | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, establishing the safe read-only nature. The description adds valuable behavioral context not in annotations: the default sort order ('returns goals by recency') and filter semantics. However, it omits details like case-sensitivity, pagination behavior, or result structure—though the existence of an output schema mitigates this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences totaling ~25 words. Front-loaded with purpose ('Search user goals'), followed by usage patterns, then default behavior. Zero redundancy—every sentence earns its place by conveying distinct information not replicated in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given comprehensive annotations (safety hints), 100% schema coverage, and the presence of an output schema, the description achieves completeness by covering invocation context (filters), default behavior (recency), and resource identification (goals) without needing to duplicate parameter documentation or return value specifications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds meaningful value by mapping schema fields to intent (e.g., 'keyword' for the 'query' parameter) and explicitly enumerating valid values ('personal/professional', 'active/achieved') that reinforce the enum constraints, helping the agent select appropriate parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Search' and resource 'user goals', immediately distinguishing it from mutation siblings (save_goal, delete_goal, update_goal) and other search siblings (search_tasks, search_notes) by specifying the target resource. The scope 'with optional filters' clarifies the tool's broad search capability.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context on when to use (finding goals by specific criteria) and explains default behavior ('Without filters, returns goals by recency'), which helps agents decide between filtered and unfiltered invocation. Does not explicitly name sibling alternatives, but clearly defines the tool's specific use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_memories (Search Memories): Grade A
Read-only · Idempotent

Search user memories by keyword, type, tags, or date range.

With query: Case-insensitive keyword search on content. Without query: Returns memories by recency. Use type filter to search only facts or only moments.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tags | No | Optional tag filter (memories must have at least one matching tag) | |
| type | No | Filter by memory type | |
| limit | No | Maximum number of memories to return | |
| query | No | Keyword to search for in memory content (max 200 chars) | |
| date_to | No | ISO date/datetime for end of range (e.g. "2025-12-31" or "2025-12-31T23:59:59Z"). Must be >= date_from. | |
| date_from | No | ISO date/datetime for start of range (e.g. "2025-01-01" or "2025-01-01T00:00:00Z"). Must be <= date_to. | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| memories | Yes | |
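The date-range ordering constraint (date_from <= date_to) and the 200-char query limit can be checked before the call. A sketch under stated assumptions: the `build_search_memories_args` helper is hypothetical, and for simplicity it validates only plain YYYY-MM-DD dates, although the tool also accepts full datetimes.

```python
from datetime import date

def build_search_memories_args(query=None, memory_type=None,
                               date_from=None, date_to=None):
    """Assemble arguments for a hypothetical search-memories call,
    enforcing the documented constraints client-side."""
    args = {}
    if query is not None:
        if len(query) > 200:
            raise ValueError("query is limited to 200 characters")
        args["query"] = query
    if memory_type is not None:
        if memory_type not in ("fact", "moment"):
            raise ValueError("type must be 'fact' or 'moment'")
        args["type"] = memory_type
    if date_from and date_to:
        # Plain-date validation only; datetime forms would need parsing.
        if date.fromisoformat(date_from) > date.fromisoformat(date_to):
            raise ValueError("date_from must be <= date_to")
    if date_from:
        args["date_from"] = date_from
    if date_to:
        args["date_to"] = date_to
    return args
```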
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the read-only, idempotent safety profile. The description adds valuable behavioral specifics not in annotations: case-insensitive matching for queries and recency-based sorting when no query is provided. It does not disclose pagination behavior or result limits beyond the 'limit' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with a clear purpose statement followed by three precise sentences explaining key behavioral modes (query vs. no-query, type filtering). Every sentence earns its place; there is no redundancy or fluff, and information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage, presence of output schema, and read-only annotations, the description provides sufficient context for a search operation. It explains the two primary search modes (keyword vs. recency) and type filtering. It appropriately omits output value descriptions (covered by output schema), though it could explicitly note that all parameters are optional.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds meaningful semantic value beyond the schema by specifying 'Case-insensitive' for the query parameter and explaining the type filter values ('facts' and 'moments'), which clarifies the enum intent beyond the schema's generic 'Filter by memory type' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Search user memories by keyword, type, tags, or date range,' providing a specific verb (Search), resource (memories), and scope that clearly distinguishes it from sibling search tools (search_notes, search_tasks, search_goals) by explicitly naming the target resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for parameter usage patterns ('With query:...', 'Without query:...', 'Use type filter...'), effectively guiding when to use specific filters. However, it lacks explicit guidance on when to use this tool versus alternatives like save_memory or update_memory, or tool selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_notes (Search Notes): Grade A
Read-only · Idempotent

Search notes by keyword or list recent notes. Returns summaries (id + description) only. Use get_note to retrieve the full content of a specific note.

With query: Case-insensitive keyword search on description and content. Without query: Returns most recently updated notes.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Maximum number of notes to return (max 50) | |
| query | No | Keyword to search for in note description and content (max 200 chars). If omitted, returns all notes up to the limit. | |
| scope | No | Filter by scope. If omitted, returns notes of all scopes. | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| notes | Yes | |
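The two-step flow the description prescribes (search returns id + description summaries; the full body requires a separate get_note call) can be sketched with a stand-in client. `FakeClient` below is purely illustrative, not a real SDK; it also mimics the documented case-insensitive matching.

```python
class FakeClient:
    """Stand-in for an MCP client, illustrating the documented
    search_notes -> get_note two-step retrieval pattern."""

    def __init__(self, notes):
        self._notes = notes

    def search_notes(self, query):
        # Case-insensitive keyword search on description and content,
        # returning summaries (id + description) only.
        q = query.lower()
        return [{"id": n["id"], "description": n["description"]}
                for n in self._notes
                if q in n["description"].lower() or q in n["content"].lower()]

    def get_note(self, note_id):
        # Full content is fetched per note, by id.
        return next(n for n in self._notes if n["id"] == note_id)

client = FakeClient([{"id": "n1", "description": "Deploy steps",
                      "content": "Run vercel, set env vars"}])
summaries = client.search_notes("DEPLOY")          # summaries only
full = client.get_note(summaries[0]["id"])         # then the full note
```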
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only safety, but description adds crucial behavioral context: specifies return format ('summaries only'), search mechanics ('case-insensitive'), and default ordering ('most recently updated notes'). Could mention pagination or rate limiting for a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five tightly constructed sentences with zero redundancy: purpose upfront, sibling reference, then conditional behavior (with/without query). Every clause delivers distinct operational information. Excellent front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, presence of output schema, and comprehensive annotations, the description provides sufficient context without over-specifying. Covers search semantics, result limitations, and sibling relationships adequately for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (baseline 3), description adds case-insensitivity detail for query parameter and clarifies temporal ordering when query is omitted—semantics not explicit in schema descriptions. Does not elaborate on 'scope' parameter values, preventing a 5.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Search notes by keyword or list recent notes') with clear resource targeting. Distinguishes scope from sibling get_note by noting this returns 'summaries (id + description) only' versus full content retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly directs to alternative tool: 'Use get_note to retrieve the full content of a specific note.' Clearly delineates dual usage modes ('With query' vs 'Without query'), establishing precise conditions for keyword search versus recent listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_tasks (Search Tasks): Grade A
Read-only · Idempotent

Search tasks with optional filters. Returns both real tasks and projected recurring task instances.

When a date range is provided (scheduled_date, scheduled_date_from/to), virtual recurring instances are automatically included. Virtual tasks have is_virtual=true and IDs starting with "virtual:". They can be passed directly to update_task or delete_task.

Filter conflicts: scheduled_date and scheduled_date_from/to are mutually exclusive. unscheduled=true cannot be combined with date filters.

Use to find tasks by keyword, date, priority, tags, or completion status. Use without filters to get recent tasks.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tags | No | Filter by tags (ANY match, max 5) | |
| limit | No | Maximum number of tasks to return | |
| query | No | Keyword search in task content (max 200 chars) | |
| priority | No | Filter by priority level | |
| completed | No | Filter by completion status (true=completed, false=incomplete) | |
| deadline_to | No | Filter tasks with deadline on or before this date (YYYY-MM-DD) | |
| unscheduled | No | Set to true to get only unscheduled tasks (backlog) | |
| deadline_from | No | Filter tasks with deadline on or after this date (YYYY-MM-DD) | |
| scheduled_date | No | Filter by scheduled date (YYYY-MM-DD) | |
| include_virtual | No | Include projected recurring instances when date range is provided | |
| scheduled_date_to | No | Filter tasks scheduled on or before this date (YYYY-MM-DD) | |
| scheduled_date_from | No | Filter tasks scheduled on or after this date (YYYY-MM-DD) | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| tasks | Yes | |
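The virtual-instance marker ("virtual:" ID prefix, is_virtual=true) and the filter-conflict rules above can be handled client-side. A minimal sketch; the helper names are hypothetical, but the checks restate the documented rules verbatim.

```python
def is_virtual(task):
    """Projected recurring instances carry is_virtual=True and an ID
    prefixed with 'virtual:', per the tool description."""
    return bool(task.get("is_virtual")) or str(task.get("id", "")).startswith("virtual:")

def check_filters(scheduled_date=None, scheduled_date_from=None,
                  scheduled_date_to=None, unscheduled=False):
    """Reject the documented filter conflicts before calling the tool."""
    has_range = scheduled_date_from is not None or scheduled_date_to is not None
    if scheduled_date is not None and has_range:
        raise ValueError("scheduled_date and scheduled_date_from/to are mutually exclusive")
    if unscheduled and (scheduled_date is not None or has_range):
        raise ValueError("unscheduled=true cannot be combined with date filters")

# Separating real tasks from projected recurring instances
tasks = [
    {"id": "t1", "content": "File taxes"},
    {"id": "virtual:t2:2025-06-16", "content": "Standup", "is_virtual": True},
]
real_tasks = [t for t in tasks if not is_virtual(t)]
```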
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable behavioral context: virtual task projection mechanics (IDs starting with 'virtual:', is_virtual flag), mutual exclusivity constraints between parameters, and result composition (both real and virtual instances).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four logically sequenced paragraphs: scope definition, virtual task behavior, constraint rules, usage examples. Every sentence conveys unique operational guidance; no repetition of schema details or annotation values.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 12-parameter search tool with an output schema present: covers filtering behavior, virtual task handling, result identification, and sibling tool integration. Minor gap: it could explicitly state the default sort order or pagination behavior, though the limit parameter is documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all 12 parameters (e.g., date format YYYY-MM-DD, enum values). Description adds parameter interaction logic (mutual exclusivity) but baseline 3 is appropriate since schema carries full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Search' + resource 'tasks' with clear scope 'optional filters' and distinctive feature 'Returns both real tasks and projected recurring task instances' that differentiates it from simple list operations. Explicitly distinguishes virtual vs real tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit filter conflict rules ('scheduled_date and scheduled_date_from/to are mutually exclusive'), clear when-to-use patterns ('Use to find tasks by keyword...', 'Use without filters to get recent tasks'), and guidance on consuming results with siblings ('can be passed directly to update_task or delete_task').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_goal (Update Goal): Grade A
Idempotent

Update an existing goal's fields.

When status is set to 'achieved', achieved_at is automatically set. When status is set to 'active', achieved_at is automatically cleared.

Use to:

  • Update goal content

  • Change goal type

  • Mark goal as achieved or reactivate

  • Change tags

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tags | No | New tags (replaces existing, max 5) | |
| status | No | New status | |
| content | No | New goal content (5-500 chars) | |
| goal_id | Yes | ID of the goal to update | |
| goal_type | No | New goal type | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| goal | No | |
| error | No | |
| success | Yes | |
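The achieved_at side effect documented above (auto-set on 'achieved', auto-cleared on 'active') can be sketched as a small state transition. This is an illustration of the described server behavior, not the server's actual implementation; `apply_status_change` is a hypothetical name.

```python
from datetime import datetime, timezone

def apply_status_change(goal, status):
    """Sketch of the documented side effect: setting status to
    'achieved' stamps achieved_at; setting it back to 'active'
    clears it. Returns a new dict, leaving the input untouched."""
    goal = dict(goal)
    goal["status"] = status
    if status == "achieved":
        goal["achieved_at"] = datetime.now(timezone.utc).isoformat()
    elif status == "active":
        goal["achieved_at"] = None
    return goal

goal = {"id": "g1", "status": "active", "achieved_at": None}
achieved = apply_status_change(goal, "achieved")   # achieved_at stamped
reactivated = apply_status_change(achieved, "active")  # achieved_at cleared
```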
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate idempotent, non-destructive mutation (readOnly: false, destructive: false, idempotent: true). The description adds valuable side-effect documentation: automatic timestamp management when status changes (achieved_at auto-set/cleared), which is critical business logic not evident in the schema or annotations. Minor gap: doesn't specify error behavior for invalid goal_ids.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: single-line purpose statement, followed by behavioral side-effects, then scannable bullet list of use cases. Every sentence earns its place. No redundancy with schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown but indicated), the description appropriately avoids detailing return values. It sufficiently covers mutation behavior, side effects, and field semantics for an update tool with 5 parameters and rich annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds semantic value by explaining how parameters interact (status changes trigger achieved_at updates) and grouping parameters into use-case categories (content, type, status/tags), providing context beyond the raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Update an existing goal's fields' providing a specific verb (update), resource (goal), and scope (fields). The word 'existing' clearly distinguishes this from the sibling tool 'save_goal' (likely for creation), establishing this is for modification of existing records only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Use to:' section lists four specific use cases (update content, change type, mark achieved/reactivate, change tags) providing clear usage context. However, it lacks explicit guidance on when NOT to use this versus siblings like 'save_goal', and doesn't mention prerequisites such as needing to obtain the goal_id from 'search_goals' first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_memory — Update Memory (grade: A)
Idempotent

Update an existing memory's content or tags.

Works for both facts and moments. Use to correct info, update tags, or expand content.

Parameters (JSON Schema):

  • tags (optional): New tags for the memory (replaces existing, max 5)

  • content (optional): New content for the memory (10-5000 chars)

  • memory_id (required): ID of the memory to update

Output Schema (JSON Schema):

  • error (optional)

  • memory (optional)

  • success (required)
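As a minimal client-side sketch of the constraints in the parameter table above (the helper name and call shape are assumptions for illustration, not part of this server), the documented limits can be checked before invoking the tool:

```python
# Sketch of a client-side guard for update_memory arguments, based on the
# documented limits: content must be 10-5000 chars, at most 5 tags, and the
# supplied tags replace the existing set. Hypothetical helper, not server code.

def build_update_memory_args(memory_id, content=None, tags=None):
    """Return an arguments dict for update_memory, validating documented limits."""
    if content is not None and not (10 <= len(content) <= 5000):
        raise ValueError("content must be 10-5000 characters")
    if tags is not None and len(tags) > 5:
        raise ValueError("at most 5 tags (they replace the existing set)")
    args = {"memory_id": memory_id}
    if content is not None:
        args["content"] = content
    if tags is not None:
        args["tags"] = tags  # replaces existing tags entirely
    return args

# Example: correct a memory's content without touching its tags.
print(build_update_memory_args("mem-123", content="Prefers tea over coffee."))
```

Only the fields being changed are included in the arguments, matching the tool's optional content/tags parameters.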
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare idempotentHint=true and destructiveHint=false. The description adds valuable behavioral context that the tool works for both 'facts and moments' subtypes, but does not disclose error behavior (e.g., missing memory_id), partial update semantics, or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: first establishes purpose, second defines applicable subtypes, third lists use cases. Front-loaded with the core operation and appropriately scoped for agent consumption.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, present annotations covering safety/idempotency, and existence of an output schema, the description provides sufficient contextual completeness for a 3-parameter mutation tool. Only minor gap is lack of explicit error handling mention.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. The description mentions 'content or tags' reinforcing the optional parameters, but adds no syntax guidance or format details beyond the schema's maxLength constraints and UUID pattern.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Update' with clear resource 'memory' and explicitly qualifies 'existing' to distinguish from sibling save_memory. The addition of 'facts and moments' subtypes further clarifies scope beyond the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear use cases ('correct info, update tags, or expand content') and implies modification context via 'existing memory.' However, it does not explicitly name the sibling alternative (save_memory) for creating new memories or state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_note — Update Note (grade: A)
Idempotent

Update an existing note's description or content.

Use this to:

  • Update or expand knowledge in an existing note

  • Fix or improve the description for better searching

  • Add new details learned about a topic

Parameters (JSON Schema):

  • scope (optional): Change the scope of the note

  • content (optional): New content (max 10,000 chars)

  • note_id (required): ID of the note to update

  • description (optional): New description (max 500 chars)

Output Schema (JSON Schema):

  • note (optional)

  • error (optional)

  • success (required)
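A hedged sketch of assembling update_note arguments under the documented limits (the helper is hypothetical; whether the server treats an omitted field as "leave unchanged" is an assumption typical of partial updates, not stated in the description):

```python
# Hypothetical client-side assembly of update_note arguments. Only fields
# actually passed are included in the request; omitted fields are simply not
# sent (assumed here to mean "leave unchanged"). Limits come from the
# parameter table: description max 500 chars, content max 10,000 chars.

def build_update_note_args(note_id, description=None, content=None, scope=None):
    if description is not None and len(description) > 500:
        raise ValueError("description is limited to 500 characters")
    if content is not None and len(content) > 10_000:
        raise ValueError("content is limited to 10,000 characters")
    args = {"note_id": note_id}
    for key, value in (("description", description),
                       ("content", content),
                       ("scope", scope)):
        if value is not None:
            args[key] = value
    return args
```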
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with annotations (destructive=false, idempotent=true, readOnly=false) and adds valuable business context: updates affect 'knowledge' and 'searching', explaining the purpose beyond the raw mutation. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: one clear opening sentence followed by three specific bullet points. Every line earns its place, providing use-case guidance without redundancy. Front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations, output schema, and 100% input schema coverage, the description provides sufficient context. The use-case bullets complete the picture for a standard CRUD update operation, though mentioning the idempotent nature or scope implications could have added value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'description or content' which maps to parameters, but does not add syntax details, validation rules, or explain the 'scope' parameter beyond the schema's own description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Update') and resource ('note'), and specifies targeting 'existing' notes which implicitly distinguishes this from the 'save_note' sibling. However, it does not explicitly differentiate from other update siblings (update_task, update_goal, etc.) beyond the resource name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Use this to:' bullets provide clear positive guidance for when to use the tool (expanding knowledge, fixing descriptions for search, adding details). However, it lacks explicit negative guidance (when NOT to use) or named alternatives (e.g., 'use save_note for new notes').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_task — Update Task (grade: A)
Idempotent

Update a task. Accepts real task IDs or virtual recurring task IDs (from search_tasks).

For recurring tasks, use apply_to to control scope:

  • "this" (default): Update only this specific occurrence. Materializes virtual tasks automatically.

  • "all_future": Update the recurrence template. Changes affect all future occurrences.

Instance fields (apply_to="this" ONLY): deadline, scheduled_date, has_scheduled_time, has_deadline_time, percent_complete, is_skipped

Template fields (apply_to="all_future" ONLY): freq, interval, days_of_week, days_of_month, week_position, months, scheduled_time, deadline_offset_days, deadline_time, is_active

Both modes: title, content, priority, duration_minutes, tags

Note: is_skipped=true and percent_complete=100 are mutually exclusive.

Common operations:

  • Complete a task: percent_complete=100

  • Skip a recurring occurrence: is_skipped=true

  • Reschedule: scheduled_date="new-date"

  • Pause a recurrence: apply_to="all_future", is_active=false

  • Change recurrence pattern: apply_to="all_future", freq=..., days_of_week=...
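The field-scoping rules and common operations above can be sketched as a client-side validator (the field groupings are copied from the description; the validator itself is an illustration, not part of the server):

```python
# Client-side sketch of update_task's apply_to scoping rules. The three field
# groups mirror the tool description; mixing a template field into an
# apply_to="this" call (or vice versa) is rejected, as is the documented
# is_skipped/percent_complete conflict.

INSTANCE_FIELDS = {"deadline", "scheduled_date", "has_scheduled_time",
                   "has_deadline_time", "percent_complete", "is_skipped"}
TEMPLATE_FIELDS = {"freq", "interval", "days_of_week", "days_of_month",
                   "week_position", "months", "scheduled_time",
                   "deadline_offset_days", "deadline_time", "is_active"}
SHARED_FIELDS = {"title", "content", "priority", "duration_minutes", "tags"}

def check_update_task_args(args):
    """Raise ValueError if args mix fields across apply_to modes."""
    apply_to = args.get("apply_to", "this")  # "this" is the documented default
    fields = set(args) - {"task_id", "apply_to"}
    if apply_to == "this":
        bad = fields & TEMPLATE_FIELDS
    else:  # "all_future"
        bad = fields & INSTANCE_FIELDS
    if bad:
        raise ValueError(f"fields {sorted(bad)} not valid with apply_to={apply_to!r}")
    if args.get("is_skipped") is True and args.get("percent_complete") == 100:
        raise ValueError("is_skipped=true and percent_complete=100 are mutually exclusive")

# The common operations listed above all pass the check:
check_update_task_args({"task_id": "t1", "percent_complete": 100})   # complete a task
check_update_task_args({"task_id": "t1", "is_skipped": True})        # skip an occurrence
check_update_task_args({"task_id": "t1", "apply_to": "all_future",
                        "is_active": False})                          # pause a recurrence
```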

Parameters (JSON Schema):

  • freq (optional): New frequency (all_future only)

  • tags (optional): New tags (replaces existing, max 5)

  • title (optional): New task title (max 200 chars)

  • months (optional): New months (all_future only)

  • content (optional): New task description/notes, or null to clear

  • task_id (required): Task ID (real UUID or virtual ID like "virtual:uuid:YYYY-MM-DD")

  • apply_to (optional, default "this"): Scope: "this" for single occurrence, "all_future" for recurrence template

  • deadline (optional): New deadline (ISO format) or null to clear

  • interval (optional): New interval (all_future only)

  • priority (optional): New priority level

  • is_active (optional): Pause (false) or resume (true) recurrence (all_future only)

  • is_skipped (optional): Set to true to skip this occurrence without completing

  • days_of_week (optional): New days of week (all_future only)

  • days_of_month (optional): New days of month (all_future only)

  • deadline_time (optional): New deadline time (all_future only)

  • week_position (optional): New week position (all_future only)

  • scheduled_date (optional): New scheduled date/time (ISO format) or null to clear

  • scheduled_time (optional): New time of day for occurrences (all_future only)

  • duration_minutes (optional): Estimated duration in minutes

  • percent_complete (optional): Progress percentage (0-100). Set to 100 to complete.

  • has_deadline_time (optional): Set to true if deadline includes a specific time

  • has_scheduled_time (optional): Set to true if scheduled_date includes a specific time

  • deadline_offset_days (optional): New deadline offset (all_future only)

Output Schema (JSON Schema):

  • task (optional)

  • error (optional)

  • success (required)

  • new_task (optional): Next occurrence created when completing/skipping a recurring task

  • recurrence_updated (optional): True when apply_to=all_future was used
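The virtual-ID format documented for task_id ("virtual:uuid:YYYY-MM-DD") can be parsed on the client side; this is a purely illustrative sketch, assuming the server accepts both forms as stated:

```python
# Sketch of parsing update_task's documented task_id formats: a real UUID, or
# a virtual occurrence ID of the form "virtual:uuid:YYYY-MM-DD" as returned by
# search_tasks. Hypothetical client-side helper, not part of the server.

from datetime import date

def parse_task_id(task_id):
    """Return (base_id, occurrence_date); occurrence_date is None for real IDs."""
    if task_id.startswith("virtual:"):
        _, base_id, day = task_id.split(":", 2)
        return base_id, date.fromisoformat(day)
    return task_id, None
```

Updating a virtual occurrence with apply_to="this" materializes it into a real task, per the description above.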
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds materialization behavior (virtual tasks become real) and mutual exclusivity constraints (is_skipped vs percent_complete) beyond annotations. Annotations cover idempotent/safety profile; description adds recurrence-specific behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual grouping (bullet points, categorized fields). Information-dense but necessary for 23-parameter recurrence complexity. Front-loaded with core purpose and ID types before diving into field restrictions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete coverage of complex recurrence logic, field applicability rules, and constraint warnings. Output schema exists (per context signals), so return values don't need description. Sufficient for safe invocation of this mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with mode restrictions already documented, but description adds valuable grouping (Instance/Template/Both fields) and virtual ID format explanation (virtual:uuid:YYYY-MM-DD). Common operations section demonstrates semantic relationships between parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (Update) + resource (task) + scope (recurring vs non-recurring). Distinguishes from sibling update_goal/update_note by detailing unique recurrence handling (apply_to modes) and virtual task ID support.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance on when to use 'this' vs 'all_future' with detailed field restrictions for each mode. Lists common operations (complete, skip, reschedule) showing parameter combinations. References prerequisite search_tasks for virtual IDs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
