write_note

Create or update markdown notes with semantic summaries for organizing knowledge in your local Obsidian.md vault.

Instructions

Create or update a markdown note. Returns a markdown formatted summary of the semantic content.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| title | Yes | | |
| content | Yes | | |
| folder | Yes | | |
| project | No | | |
| tags | No | | |
| note_type | No | | note |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
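A minimal call payload matching the input schema above can be sketched as follows; the values are illustrative, not prescribed:

```python
# Illustrative write_note payload; only title, content, and folder are required.
payload = {
    "title": "Meeting Notes",
    "content": "# Standup\n\n- [decision] Use SQLite #tech",
    "folder": "meetings",        # "/" or "" targets the project root
    "project": "my-research",    # optional; server resolves if omitted
    "tags": "meetings,standup",  # list or comma-separated string
    "note_type": "note",         # default
}

required = {"title", "content", "folder"}
assert required.issubset(payload)
```

The tool's response is the single markdown string exposed under the `result` key of the output schema.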

Implementation Reference

  • The core handler function for the 'write_note' MCP tool. Decorated with @mcp.tool(), it defines the tool schema via type hints and docstring, executes the logic to create or update notes with semantic features, and returns a formatted summary. Handles project resolution, path validation, tags, observations, and relations.
    @mcp.tool(
        description="Create or update a markdown note. Returns a markdown formatted summary of the semantic content.",
    )
    async def write_note(
        title: str,
        content: str,
        folder: str,
        project: Optional[str] = None,
        tags: list[str] | str | None = None,
        note_type: str = "note",
        context: Context | None = None,
    ) -> str:
        """Write a markdown note to the knowledge base.
    
        Creates or updates a markdown note with semantic observations and relations.
    
        Project Resolution:
        Server resolves projects in this order: Single Project Mode → project parameter → default project.
        If project unknown, use list_memory_projects() or recent_activity() first.
    
        The content can include semantic observations and relations using markdown syntax:
    
        Observations format:
            `- [category] Observation text #tag1 #tag2 (optional context)`
    
            Examples:
            `- [design] Files are the source of truth #architecture (All state comes from files)`
            `- [tech] Using SQLite for storage #implementation`
            `- [note] Need to add error handling #todo`
    
        Relations format:
            - Explicit: `- relation_type [[Entity]] (optional context)`
            - Inline: Any `[[Entity]]` reference creates a relation
    
            Examples:
            `- depends_on [[Content Parser]] (Need for semantic extraction)`
            `- implements [[Search Spec]] (Initial implementation)`
            `- This feature extends [[Base Design]] and uses [[Core Utils]]`
    
        Args:
            title: The title of the note
            content: Markdown content for the note, can include observations and relations
            folder: Folder path relative to project root where the file should be saved.
                    Use forward slashes (/) as separators. Use "/" or "" to write to project root.
                    Examples: "notes", "projects/2025", "research/ml", "/" (root)
            project: Project name to write to. Optional - server will resolve using the
                    hierarchy above. If unknown, use list_memory_projects() to discover
                    available projects.
            tags: Tags to categorize the note. Can be a list of strings, a comma-separated string, or None.
                  Note: If passing from external MCP clients, use a string format (e.g. "tag1,tag2,tag3")
            note_type: Type of note to create (stored in frontmatter). Defaults to "note".
                       Can be "guide", "report", "config", "person", etc.
            context: Optional FastMCP context for performance caching.
    
        Returns:
            A markdown formatted summary of the semantic content, including:
            - Creation/update status with project name
            - File path and checksum
            - Observation counts by category
            - Relation counts (resolved/unresolved)
            - Tags if present
            - Session tracking metadata for project awareness
    
        Examples:
            # Assistant flow when project is unknown
            # 1. list_memory_projects() -> Ask user which project
            # 2. User: "Use my-research"
            # 3. write_note(...) and remember "my-research" for session
    
            # Create a simple note
            write_note(
                project="my-research",
                title="Meeting Notes",
                folder="meetings",
                content="# Weekly Standup\\n\\n- [decision] Use SQLite for storage #tech"
            )
    
            # Create a note with tags and note type
            write_note(
                project="work-project",
                title="API Design",
                folder="specs",
                content="# REST API Specification\\n\\n- implements [[Authentication]]",
                tags=["api", "design"],
                note_type="guide"
            )
    
            # Update existing note (same title/folder)
            write_note(
                project="my-research",
                title="Meeting Notes",
                folder="meetings",
                content="# Weekly Standup\\n\\n- [decision] Use PostgreSQL instead #tech"
            )
    
        Raises:
            HTTPError: If project doesn't exist or is inaccessible
            SecurityError: If folder path attempts path traversal
        """
        track_mcp_tool("write_note")
        async with get_client() as client:
            logger.info(
            f"MCP tool call tool=write_note project={project} folder={folder} title={title} tags={tags}"
            )
    
            # Get and validate the project (supports optional project parameter)
            active_project = await get_active_project(client, project, context)
    
            # Normalize "/" to empty string for root folder (must happen before validation)
            if folder == "/":
                folder = ""
    
            # Validate folder path to prevent path traversal attacks
            project_path = active_project.home
            if folder and not validate_project_path(folder, project_path):
                logger.warning(
                    "Attempted path traversal attack blocked",
                    folder=folder,
                    project=active_project.name,
                )
                return f"# Error\n\nFolder path '{folder}' is not allowed - paths must stay within project boundaries"
    
            # Process tags using the helper function
            tag_list = parse_tags(tags)
            # Create the entity request
            metadata = {"tags": tag_list} if tag_list else None
            entity = Entity(
                title=title,
                folder=folder,
                entity_type=note_type,
                content_type="text/markdown",
                content=content,
                entity_metadata=metadata,
            )
    
            # Try to create the entity first (optimistic create)
            logger.debug(f"Attempting to create entity permalink={entity.permalink}")
            action = "Created"  # Default to created
            try:
                url = f"/v2/projects/{active_project.external_id}/knowledge/entities"
                response = await call_post(client, url, json=entity.model_dump())
                result = EntityResponse.model_validate(response.json())
                action = "Created"
            except Exception as e:
                # If creation failed due to conflict (already exists), try to update
                if (
                    "409" in str(e)
                    or "conflict" in str(e).lower()
                    or "already exists" in str(e).lower()
                ):
                    logger.debug(f"Entity exists, updating instead permalink={entity.permalink}")
                    try:
                        if not entity.permalink:
                            raise ValueError("Entity permalink is required for updates")  # pragma: no cover
                        entity_id = await resolve_entity_id(client, active_project.external_id, entity.permalink)
                        url = f"/v2/projects/{active_project.external_id}/knowledge/entities/{entity_id}"
                        response = await call_put(client, url, json=entity.model_dump())
                        result = EntityResponse.model_validate(response.json())
                        action = "Updated"
                    except Exception as update_error:  # pragma: no cover
                        # Re-raise the original error if update also fails
                        raise e from update_error  # pragma: no cover
                else:
                    # Re-raise if it's not a conflict error
                    raise  # pragma: no cover
            summary = [
                f"# {action} note",
                f"project: {active_project.name}",
                f"file_path: {result.file_path}",
                f"permalink: {result.permalink}",
                f"checksum: {result.checksum[:8] if result.checksum else 'unknown'}",
            ]
    
            # Count observations by category
            categories = {}
            if result.observations:
                for obs in result.observations:
                    categories[obs.category] = categories.get(obs.category, 0) + 1
    
                summary.append("\n## Observations")
                for category, count in sorted(categories.items()):
                    summary.append(f"- {category}: {count}")
    
            # Count resolved/unresolved relations
            unresolved = 0
            resolved = 0
            if result.relations:
                unresolved = sum(1 for r in result.relations if not r.to_id)
                resolved = len(result.relations) - unresolved
    
                summary.append("\n## Relations")
                summary.append(f"- Resolved: {resolved}")
                if unresolved:
                    summary.append(f"- Unresolved: {unresolved}")
                    summary.append(
                        "\nNote: Unresolved relations point to entities that don't exist yet."
                    )
                    summary.append(
                        "They will be automatically resolved when target entities are created or during sync operations."
                    )
    
            if tag_list:
                summary.append(f"\n## Tags\n- {', '.join(tag_list)}")
    
            # Log the response with structured data
            logger.info(
                f"MCP tool response: tool=write_note project={active_project.name} action={action} permalink={result.permalink} observations_count={len(result.observations)} relations_count={len(result.relations)} resolved_relations={resolved} unresolved_relations={unresolved} status_code={response.status_code}"
            )
        summary_text = "\n".join(summary)
        return add_project_metadata(summary_text, active_project.name)
  • Import statement in tools/__init__.py that loads the write_note function, triggering its automatic registration with the MCP server via the @mcp.tool decorator.
    from basic_memory.mcp.tools.write_note import write_note
  • Imports helper utilities used by the write_note handler for API calls and entity resolution.
    from basic_memory.mcp.tools.utils import call_put, call_post, resolve_entity_id
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool creates/updates notes and returns a markdown summary, but doesn't cover critical aspects like whether updates overwrite existing notes, authentication requirements, error conditions, or rate limits. This leaves significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that states the core function and return value. There's no unnecessary verbiage, though it could be slightly more comprehensive given the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with six parameters, no schema parameter descriptions, and no annotations (though it does define an output schema), the description is moderately complete. It covers the basic action and return format but lacks the parameter explanations and behavioral context an agent would need for optimal understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 6 parameters, the description provides no information about what the parameters mean or how they should be used. It doesn't mention any of the parameters (title, content, folder, project, tags, note_type) or their purposes, failing to compensate for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create or update') and resource ('a markdown note'), distinguishing it from sibling tools like 'delete_note'. However, it doesn't explicitly differentiate itself from 'edit_note', which also updates notes, making it slightly less specific than a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'edit_note' or 'create_memory_project'. It mentions the tool's function but lacks context about prerequisites, when-not scenarios, or comparisons with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
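One way to close the disclosure and guidance gaps flagged above is a description that states overwrite semantics and tool selection explicitly. The string below is a hypothetical rewrite, not the tool's actual published description:

```python
# Hypothetical richer description addressing the gaps noted in the review;
# the wording is an assumption, not the tool's real metadata.
IMPROVED_DESCRIPTION = (
    "Create a new markdown note or fully overwrite an existing one (matched "
    "by title and folder). Returns a markdown summary of observations, "
    "relations, and tags. Requires an accessible project; use edit_note for "
    "partial, in-place edits instead of full rewrites."
)

assert "overwrite" in IMPROVED_DESCRIPTION
```

A description in this shape discloses the destructive behavior (full overwrite), the matching key, the prerequisite (accessible project), and the alternative tool, which the review identifies as the main missing pieces.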
