Glama
feuerdev
by feuerdev

find

Search and retrieve Google Keep notes by matching a query against titles and text, returning results in JSON format with details like id, title, and labels.

Instructions

Find notes based on a search query.

Args:
    query (str, optional): A string to match against the title and text
    
Returns:
    str: JSON string containing the matching notes with their id, title, text, pinned status, color and labels
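For illustration, a successful call might return a JSON string shaped like the following. The values here are hypothetical; real ids, titles, and labels depend on the notes in the authenticated Keep account:

```python
import json

# Hypothetical return value for illustration only; actual note contents
# come from the user's Google Keep account.
result = (
    '[{"id": "abc123", "title": "Groceries", "text": "milk, eggs",'
    ' "pinned": false, "color": "WHITE", "labels": ["shopping"]}]'
)

notes = json.loads(result)
print(notes[0]["title"])   # prints "Groceries"
print(notes[0]["labels"])  # prints "['shopping']"
```

Because the tool returns a serialized string rather than structured data, a calling agent is expected to parse it with `json.loads` (or equivalent) before inspecting individual fields.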

Input Schema

Name      Required    Description    Default
query     No          —              —

Output Schema

Name      Required    Description    Default
result    Yes         —              —

Implementation Reference

  • The handler function for the MCP tool named 'find'. It uses the Google Keep API to find notes matching the query and serializes them to JSON.
@mcp.tool()
def find(query: str = "") -> str:
    """
    Find notes based on a search query.

    Args:
        query (str, optional): A string to match against the title and text

    Returns:
        str: JSON string containing the matching notes with their id, title, text, pinned status, color and labels
    """
    keep = get_client()
    notes = keep.find(query=query, archived=False, trashed=False)

    notes_data = [serialize_note(note) for note in notes]
    return json.dumps(notes_data)
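The handler depends on two helpers that are not shown here, `get_client()` and `serialize_note()`. As a hedged sketch (not the project's actual code), `serialize_note` might map a gkeepapi-style note object onto the fields named in the Returns section:

```python
def serialize_note(note) -> dict:
    # Sketch only: assumes a gkeepapi-style note object exposing .id, .title,
    # .text, .pinned, .color (an enum or string), and .labels.all(); the real
    # keep-mcp helper may differ.
    return {
        "id": note.id,
        "title": note.title,
        "text": note.text,
        "pinned": note.pinned,
        # Enums carry a .value; fall back to str() for plain values.
        "color": getattr(note.color, "value", str(note.color)),
        "labels": [label.name for label in note.labels.all()],
    }
```

Keeping the serializer separate from the handler lets the same field mapping be reused by sibling tools (e.g. a get-by-id tool) so all responses share one JSON shape.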
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds some context by specifying that the search matches against 'title and text' and returns a JSON string with specific fields (id, title, text, pinned status, color, labels). However, it lacks details on permissions, rate limits, error handling, or whether the search is case-sensitive/fuzzy, which are important for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured: a clear purpose statement followed by dedicated 'Args' and 'Returns' sections. Each sentence adds value without redundancy, making it easy to scan and understand quickly. The formatting enhances readability without unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search functionality with 1 parameter), no annotations, and the presence of an output schema (implied by the Returns section), the description is fairly complete. It covers the purpose, parameter semantics, and return format. However, it lacks behavioral details like search scope or limitations, which slightly reduces completeness for a tool with no annotation support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides no parameter descriptions, so the tool description must compensate. It adds meaningful semantics by explaining that the 'query' parameter is 'a string to match against the title and text' and is optional. This clarifies the parameter's purpose beyond the bare schema, though it doesn't detail format or give examples. With a single parameter and no schema descriptions, this earns above the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find notes based on a search query.' It specifies the verb ('find') and resource ('notes'), and distinguishes it from sibling tools like create_note, delete_note, and update_note by focusing on retrieval rather than mutation. However, it doesn't explicitly differentiate from potential search alternatives beyond the scope of the provided siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions searching notes but doesn't specify scenarios, prerequisites, or exclusions. For example, it doesn't indicate if this is the primary search method or if there are other ways to retrieve notes, leaving usage context implied at best.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/feuerdev/keep-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.