Find Bear Notes

bear-search-notes
Read-only · Idempotent

Search your Bear notes by text, tags, or date ranges, including content within images and PDFs via OCR, to quickly find relevant information.

Instructions

Find notes in your Bear library by searching text content, filtering by tags, or date ranges. Always searches within attached images and PDF files via OCR. Returns a list with titles, tags, and IDs - use "Open Bear Note" to read full content.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| term | No | Text to search for in note titles and content | |
| tag | No | Tag to filter notes by (without # symbol) | |
| limit | No | Maximum number of results to return | 50 |
| createdAfter | No | Filter notes created on or after this date. Supports: relative dates ("today", "yesterday", "last week", "start of last month"), ISO format (YYYY-MM-DD). Use "start of last month" for the beginning of the previous month. | |
| createdBefore | No | Filter notes created on or before this date. Supports: relative dates ("today", "yesterday", "last week", "end of last month"), ISO format (YYYY-MM-DD). Use "end of last month" for the end of the previous month. | |
| modifiedAfter | No | Filter notes modified on or after this date. Supports: relative dates ("today", "yesterday", "last week", "start of last month"), ISO format (YYYY-MM-DD). Use "start of last month" for the beginning of the previous month. | |
| modifiedBefore | No | Filter notes modified on or before this date. Supports: relative dates ("today", "yesterday", "last week", "end of last month"), ISO format (YYYY-MM-DD). Use "end of last month" for the end of the previous month. | |
| pinned | No | Set to true to return only pinned notes: if combined with tag, will return pinned notes with that tag, otherwise only globally pinned notes. | |
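The parameters above can be combined freely. As a minimal sketch, here is a hypothetical argument payload for a bear-search-notes call, using the parameter names from the schema; the dictionary shape is illustrative, not a documented client API:

```python
# Hypothetical arguments for a bear-search-notes call. Parameter names
# mirror the input schema above; the payload shape is an assumption.
search_args = {
    "term": "quarterly report",             # matched in titles and content (incl. OCR text)
    "tag": "work",                          # tag filter, without the leading '#'
    "createdAfter": "start of last month",  # relative dates are supported
    "createdBefore": "end of last month",   # or ISO format, e.g. "2024-05-31"
    "limit": 20,                            # defaults to 50 when omitted
    "pinned": False,                        # True restricts results to pinned notes
}

# Client-side sanity checks before sending the request.
assert isinstance(search_args["limit"], int) and search_args["limit"] > 0
assert not search_args["tag"].startswith("#"), "tag must omit the '#' symbol"
```

Note the documented interaction: pinned=True together with tag returns pinned notes carrying that tag, while pinned=True alone returns only globally pinned notes.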
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true, indicating safe, repeatable operations. The description adds valuable context beyond this: it specifies that searches include OCR of images and PDFs, and clarifies the return format ('list with titles, tags, and IDs'). This enhances transparency without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by key behavioral details (OCR search) and usage guidance. Every sentence earns its place: the first defines the tool, the second adds critical functionality, and the third provides alternative tool direction. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no output schema) and rich annotations, the description is largely complete. It covers purpose, behavior (OCR), and usage guidance. However, it says nothing about pagination or result ordering, both of which matter for a search tool, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the schema itself documents all 8 parameters. The description mentions filtering by 'text content, tags, or date ranges,' which maps onto parameters like 'term,' 'tag,' and the date fields, but adds no semantic detail beyond what the schema already provides. The baseline score of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find notes in your Bear library by searching text content, filtering by tags, or date ranges.' It specifies the verb ('find'), resource ('notes'), and scope ('Bear library'), and distinguishes from siblings by mentioning it returns a list with titles, tags, and IDs, unlike tools like 'bear-open-note' which opens a single note.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: 'use "Open Bear Note" to read full content.' This clearly indicates that this tool is for searching and listing notes, while 'bear-open-note' is for accessing detailed content, helping the agent choose correctly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
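The search-then-open pattern described above can be sketched as follows. The client object, its call_tool method, and the result shapes are assumptions for illustration; only the two tool names come from this page:

```python
# Sketch of the documented workflow: search for note metadata first,
# then open each hit to read its full content. The 'client' interface
# is hypothetical; tool names mirror the page.
def find_and_read(client, term, tag=None, limit=10):
    """Search for matching notes, then fetch full content for each hit."""
    args = {"term": term, "limit": limit}
    if tag:
        args["tag"] = tag
    results = client.call_tool("bear-search-notes", args)

    notes = []
    for hit in results:
        # Search results carry only titles, tags, and IDs;
        # "Open Bear Note" is needed to read the note body.
        note = client.call_tool("bear-open-note", {"id": hit["id"]})
        notes.append(note)
    return notes
```

This split keeps the search response small (metadata only) while letting the agent fetch full content just for the notes it actually needs.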


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/vasylenko/bear-notes-mcp'
