
Things MCP Server

by hald

search_todos

Locate specific tasks in the Things app by searching titles and notes using a query term, streamlining task retrieval and organization.

Instructions

Search todos by title or notes

Args: query: Search term to look for in todo titles and notes

Input Schema

| Name  | Required | Description | Default |
|-------|----------|-------------|---------|
| query | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • The main handler function for the 'search_todos' MCP tool. It is decorated with @mcp.tool, which registers it with the FastMCP server. The function takes a query string, searches Things todos using things.search(), formats the results with format_todo(), and returns a formatted string. The type annotations and docstring define the input/output schema.
    @mcp.tool
    async def search_todos(query: str) -> str:
        """Search todos by title or notes
        
        Args:
            query: Search term to look for in todo titles and notes
        """
        todos = things.search(query, include_items=True)
        if not todos:
            return f"No todos found matching '{query}'"
        
        formatted_todos = [format_todo(todo) for todo in todos]
        return "\n\n---\n\n".join(formatted_todos)
  • The @mcp.tool decorator registers the search_todos function as an MCP tool.
    @mcp.tool
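The handler above calls a `format_todo()` helper that is referenced but not shown. A minimal sketch of what such a formatter might look like, assuming each todo is a dict with `title`, `notes`, and `status` keys (the actual payload returned by `things.search()` may carry more or differently named fields):

```python
def format_todo(todo: dict) -> str:
    """Render a single todo as a readable text block.

    Assumes the todo dict carries 'title', 'notes', and 'status'
    keys; the real things.search() result shape may differ.
    """
    lines = [f"Title: {todo.get('title', '(untitled)')}"]
    if todo.get("notes"):
        lines.append(f"Notes: {todo['notes']}")
    if todo.get("status"):
        lines.append(f"Status: {todo['status']}")
    return "\n".join(lines)
```

The handler then joins these blocks with `\n\n---\n\n`, so each todo appears as a separate, visually delimited section in the tool's string output.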
Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It mentions what fields are searched (title/notes) but doesn't cover important aspects like search behavior (exact match, partial, case sensitivity), result format, pagination, or error conditions. This leaves significant gaps for a search operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief with two sentences that directly address purpose and parameter meaning. The 'Args:' section is slightly redundant but adds clarity. No wasted words, though it could be more front-loaded by integrating the parameter explanation into the main sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which handles return values), the description's main gaps are in behavioral transparency and usage guidelines. For a search tool with no annotations and multiple similar siblings, it should provide more context about search behavior and differentiation, making it minimally adequate but incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter: it explains that 'query' searches in 'todo titles and notes'. With 0% schema description coverage, this compensates somewhat, but doesn't provide details on query syntax, length limits, or special characters. The baseline is appropriate given the single parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
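One way to close the gap on query syntax and length limits, sketched here as an assumption rather than the server's actual behavior, is to normalize and bound the query before handing it to the backend:

```python
# Hypothetical input guard; the real Things backend's limits are unknown.
MAX_QUERY_LEN = 256

def normalize_query(query: str) -> str:
    """Trim surrounding whitespace and enforce a length cap on a search query.

    Raises ValueError for empty or oversized input so the tool can
    return a clear error message instead of silently searching nothing.
    """
    q = query.strip()
    if not q:
        raise ValueError("query must not be empty")
    if len(q) > MAX_QUERY_LEN:
        raise ValueError(f"query exceeds {MAX_QUERY_LEN} characters")
    return q
```

Documenting whichever constraints the guard enforces in the parameter description would then give agents the "valid value range" this dimension asks for.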

Purpose — 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search todos by title or notes' specifies the verb (search), resource (todos), and scope (title/notes). However, it doesn't explicitly distinguish this tool from sibling search tools like 'search_advanced' or 'search_items', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple search-related siblings ('search_advanced', 'search_items'), there's no indication of what differentiates this tool or when it's preferred, leaving the agent to guess.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
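A hypothetical revision of the docstring (not the server's actual text) that front-loads purpose and adds the missing "when to use" guidance might look like this; the match semantics and the behavior of `search_advanced` are assumptions inferred from tool names, not from the Things MCP source:

```python
# Hypothetical docstring rewrite for illustration only.
async def search_todos(query: str) -> str:
    """Search todos by a substring match on title or notes (read-only).

    Use this for quick keyword lookups. For filtering by tag, date, or
    status, prefer search_advanced. Never modifies any todo.

    Args:
        query: Search term to look for in todo titles and notes.
    """
    ...
```

Because FastMCP derives the tool description from the docstring, a rewrite like this would surface the differentiation directly in the agent's tool listing.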

