search_imessages

Search message bodies with case-insensitive LIKE query. Optionally filter by date range (ISO8601), sender, and result limit.

Instructions

Case-insensitive LIKE search over message bodies. Dates are ISO8601.

Input Schema

Name           Required   Description   Default
query          Yes
since          No
until          No
from_contact   No
limit          No                       25
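A hypothetical arguments payload for this tool (all values are made up for illustration; from_contact accepts a phone number or email, which the handler normalizes before comparison):

```json
{
  "query": "dinner",
  "since": "2024-01-01T00:00:00Z",
  "until": "2024-06-30T23:59:59Z",
  "from_contact": "+15551234567",
  "limit": 50
}
```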

Output Schema

Name     Required   Description   Default
result   Yes

Implementation Reference

  • The core handler `search_imessages` in db.py executes the SQL query that searches iMessage bodies. It performs a case-insensitive LIKE search on message text OR a case-sensitive scan of the attributedBody blob (newer macOS stores text only in that NSKeyedArchive blob), with optional filters for date range (since/until) and contact. It returns a list of messages with metadata.
    def search_imessages(
        query: str,
        since: str | None = None,
        until: str | None = None,
        from_contact: str | None = None,
        limit: int = 25,
    ) -> list[dict[str, Any]]:
        if not query or not query.strip():
            raise ValueError("query cannot be empty")
        limit = max(1, min(int(limit), 200))
        with _open() as conn:
            # Search m.text (case-insensitive) OR the raw bytes of m.attributedBody
            # (case-sensitive — newer macOS stores text only in the NSKeyedArchive
            # blob, so the LIKE-on-text path would miss those rows).
            where = [
                "(m.text LIKE ? COLLATE NOCASE "
                "OR instr(m.attributedBody, CAST(? AS BLOB)) > 0)"
            ]
            params: list[Any] = [f"%{query}%", query.encode("utf-8")]
            if since:
                where.append("m.date >= ?")
                params.append(iso_to_apple_ns(since))
            if until:
                where.append("m.date <= ?")
                params.append(iso_to_apple_ns(until))
            join_handle = ""
            if from_contact:
                join_handle = "JOIN handle h ON h.ROWID = m.handle_id"
                where.append("LOWER(h.id) = LOWER(?)")
                params.append(normalize_handle(from_contact))
            sql = f"""
                SELECT m.ROWID AS message_id, m.date, m.is_from_me, m.text, m.attributedBody,
                       (SELECT h2.id FROM handle h2 WHERE h2.ROWID = m.handle_id) AS sender_handle,
                       (
                         SELECT cmj.chat_id FROM chat_message_join cmj
                         WHERE cmj.message_id = m.ROWID LIMIT 1
                       ) AS chat_id,
                       (
                         SELECT COALESCE(NULLIF(c.display_name, ''), c.chat_identifier)
                         FROM chat_message_join cmj
                         JOIN chat c ON c.ROWID = cmj.chat_id
                         WHERE cmj.message_id = m.ROWID LIMIT 1
                       ) AS chat_name
                FROM message m
                {join_handle}
                WHERE {' AND '.join(where)}
                ORDER BY m.date DESC
                LIMIT ?
            """
            params.append(limit)
            rows = conn.execute(sql, params).fetchall()
            return [
                {
                    "message_id": r["message_id"],
                    "chat_id": r["chat_id"],
                    "date": apple_ts_to_iso(r["date"]),
                    "is_from_me": bool(r["is_from_me"]),
                    "sender": None if r["is_from_me"] else r["sender_handle"],
                    "body": _extract_text(r),
                    "chat_name": r["chat_name"],
                }
                for r in rows
            ]
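The dual text/blob match in the WHERE clause above can be illustrated with a minimal in-memory fixture. This is a hypothetical three-column table, not the real chat.db schema; it only demonstrates why rows with a NULL text column still match via the byte scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE message (ROWID INTEGER PRIMARY KEY, text TEXT, attributedBody BLOB)"
)
conn.executemany(
    "INSERT INTO message (text, attributedBody) VALUES (?, ?)",
    [
        ("Lunch tomorrow?", None),                # matched via LIKE ... COLLATE NOCASE
        (None, b"...NSString...lunch plans..."),  # matched via instr() on the blob
        ("unrelated", None),                      # no match
    ],
)
query = "lunch"
rows = conn.execute(
    "SELECT ROWID FROM message "
    "WHERE text LIKE ? COLLATE NOCASE OR instr(attributedBody, CAST(? AS BLOB)) > 0 "
    "ORDER BY ROWID",
    (f"%{query}%", query.encode("utf-8")),
).fetchall()
print([r[0] for r in rows])  # [1, 2]
```

Note that `instr(NULL, ...)` yields NULL, so rows with neither a text match nor a blob are correctly excluded rather than erroring.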
  • The MCP tool decorator and function signature in server.py define the schema/interface for 'search_imessages'. Parameters: query (str, required), since (str, optional), until (str, optional), from_contact (str, optional), limit (int, default 25). The docstring describes it as a case-insensitive LIKE search over message bodies with ISO8601 dates.
    @mcp.tool()
    def search_imessages(
        query: str,
        since: str | None = None,
        until: str | None = None,
        from_contact: str | None = None,
        limit: int = 25,
    ) -> list[dict[str, Any]]:
        """Case-insensitive LIKE search over message bodies. Dates are ISO8601."""
        return db.search_imessages(
            query=query, since=since, until=until, from_contact=from_contact, limit=limit
        )
  • The tool is registered with MCP via the @mcp.tool() decorator on line 47 of server.py. The 'mcp' object is an instance of FastMCP('imessage') created on line 10.
  • Helper functions used by search_imessages: 'apple_ts_to_iso' (converts Apple timestamps to ISO8601 strings), 'iso_to_apple_ns' (converts ISO8601 strings to Apple nanosecond timestamps), and 'normalize_handle' (normalizes phone/email for comparison).
    """Apple-epoch conversions and handle normalization."""
    from __future__ import annotations
    
    from datetime import datetime, timezone
    
    APPLE_EPOCH_OFFSET = 978307200  # seconds from unix epoch to 2001-01-01 UTC
    
    
    def apple_ts_to_iso(apple_ts: int | None) -> str | None:
        """Convert Apple Core Data timestamp to ISO8601 UTC string.
    
        Newer macOS stores date as nanoseconds since 2001-01-01 UTC.
        Older rows stored plain seconds. Heuristic: values > 1e11 are nanoseconds.
        """
        if apple_ts is None or apple_ts == 0:
            return None
        if apple_ts > 10**11:
            unix_ts = apple_ts / 1_000_000_000 + APPLE_EPOCH_OFFSET
        else:
            unix_ts = apple_ts + APPLE_EPOCH_OFFSET
        return datetime.fromtimestamp(unix_ts, tz=timezone.utc).isoformat()
    
    
    def iso_to_apple_ns(iso_str: str) -> int:
        """Convert an ISO8601 string to Apple nanoseconds-since-2001-01-01."""
        dt = datetime.fromisoformat(iso_str.replace("Z", "+00:00"))
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)
        unix_ts = dt.timestamp()
        return int((unix_ts - APPLE_EPOCH_OFFSET) * 1_000_000_000)
    
    
    def normalize_handle(value: str) -> str:
        """Trim whitespace. Keep + for phones, lowercase emails."""
        v = value.strip()
        if "@" in v:
            return v.lower()
        return v
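A quick round-trip check of the conversion arithmetic in the helpers above, with the epoch constant reproduced inline so the snippet runs standalone:

```python
from datetime import datetime, timezone

APPLE_EPOCH_OFFSET = 978307200  # 2001-01-01T00:00:00Z in unix seconds

# ISO8601 -> Apple nanoseconds -> ISO8601 should be lossless for whole seconds.
iso_in = "2024-03-01T12:00:00+00:00"
dt = datetime.fromisoformat(iso_in)
apple_ns = int((dt.timestamp() - APPLE_EPOCH_OFFSET) * 1_000_000_000)

# apple_ns is far above the 1e11 heuristic threshold, so the reverse
# conversion treats it as nanoseconds rather than legacy seconds:
unix_ts = apple_ns / 1_000_000_000 + APPLE_EPOCH_OFFSET
iso_out = datetime.fromtimestamp(unix_ts, tz=timezone.utc).isoformat()
print(iso_out)  # 2024-03-01T12:00:00+00:00
```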
  • Helper function '_extract_text' used in search_imessages to extract message body text, falling back to parsing the attributedBody NSKeyedArchive blob when the text column is empty.
    def _extract_text(row: sqlite3.Row) -> str | None:
        """Return message.text, falling back to a best-effort read of attributedBody.
    
        attributedBody is an NSKeyedArchive blob. We do not parse it fully; we scan
        for the literal NSString payload that most text messages embed so that reply
        messages / richer content on newer macOS still show something useful.
        """
        text = row["text"]
        if text:
            return text
        blob: bytes | None = row["attributedBody"] if "attributedBody" in row.keys() else None
        if not blob:
            return None
        # typedstream layout after the NSString class tag:
        #   ... NSString <class-ref bytes> '+' <length-prefix> <utf-8 bytes>
        # The '+' (0x2b) byte is typedstream's variable-length-field marker.
        idx = blob.find(b"NSString")
        if idx == -1:
            return None
        plus = blob.find(b"+", idx)
        if plus == -1 or plus + 1 >= len(blob):
            return None
        cursor = plus + 1
        length_byte = blob[cursor]
        cursor += 1
        if length_byte == 0x81 and cursor + 2 <= len(blob):
            length = int.from_bytes(blob[cursor : cursor + 2], "little")
            cursor += 2
        elif length_byte == 0x82 and cursor + 4 <= len(blob):
            length = int.from_bytes(blob[cursor : cursor + 4], "little")
            cursor += 4
        elif length_byte < 0x80:
            length = length_byte
        else:
            return None
        try:
            return blob[cursor : cursor + length].decode("utf-8", errors="replace")
        except Exception:
            return None
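The byte layout `_extract_text` scans for can be exercised with a hand-built fixture. The blob below is synthetic (not a real attributedBody archive); it only reproduces the `NSString ... '+' <length> <utf-8>` shape for the single-byte-length case:

```python
payload = "Hello from a reply".encode("utf-8")
# Arbitrary prefix bytes stand in for the real typedstream header/class refs.
blob = (
    b"\x04\x0bstreamtyped"
    + b"NSString\x01\x95\x84"
    + b"+"                      # typedstream variable-length-field marker
    + bytes([len(payload)])     # single length byte (< 0x80)
    + payload
)

idx = blob.find(b"NSString")
plus = blob.find(b"+", idx)
length = blob[plus + 1]         # < 0x80, so it is a direct length byte
text = blob[plus + 2 : plus + 2 + length].decode("utf-8")
print(text)  # Hello from a reply
```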
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It mentions case-insensitive LIKE search and ISO8601 dates but omits details like pagination, rate limits, whether results are limited to the user's account, or error conditions. This is insufficient for an agent to fully understand tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short, front-loaded sentences with no filler. The first sentence states the core operation, and the second adds relevant detail about date format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has an output schema, so return values are partially covered. However, the description lacks context on search scope (e.g., all messages or current account), limitations, or prerequisites. It is minimally adequate but not fully complete for an informed agent decision.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It clarifies that 'query' searches message bodies and that 'since'/'until' are ISO8601 dates, adding value beyond the schema. However, it does not explain 'from_contact' or 'limit', leaving gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs a case-insensitive LIKE search over message bodies and specifies date format as ISO8601. This distinguishes it from siblings like 'get_chat_messages' (which likely retrieves messages by chat) and 'send_imessage' (which sends messages).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a search tool but does not explicitly state when to use it versus alternatives like 'resolve_contact' or 'list_recent_chats'. No exclusions or alternative guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
