iMessage MCP Server
Server Quality Checklist
- This repository includes a README.md file.
- Add a LICENSE file by following GitHub's guide. MCP servers without a LICENSE cannot be installed.
- Latest release: v0.1.0
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
- No known security issues or vulnerabilities reported.
- Add related servers to improve discoverability.
Tool Scores
- Behavior 2/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to clarify delivery confirmations, rate limits, thread creation behavior, or error scenarios beyond the basic 'Send' verb which merely restates the tool name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with clear purpose statement. The 'Args:' block is slightly unconventional for MCP descriptions but efficiently documents parameters without verbosity. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter tool with existing output schema (per context signals), but gaps remain regarding sibling differentiation ('send_group_message') and iMessage-specific behaviors (e.g., blue bubble failures, read receipts). Minimum viable but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates well for 0% schema description coverage by documenting both parameters. Provides a valuable format example for the recipient ('+15551234567') and clarifies the text content. Does not cover constraints (e.g., character limits, supported media types), which prevents a 5.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5 — Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action ('Send') and resource ('iMessage') with specific target types (phone number or email). However, it does not explicitly distinguish from sibling tool 'send_group_message'—the singular 'a phone number' offers only implicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus 'send_group_message' or other messaging siblings. No mention of prerequisites (e.g., contact requirements), platform restrictions (Apple ecosystem), or failure conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
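The critiques above (undisclosed side effects, no sibling differentiation, no usage guidance) can all be addressed inside the tool's docstring. Below is a minimal sketch of what a fuller send_message description might look like. In FastMCP-style Python servers the function docstring becomes the description an agent sees; here it is written as a plain function so the docstring can stand alone, and every behavior it claims (thread creation, Apple-ecosystem requirement, failure modes) is an illustrative assumption, not taken from this repository:

```python
# Sketch of a behaviorally complete tool description. All concrete behaviors
# in the docstring are hypothetical examples of the kind of disclosure the
# review above asks for, not documented facts about this server.
def send_message(recipient: str, message: str) -> str:
    """Send an iMessage to a single phone number or email address.

    Side effects: delivers a message to a real person; there is no undo.
    Creates a new 1:1 thread if none exists. Requires macOS with Messages
    signed in; fails if the recipient cannot be reached via iMessage or SMS.

    Use send_group_message instead when targeting a named group chat.

    Args:
        recipient: Phone number in E.164 format (e.g. +15551234567) or an
            email address registered with iMessage.
        message: Plain-text content to send.
    """
    raise NotImplementedError  # illustration only
```

A docstring like this covers Behavior (side effects, prerequisites, failure modes) and Usage Guidelines (the explicit "use X instead" line) without adding much token cost.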
- Behavior 3/5
With no annotations provided, the description carries the full disclosure burden. It successfully indicates the data is ordered by recency ('recent') and that returned messages are previews only, not full content. However, it lacks disclosure on what 'recent' means (time window), whether the operation is read-only (implied but not stated), pagination behavior beyond the limit, or any rate limiting concerns.
- Conciseness 4/5
The description is appropriately concise with two distinct sections: a clear action statement followed by parameter documentation. The main sentence front-loads the core purpose. The 'Args:' format is slightly informal but functional. No redundant or filler text is present.
- Completeness 3/5
For a simple 1-parameter tool with an output schema, the description is minimally sufficient. It covers the core behavior and parameter semantics. However, given the lack of annotations and the potential ambiguity between this and get_messages, it should clarify the scope (conversations vs individual messages) and mention that it returns conversation metadata rather than full chat history.
- Parameters 4/5
Schema description coverage is 0% (the schema contains only 'title' and 'default' but no 'description' field). The description compensates by explicitly documenting the limit parameter via the Args section: 'Max number of conversations to return (default 20)'. This provides clear semantics for the single optional parameter that the schema fails to describe.
- Purpose 4/5
The description clearly states the verb 'List' and identifies the resource as 'iMessage/SMS conversations'. It adds valuable scope qualifiers 'recent' and 'last message preview' that distinguish this from sibling tools like get_messages (which presumably returns full message content) and search_messages. It doesn't explicitly state the distinction from get_messages in the description text, but the focus on 'conversations' vs 'messages' provides implicit differentiation.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives like get_messages or search_messages. There is no mention of prerequisites, such as needing specific permissions for iMessage access, or when a user should prefer this conversation-level view over retrieving full message threads.
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions case-insensitive matching, which is useful behavioral context. However, it lacks disclosure of other important traits such as whether the operation is read-only, whether it searches archived messages, or any rate limiting.
- Conciseness 4/5
The description is efficiently organized with a clear purpose statement followed by an 'Args:' section. There is no redundant language or filler. The formatting is slightly unconventional but highly scannable, with every line contributing necessary information not present in the schema.
- Completeness 3/5
Given the simple 2-parameter structure and presence of an output schema (removing the need to describe return values), the description covers the essential scope and functionality. However, it could be improved by noting query constraints (e.g., minimum length) or explicitly confirming the read-only nature of the search operation.
- Parameters 4/5
Schema description coverage is 0% (only titles provided). The description fully compensates by documenting both parameters: 'query' includes semantics (text to search for) and constraints (case-insensitive), while 'limit' includes purpose (max results) and default value (30).
- Purpose 4/5
The description states 'Search across all iMessage conversations for messages containing a query string,' providing a specific verb (search), resource (messages), and scope (across all conversations). While it implicitly distinguishes from 'get_messages' by emphasizing global search functionality, it does not explicitly contrast with siblings.
- Usage Guidelines 2/5
The description offers no guidance on when to use this tool versus alternatives like 'get_messages' (likely for retrieving specific conversation history) or 'list_conversations'. There are no explicit when-to-use or when-not-to-use conditions.
- Behavior 2/5
No annotations provided, so description carries full burden. While 'Send' implies a write operation, there is no disclosure of failure modes (what if group doesn't exist?), side effects, rate limits, or whether the operation is idempotent. Description only states the happy path.
- Conciseness 4/5
Brief and front-loaded with the action sentence. The 'Args:' section is slightly informal but necessary given empty schema descriptions. No redundant text, every line serves documentation purpose.
- Completeness 3/5
Adequate for a simple 2-parameter messaging tool with output schema available (return values need not be described). Missing behavioral context regarding group existence requirements and error cases prevents a higher score.
- Parameters 4/5
Schema has 0% description coverage (titles only). Description compensates by clarifying 'group_name' is the display name (distinguishing from ID) and 'text' is the message content. Adds essential semantics missing from structured fields.
- Purpose 5/5
Clear specific verb ('Send'), resource ('message'), and target ('named group chat'). Explicitly distinguishes from sibling tool 'send_message' by specifying group chat context.
- Usage Guidelines 2/5
No guidance on when to use this vs 'send_message' (individual vs group), no prerequisites mentioned (e.g., whether group must exist first), and no error handling guidance.
- Behavior 3/5
With no annotations, description carries full burden. It discloses the read-only nature ('Look up') and mentions output ('conversation stats'), but lacks safety context (permissions required, rate limits) or side-effect warnings.
- Conciseness 4/5
Efficient two-part structure with front-loaded purpose and indented Args block. The Python-docstring style 'Args:' formatting is slightly informal but clear. No wasted words.
- Completeness 4/5
Appropriate for complexity: single parameter, output schema exists (so return values needn't be detailed). Acknowledges output type ('conversation stats'). Could mention error cases (contact not found) but sufficient for selection.
- Parameters 5/5
Excellent compensation for 0% schema coverage. The Args section provides semantic meaning (identifier = phone OR email) and crucial format guidance (e.g. +15551234567) that the schema lacks.
- Purpose 4/5
Clear specific verb ('Look up') and resource ('contact/handle'), and distinguishes from siblings by specifying it returns 'conversation stats' rather than message content. Slightly awkward phrasing ('contact/handle') prevents a 5.
- Usage Guidelines 2/5
No guidance on when to use this versus siblings like 'search_messages' or 'list_conversations'. No mention that this requires an exact identifier (phone/email) versus partial matching.
- Behavior 3/5
No annotations provided, so description carries full burden. It adds valuable behavioral context: 'most recent first' ordering and default 50. However, it lacks explicit read-only safety confirmation, error handling details (what happens if chat_identifier not found), and pagination behavior.
- Conciseness 4/5
Front-loaded purpose statement followed by Args section. Information density is high with minimal waste. The Args format slightly deviates from natural language but remains scannable and functional.
- Completeness 4/5
Appropriately complete given the presence of an output schema (covering return values) and simple flat parameter structure. Both parameters are documented. Minor gap: could mention error cases (invalid identifier) or if additional pagination exists beyond the limit parameter.
- Parameters 5/5
With 0% schema description coverage, the description fully compensates by documenting both parameters: chat_identifier includes format examples (+15551234567) and allowed types (phone, email, group), while limit includes default value and semantics. This exceeds the burden for low-coverage schemas.
- Purpose 5/5
Clear specific verb 'Get' with resource 'messages' and scope 'from a specific conversation'. Implicitly distinguishes from sibling search_messages (specific vs search) and send_message (read vs write).
- Usage Guidelines 3/5
Provides implied usage via 'specific conversation' which suggests using this when the chat identifier is known, but lacks explicit guidance like 'Use search_messages to find messages across conversations' or prerequisites such as needing the chat_identifier from list_conversations.
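Usage Guidelines is the weakest dimension for nearly every tool above, and the fix is cheap: a one-line routing hint appended to each read tool's description. A hypothetical sketch follows; the tool names come from the review text, but the guidance wording and the idea of storing it in a lookup table are assumptions for illustration:

```python
# Hypothetical "when to use" routing hints for the three read-side tools,
# written so an agent can disambiguate them at selection time.
USAGE_GUIDANCE = {
    "list_conversations": (
        "Use to discover chat identifiers and see last-message previews; "
        "does not return full message history."
    ),
    "get_messages": (
        "Use when you already have a chat_identifier (e.g. from "
        "list_conversations) and want that thread's recent messages."
    ),
    "search_messages": (
        "Use when you do not know which conversation holds the text; "
        "searches across all conversations by substring."
    ),
}

def with_guidance(name: str, base_description: str) -> str:
    """Append the routing hint for a tool to its base description."""
    return f"{base_description} {USAGE_GUIDANCE[name]}"
```

Appending `with_guidance("get_messages", ...)`-style hints would likely lift each tool's Usage Guidelines score, since the rubric explicitly rewards "use X instead of Y when Z" statements.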
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Two badge variants are available: a Card badge and a Score badge. Copy the snippet for either from the server page into your README.md.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
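Before committing a glama.json, you can sanity-check it locally. This is a minimal sketch: the only requirements it enforces (valid JSON, a $schema field, a non-empty maintainers list of strings) are read off the example above, and any further validation Glama performs is not covered here:

```python
# Quick local sanity check for a glama.json document before committing.
# Required fields ($schema, maintainers) are taken from the example above;
# anything beyond that is an assumption, not Glama's actual validator.
import json

def check_glama_json(text: str) -> list:
    """Return a list of problems found in a glama.json document."""
    problems = []
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as err:
        return [f"not valid JSON: {err}"]
    if "$schema" not in doc:
        problems.append("missing $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    elif not all(isinstance(m, str) and m for m in maintainers):
        problems.append("maintainers entries must be non-empty strings")
    return problems
```

Running it on the example document should return an empty list; a file with an empty maintainers array would be flagged before GitHub authentication fails to match you as a maintainer.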
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) on a 1–5 scale across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
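The weighting above can be made concrete with a short sketch. The weights and tier cutoffs come from the text; the assumption that a TDQS is a weighted mean of the six dimension scores (rather than some other aggregation) is mine, and the sample inputs in the usage note are just the first tool's scores from this page:

```python
# Sketch of the scoring formula described above. Weights and tier cutoffs
# are from the text; per-tool aggregation as a weighted mean is an assumption.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(dims: dict) -> float:
    """Tool Definition Quality Score: weighted mean of six 1-5 dimensions."""
    return sum(DIM_WEIGHTS[k] * v for k, v in dims.items())

def definition_quality(tool_scores: list) -> float:
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)

def overall(def_quality: float, coherence: float) -> float:
    """Overall score: 70% definition quality + 30% server coherence."""
    return 0.7 * def_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier (B and above is passing)."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, plugging in the first tool's dimension scores above (Purpose 4, Usage 2, Behavior 2, Parameters 4, Conciseness 4, Completeness 3) gives a TDQS of 3.1 under this weighted-mean assumption, and the minimum-TDQS term means one such tool drags the whole server toward its score.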
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/viraatdas/imessage-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.