machfive
Server Details
Generate hyper-personalized cold email sequences via MachFive API.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Bluecraft-AI/machfive-mcp
- GitHub Stars: 2
- Server Listing: MachFive Cold Email
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 6 of 6 tools scored.
Each tool has a clearly distinct purpose with no overlap: export_list retrieves completed results, generate_batch initiates batch processing, generate_sequence handles single leads synchronously, get_list_status checks status, list_campaigns lists campaigns, and list_lists lists batch jobs. The descriptions explicitly differentiate their roles, preventing misselection.
All tool names follow a consistent verb_noun pattern (e.g., export_list, generate_batch, list_campaigns) with clear, descriptive terms. There are no deviations in style or convention, making the set predictable and easy to understand.
With 6 tools, this server is well-scoped for email sequence generation and management. Each tool serves a specific function in the workflow, from setup (list_campaigns) to execution (generate_sequence/generate_batch) and retrieval (export_list/get_list_status), with no unnecessary redundancy.
The tool set covers the core lifecycle of email sequence generation: listing campaigns, generating sequences (both single and batch), checking status, and exporting results. A minor gap is the lack of tools for managing campaigns (e.g., create/update/delete campaigns), but agents can still perform essential workflows effectively.
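The lifecycle described above (list campaigns, generate, poll, export) can be sketched as a single driver function. This is a minimal sketch, assuming a hypothetical `call(tool, **args)` MCP client helper; the helper, its return shapes, and the campaign-picking strategy are illustrative assumptions, not part of the server itself:

```python
import time

def run_batch_workflow(call, leads_json, poll_interval=15):
    """Drive the full batch lifecycle: pick a campaign, submit leads,
    poll until a terminal status, then export the results.
    `call(tool, **args)` is a hypothetical MCP client helper."""
    campaigns = call("list_campaigns")
    campaign_id = campaigns[0]["id"]  # in practice, ask the user to pick one

    batch = call("generate_batch", campaign_id=campaign_id, leads_json=leads_json)
    list_id = batch["list_id"]

    while True:
        status = call("get_list_status", list_id=list_id)
        if status["processing_status"] in ("completed", "failed"):
            break
        time.sleep(poll_interval)  # the docs suggest polling every 15-30 s

    if status["processing_status"] == "failed":
        raise RuntimeError("batch failed; submit a new batch")
    return call("export_list", list_id=list_id, format="json")
```

The sketch mirrors the guidance in the individual tool descriptions: `generate_batch` returns immediately, so the caller owns the polling loop and only exports once `get_list_status` reports completion.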
Available Tools
6 tools

export_list (Export List): Read-only, Idempotent
Download the generated email sequences for a COMPLETED list.
Only call this AFTER get_list_status shows processing_status = 'completed'. If the list is not yet completed, you'll get a 409 error — poll first.
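The completed-first rule can be enforced with a small guard that checks status before exporting, avoiding the 409 entirely. A minimal sketch, assuming a hypothetical `call(tool, **args)` MCP client helper (not part of this server):

```python
def export_when_ready(call, list_id, fmt="json"):
    """Export a list only after it has completed, sidestepping the 409
    that a premature export_list call would return.
    `call(tool, **args)` is a hypothetical MCP client helper."""
    status = call("get_list_status", list_id=list_id)["processing_status"]
    if status != "completed":
        raise RuntimeError(f"list {list_id} is '{status}'; keep polling get_list_status")
    return call("export_list", list_id=list_id, format=fmt)
```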
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Output format: 'json' (structured data) or 'csv' (raw CSV for sending tools). | json |
| list_id | Yes | List UUID to export. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable context beyond this: it discloses that the tool requires a completed list, warns about a 409 error if used prematurely, and mentions polling behavior. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by critical usage guidelines. Every sentence earns its place by providing essential information without redundancy, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (e.g., readOnlyHint, idempotentHint), and the presence of an output schema, the description is complete. It covers purpose, prerequisites, error conditions, and sibling tool integration, leaving no significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description does not add any parameter-specific details beyond what the schema provides, such as explaining 'list_id' further or clarifying 'format' usage. A baseline score of 3 is appropriate, as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Download') and resource ('generated email sequences for a COMPLETED list'), distinguishing it from siblings like 'generate_batch' or 'get_list_status'. It precisely defines what the tool does beyond just the name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Only call this AFTER get_list_status shows processing_status = "completed"') and when not to ('If the list is not yet completed, you'll get a 409 error — poll first'). It names an alternative tool ('get_list_status') for checking readiness, providing clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_batch (Generate Batch)
Submit multiple leads for batch email sequence generation (ASYNC).
Returns IMMEDIATELY with a list_id. Processing runs in the background. After calling this, poll get_list_status every 15-30 seconds until processing_status is 'completed' or 'failed', then call export_list.
You must have a campaign_id first. Call list_campaigns if you don't have one.
| Name | Required | Description | Default |
|---|---|---|---|
| list_name | No | Display name for this batch in MachFive UI. | |
| leads_json | Yes | JSON array of lead objects. Each MUST have "email". Optional: name, title, company, company_website, linkedin_url. Example: '[{"email":"jane@acme.com","name":"Jane Doe"}]' | |
| campaign_id | Yes | Campaign UUID from list_campaigns. | |
| email_count | No | Number of emails per lead, 1-5. | |
| approved_ctas | No | Comma-separated CTAs. Omit to use campaign defaults. | |
| campaign_angle | No | Additional context/angle for personalization. | |
| email_signature | No | Signature appended to each email. | |
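The `leads_json` parameter is a JSON-encoded array of lead objects, each requiring an "email" field. A small builder can validate leads before serializing; this is a sketch, and the field whitelist simply restates the fields listed in the schema above:

```python
import json

def build_leads_json(leads):
    """Serialize lead dicts into the leads_json string generate_batch expects.
    Raises ValueError if any lead is missing the required "email" field
    or carries a field the schema does not list."""
    allowed = {"email", "name", "title", "company", "company_website", "linkedin_url"}
    for lead in leads:
        if "email" not in lead:
            raise ValueError("every lead must have an 'email' field")
        unknown = set(lead) - allowed
        if unknown:
            raise ValueError(f"unexpected lead fields: {unknown}")
    return json.dumps(leads)

leads_json = build_leads_json([
    {"email": "jane@acme.com", "name": "Jane Doe"},
    {"email": "bob@initech.com", "title": "CTO"},
])
```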
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations: it explains the asynchronous nature ('Returns IMMEDIATELY with a list_id. Processing runs in the background'), provides specific polling instructions ('poll get_list_status every 15-30 seconds'), and describes the required workflow. While annotations cover basic hints (readOnlyHint=false, destructiveHint=false), the description adds crucial operational details that aren't captured in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise: it starts with the core purpose, immediately explains the async behavior, provides clear workflow steps, and ends with prerequisites. Every sentence serves a distinct purpose with zero wasted words, and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (async batch processing with multi-step workflow), the description provides complete contextual guidance. It explains the entire operational flow, references sibling tools appropriately, and since there's an output schema, it doesn't need to detail return values. The combination of description, annotations, and schema provides comprehensive coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all 7 parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema; it focuses on workflow rather than parameter details. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Submit multiple leads for batch email sequence generation') and resource ('leads'), with the ASYNC qualifier distinguishing it from immediate processing tools. It explicitly differentiates from siblings by mentioning get_list_status and export_list for follow-up steps, and list_campaigns for prerequisite data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: after obtaining a campaign_id (via list_campaigns), and it outlines the complete workflow including polling get_list_status and calling export_list. It clearly distinguishes from alternatives by specifying this is for batch processing with asynchronous behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_sequence (Generate Email Sequence)
Generate a personalized cold email sequence for ONE lead.
This is SYNCHRONOUS — the request takes 3-10 minutes because MachFive researches the prospect and crafts unique emails. Do NOT retry if it seems slow; wait for the response.
You must have a campaign_id first. Call list_campaigns if you don't have one. If the request times out, use the returned list_id with get_list_status and export_list to recover results.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Lead's full name (improves personalization). | |
| email | Yes | Lead's email address (REQUIRED). | |
| title | No | Lead's job title (improves personalization). | |
| company | No | Lead's company name (improves personalization). | |
| list_name | No | Display name for this list in MachFive UI. | |
| campaign_id | Yes | Campaign UUID from list_campaigns. | |
| email_count | No | Number of emails in sequence, 1-5. | |
| linkedin_url | No | LinkedIn profile URL for deeper personalization. | |
| approved_ctas | No | Comma-separated CTAs, e.g. 'Direct Meeting CTA, Lead Magnet CTA'. Omit to use campaign defaults. | |
| campaign_angle | No | Additional context/angle for personalization. | |
| company_website | No | Company URL for AI research. | |
| email_signature | No | Signature appended to each email. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations. It discloses the synchronous nature and long execution time ('3-10 minutes'), explains that MachFive researches the prospect, warns against retrying ('Do NOT retry if it seems slow'), and provides recovery steps for timeouts. Annotations cover basic hints (e.g., not read-only, not destructive), but the description enriches this with practical operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, with the core purpose stated first. Each sentence adds critical information—synchronous behavior, prerequisites, timeout handling—without redundancy. It efficiently covers key usage scenarios in four concise sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (12 parameters, synchronous long-running operation) and the presence of an output schema, the description is highly complete. It addresses execution behavior, prerequisites, error recovery, and sibling tool relationships, leaving no significant gaps for agent understanding. The output schema likely handles return values, so the description appropriately focuses on usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 12 parameters. The description does not add any parameter-specific details beyond what the schema provides (e.g., it doesn't explain parameter interactions or usage nuances). However, it implicitly contextualizes parameters by mentioning 'personalization' and 'AI research,' which aligns with schema descriptions like 'improves personalization.'
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate a personalized cold email sequence for ONE lead.' It specifies the verb ('generate'), resource ('personalized cold email sequence'), and scope ('for ONE lead'), distinguishing it from sibling tools like generate_batch (which likely handles multiple leads).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when and how to use this tool. It states prerequisites ('You must have a campaign_id first. Call list_campaigns if you don't have one.'), mentions an alternative for timeout recovery ('use the returned list_id with get_list_status and export_list'), and implicitly contrasts with generate_batch by emphasizing 'ONE lead.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_list_status (Get List Status): Read-only, Idempotent
Check the processing status of a lead list.
Use this to POLL after calling generate_batch. Call every 15-30 seconds until processing_status is 'completed' or 'failed'. When completed, call export_list. When failed, submit a new batch.
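The polling guidance above translates to a bounded loop: check every 15-30 seconds, stop on a terminal status, and give up after a deadline so the agent never spins forever. A minimal sketch, assuming a hypothetical `call(tool, **args)` MCP client helper and an illustrative `max_wait` bound:

```python
import time

def poll_list_status(call, list_id, interval=20, max_wait=900):
    """Poll get_list_status until the list reaches a terminal status
    ('completed' or 'failed'), or raise after max_wait seconds.
    `call(tool, **args)` is a hypothetical MCP client helper."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        status = call("get_list_status", list_id=list_id)["processing_status"]
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)  # the docs suggest 15-30 s between polls
    raise TimeoutError(f"list {list_id} still processing after {max_wait}s")
```

On 'completed', proceed to export_list; on 'failed', submit a new batch, as the description instructs.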
| Name | Required | Description | Default |
|---|---|---|---|
| list_id | Yes | List UUID from generate_batch or generate_sequence response. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation with open-world semantics. The description adds valuable behavioral context beyond annotations: it specifies polling frequency (15-30 seconds), expected status values ('completed' or 'failed'), and next steps in the workflow, which are crucial for effective tool use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by concise usage instructions in bullet-like sentences. Every sentence adds critical information without redundancy, making it highly efficient and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (e.g., readOnlyHint, idempotentHint), and the presence of an output schema, the description is complete. It covers purpose, usage guidelines, and behavioral context without needing to explain return values, which are handled by the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'list_id' parameter fully documented. The description adds no additional parameter details beyond what the schema provides, such as format examples or constraints, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('check the processing status') and resource ('of a lead list'), distinguishing it from siblings like 'export_list' (which exports completed lists) and 'generate_batch' (which initiates processing). It precisely defines the tool's role in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('after calling generate_batch'), when to call it ('every 15-30 seconds'), and what to do based on outcomes ('when completed, call export_list; when failed, submit a new batch'). It provides clear alternatives and context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campaigns (List Campaigns): Read-only, Idempotent
List campaigns in the user's MachFive workspace.
CALL THIS FIRST before generate_sequence or generate_batch — you need a campaign ID to generate emails. If the user hasn't specified a campaign, call this and ask them to pick one.
Returns JSON array of campaigns with id, name, and created_at. Use the 'id' field as campaign_id in generate calls.
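Extracting a campaign_id from the returned JSON array is mechanical; the sketch below also applies the same case-insensitive name matching the 'query' parameter offers, client-side. The helper name and its selection policy (first match) are illustrative assumptions:

```python
import json

def pick_campaign_id(raw_json, name_query=None):
    """Pick a campaign id from the JSON array list_campaigns returns.
    Each entry has id, name, and created_at; the 'id' field is what
    generate_sequence/generate_batch expect as campaign_id."""
    campaigns = json.loads(raw_json)
    if name_query:
        campaigns = [c for c in campaigns if name_query.lower() in c["name"].lower()]
    if not campaigns:
        raise LookupError("no matching campaign; ask the user to pick one")
    return campaigns[0]["id"]
```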
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Search string to filter campaigns by name (case-insensitive substring match). Leave empty to list all. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds valuable context beyond annotations: it explains the return format ('JSON array of campaigns with id, name, and created_at') and specifies how to use the output ('Use the 'id' field as campaign_id in generate calls'), which aids in tool chaining. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidelines and output details. Every sentence earns its place: the first states the action, the second provides critical usage context, and the third explains the return format and how to use it. No wasted words, and the structure is logical and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter), rich annotations (covering safety and behavior), and the presence of an output schema (implied by 'Returns JSON array'), the description is complete. It adds necessary context like usage sequencing and output usage, compensating well for any gaps, making it fully adequate for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'query' parameter. The description does not add any parameter-specific information beyond what the schema provides (e.g., it doesn't explain search behavior or format details). A baseline score of 3 is appropriate, as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List campaigns') and resource ('in the user's MachFive workspace'), distinguishing it from siblings like 'list_lists' (which lists lists) or 'generate_sequence' (which generates emails). It explicitly identifies the target resource as campaigns, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'CALL THIS FIRST before generate_sequence or generate_batch — you need a campaign ID to generate emails. If the user hasn't specified a campaign, call this and ask them to pick one.' It names specific alternatives (generate_sequence, generate_batch) and explains the prerequisite role, offering clear context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_lists (List Lead Lists): Read-only, Idempotent
List lead lists (batch jobs) in the user's MachFive workspace.
Useful for browsing past batches, checking what's in progress, or finding a list_id to export. Results are ordered newest first.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return, 1-100. | |
| offset | No | Pagination offset. | |
| status | No | Filter by processing status: 'pending', 'processing', 'completed', or 'failed'. | |
| campaign_id | No | Filter by campaign UUID. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds valuable context beyond annotations: it specifies that results are ordered newest first, which is not indicated in annotations or schema. However, it does not mention rate limits or authentication needs, but annotations provide a solid baseline.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second provides usage guidelines and behavioral context (ordering). Every sentence earns its place with no wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a read-only list operation with filtering and pagination), rich annotations (readOnlyHint, openWorldHint, etc.), 100% schema coverage, and the presence of an output schema, the description is complete enough. It covers purpose, usage, and key behavioral trait (ordering), without needing to explain return values due to the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the input schema (e.g., limit, offset, status, campaign_id). The description does not add any parameter-specific details beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('lead lists (batch jobs)'), specifies the scope ('in the user's MachFive workspace'), and distinguishes it from siblings by mentioning its use for browsing past batches, checking progress, or finding list IDs for export. This is specific and differentiates from tools like 'export_list' or 'get_list_status'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: 'Useful for browsing past batches, checking what's in progress, or finding a list_id to export.' It implies usage scenarios but does not explicitly state when not to use it or name alternatives among siblings, such as 'list_campaigns' for a different resource type.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.