leadpipe-mcp
Server Quality Checklist
- Disambiguation 5/5
Each tool has a distinct, non-overlapping purpose: configuration, single/batch ingestion, enrichment, scoring, search, export, and analytics. The single vs. batch ingestion split is clearly delineated, and scoring configuration is cleanly separated from lead scoring execution.
Naming Consistency 4/5
Six of eight tools follow a consistent 'lead_<verb>' pattern (ingest, enrich, export, score, search). Minor deviations exist: 'config_scoring' uses abbreviated verb form instead of 'configure_scoring', and 'pipeline_stats' uses a noun rather than verb pattern like 'get_pipeline_stats'.
Tool Count 5/5
Eight tools is a well-scoped count for a lead scoring pipeline, covering the complete lifecycle: configuration, ingestion (single/batch), data enrichment, AI scoring, search/filtering, multi-format export, and analytics. No redundant or extraneous tools.
Completeness 4/5
Covers the core lead scoring workflow comprehensively (ingest → enrich → score → export). Minor gaps exist: no explicit 'get lead by ID' (must use search) and no delete operation, though these may be intentional for a pipeline-focused tool that exports qualified leads to external CRMs.
Average 3.6/5 across 8 of 8 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
This server provides 8 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- lead_batch_ingest
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses duplicate handling ('skipped (duplicates)') and return value structure ('count of ingested and skipped'), but fails to address idempotency, atomicity of batch operations, partial failure behavior, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely compact with zero redundant words. Both sentences deliver essential information (operation type + batch limits, and return value semantics). However, given the tool's complexity, this brevity borders on under-specification rather than optimal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool handling complex nested objects with zero schema descriptions, no annotations, and no output schema, the description is insufficient. It lacks documentation of the lead object structure, validation rules, field semantics, and detailed error scenarios that would be necessary for correct agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate for the complex nested lead objects (10+ fields including required 'email', enum 'source', etc.). The description only mentions the batch size constraint ('1-100') but completely omits what constitutes a valid lead object, required fields, or the available data attributes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Add multiple leads') and scope ('at once (1-100)'), distinguishing it from the singular 'lead_ingest' sibling through the batch size specification. However, it does not explicitly name the sibling alternative or clarify when batching is preferred over single ingestion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage constraints through the '1-100' range and hints at behavior with 'Returns count...', but provides no explicit guidance on when to use this tool versus 'lead_ingest', nor does it mention prerequisites, rate limits, or error handling strategies.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- lead_search
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions pagination support (referencing limit/offset parameters), but fails to disclose whether the operation is read-only, what data structure is returned, default sorting behavior, or whether text queries support wildcards/fuzzy matching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences totaling 12 words. Information is front-loaded with the core action ('Search and filter leads') followed by filterable attributes and pagination capability. No redundant or wasted language is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters with rich enum values and no output schema, the description covers the basic filtering capabilities but leaves significant gaps. It omits return value structure, authentication requirements, rate limiting, and the business logic of lead statuses (e.g., the difference between 'enriched' and 'scored').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate significantly. The text successfully maps conceptual filters to parameters (e.g., 'score range' implies min_score/max_score usage, 'pagination' implies limit/offset). However, it lacks critical semantics: it doesn't clarify that all parameters are optional (0 required), explain the score scale/range, or describe whether tag filtering uses AND/OR logic.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches and filters leads, specifying the available filter dimensions (text query, status, score range, source, tags). It implicitly distinguishes from siblings like lead_ingest (creation) and lead_export (export), though it doesn't explicitly differentiate from lead_batch_ingest or pipeline_stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like lead_export (which may also access lead data) or pipeline_stats. It omits prerequisites, permission requirements, or scenarios where this search is preferred over other access methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- lead_export
Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only states the basic operation and destinations. It fails to address critical integration behaviors such as whether exports are additive or destructive, authentication requirements for HubSpot/Pipedrive/Google Sheets, or whether the operation is synchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with the primary action front-loaded, and every clause provides necessary information about functionality or parameters. There is no redundant or wasted text, making it appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers all input parameters and destination options, providing a baseline understanding of the tool's inputs. However, given the external integrations involved and lack of output schema, it has significant gaps regarding return values, error handling, authentication requirements, and the specific behavior of the export operation (e.g., create vs. update).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description successfully identifies all three parameters by mentioning the destination targets (mapping to the enum values), lead IDs, and minimum score filtering. While it maps the parameters conceptually, it lacks deeper semantic details such as the format of lead IDs, the valid range for scores, or whether filters can be combined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Export leads') and enumerates the five specific destination platforms/formats, providing clear scope. While it differentiates from siblings like lead_ingest or lead_search through its specific destination list, it does not explicitly contrast with them or clarify when to choose this over lead_batch_ingest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the optional filtering capabilities ('Optionally filter by lead IDs or minimum score'), providing implied context for when to use specific parameters. However, it lacks explicit guidance on when to use this tool versus alternatives, prerequisites for external integrations (like authentication), or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- lead_enrich
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. Adds valuable behavioral context: reveals data source (email domain lookup) and specific enrichment fields. However, omits critical safety traits: unclear if this mutates the lead record or returns data, no mention of rate limits, failure modes, or permissions required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loaded with action, mechanism, and deliverables. Second sentence states the single requirement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool: describes the enrichment action and required input. However, given no annotations and no output schema, gaps remain regarding return format, success/failure behavior, and whether the operation is idempotent or destructive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (lead_id documented). Description reinforces this requirement ('Provide the lead ID') but adds no additional semantic depth regarding format, constraints, or sourcing of the ID beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Enrich' + resource 'lead' + exact data fields (industry, size, country, tech stack) + mechanism (using the email domain). Clearly distinguishes from sibling lead_ingest (creation) and lead_score (scoring) by specifying the company data enrichment function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies prerequisite by requiring lead_id ('Provide the lead ID'), suggesting prior ingestion needed, but fails to explicitly name lead_ingest as the prerequisite tool or clarify when to use this single-lead tool versus batch alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- lead_ingest
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses duplicate rejection policy ('Duplicate emails are rejected'), but omits other critical mutation behaviors: success response format, whether operation is atomic, idempotency guarantees, or error handling beyond duplicates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy. First sentence establishes action and target; second covers required fields, examples, and key constraint. Front-loaded with critical information (duplicate rejection).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters with nested objects and no output schema, description covers the critical path (email requirement, duplicate handling) but leaves significant gaps. No explanation of custom_fields structure, source enumeration, or return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description compensates partially by identifying email as required and listing example optional fields (name, job title, company), but fails to explain 7 other parameters including complex nested 'custom_fields' object, 'tags' array, or 'source' enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Add') and resource ('new lead') with scope ('to the pipeline'). Singular 'lead' clearly distinguishes from sibling lead_batch_ingest, and 'Add' differentiates from lead_search or lead_export.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Identifies required vs optional fields ('email (required) and optional fields'), implying basic usage pattern. However, lacks explicit guidance on when to use lead_batch_ingest instead, or prerequisites like authentication needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- lead_score
Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and effectively discloses the mutating side effect (status update to qualified/disqualified), the scoring range (0-100), and the threshold logic (>=60). It also notes the data points considered (job title, company size, industry, custom rules), though it could clarify error handling or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core action (calculating the score) and its inputs, while the second discloses the critical side effect (status update), making every word earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's mutation behavior and lack of output schema, the description adequately covers the status update side effect and scoring logic. However, it omits what the tool returns (presumably the score value) and does not address error cases (e.g., invalid lead_id) or the relationship to the config_scoring sibling for custom rules.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single lead_id parameter, establishing a baseline of 3. The description mentions job title, company size, and industry as scoring inputs, which provides context about what the tool evaluates, though it does not clarify whether these are fetched from the lead record (implied) or additional parameters (which they are not).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates an AI-powered qualification score (0-100) and updates lead status based on thresholds. It identifies the specific resource (lead) and action (scoring/qualifying), though it does not explicitly differentiate from siblings like lead_enrich or config_scoring.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning the status update side effect (qualified/disqualified), suggesting it should be used when making qualification decisions. However, it lacks explicit guidance on when to use this versus lead_enrich (data augmentation) or config_scoring (rule setup), or prerequisites like lead existence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- config_scoring
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the dual read/write nature and hints at mutable state ('update'), but lacks critical safety context: whether updates are atomic, if they trigger lead re-scoring, permission requirements, or whether changes are reversible. The phrase 'Pass fields to update' implies partial updates are supported but doesn't confirm the merge behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero redundancy. Front-loaded with the core purpose ('View or update'), followed immediately by the distinct usage patterns. Every word serves a specific instructional or definitional purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters including nested custom rule objects) and complete lack of schema descriptions or output schema, the description provides minimum viable guidance for basic usage. However, it omits the return value structure when viewing, validation rules (e.g., whether weights must sum to 1.0), and the behavioral impact of configuration changes on existing leads.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It maps parameters to functional groups ('weights, titles, industries, or custom rules'), which helps interpret the 10 parameters, but fails to explain critical schema constraints like the 0-1 range for weights, the enum values for company sizes (1-10, 11-50, etc.), or the complex nested structure of custom_rules with operators (regex, gt, lt) and point ranges (-50 to 50).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool manages 'lead scoring configuration' with dual capabilities to 'view or update'. It effectively distinguishes from sibling tools like lead_score (which scores individual leads) and lead_ingest (which adds leads) by specifying this is for configuration management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit instructions for the two operational modes: 'Pass empty object to view current config' versus 'Pass fields to update'. While it clearly explains how to use the tool, it doesn't explicitly state when to prefer this over alternatives or warn about concurrent modification risks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- pipeline_stats
Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses what data is accessed (pipeline data) and what computations are performed (breakdowns, averages, conversion rates), but omits operational details like real-time vs cached data, time range limitations, or read-only safety guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence structure with colon-separated list of metrics. Front-loaded with action verb 'Get'. Zero redundancy; every phrase specifies either the operation or a specific returned metric.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero input parameters and absence of output schema, description compensates by listing specific analytics returned. However, lacks structural details about output format (JSON object structure, nesting of breakdowns) that would fully complete the specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present per input schema, establishing baseline score of 4. No parameters require semantic explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'lead pipeline analytics' and enumerates exact metrics returned (total leads, status/source breakdown, average score, score distribution, conversion rates). Distinct from sibling operational tools like lead_ingest or lead_search by focusing on aggregate analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance provided. However, the specific list of aggregate metrics (breakdowns, conversion rates) implies this is for analytics/reporting versus individual lead operations (lead_search) or data export (lead_export). Lacks explicit comparison to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
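The formula above can be sketched as a short calculation. This is a minimal sketch of the published weights, not Glama's actual implementation; the dimension keys and the sample tool scores in the usage example are illustrative.

```python
# Tool Definition Quality: six dimensions, each scored 1-5.
# Weights are taken from the formula described above.
DIM_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 TDQS for a single tool."""
    return sum(DIM_WEIGHTS[d] * scores[d] for d in DIM_WEIGHTS)

def overall_quality(tool_scores: list, coherence_dims: list) -> float:
    tdqs = [tool_tdqs(s) for s in tool_scores]
    # 60% mean + 40% minimum: one poorly described tool drags the score down.
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    # Server Coherence: four equally weighted dimensions.
    coherence = sum(coherence_dims) / len(coherence_dims)
    # Overall: 70% definition quality, 30% coherence.
    return 0.7 * definition_quality + 0.3 * coherence
```

For instance, a hypothetical server whose tools have TDQS values of 3.0 and 5.0 gives a definition quality of 0.6 × 4.0 + 0.4 × 3.0 = 3.6; with coherence dimensions of 5, 4, 5, 4 (as this server scored), the overall score is 0.7 × 3.6 + 0.3 × 4.5 = 3.87, tier A.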
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/enzoemir1/leadpipe-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.