generect-mcp
Server Details
B2B lead generation and company search through Generect Live API for sales prospecting.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

generate_email
Generate email by first/last name and domain via Generect Email Generator
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Company domain without protocol (e.g., generect.com) | |
| last_name | Yes | Last name of the person | |
| first_name | Yes | First name of the person | |
| timeout_ms | No | Request timeout in milliseconds | |
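Based on the parameter table above, a call's arguments might look like the following sketch. The name, domain, and timeout values are illustrative, and the flat JSON shape is an assumption since the raw input schema is not shown:

```json
{
  "first_name": "Jane",
  "last_name": "Doe",
  "domain": "generect.com",
  "timeout_ms": 10000
}
```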
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the full disclosure burden. While it identifies the external service (Generect), it does not clarify whether generated emails are pattern-based guesses or verified addresses, whether the operation is idempotent, what rate limits apply, or, critically for a generation tool lacking an output schema, what data structure is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is appropriately front-loaded. Minor inefficiency in 'via Generect Email Generator' (repetition of 'email') and slightly awkward preposition 'by' instead of 'from' keep it from a 5, but generally tight and readable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% input schema coverage, the input side is well handled. However, given no output schema and no annotations, the description should compensate by describing return values or success/failure modes. It mentions the external service but leaves critical gaps in explaining what the agent receives back.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully documents all four parameters including timeout_ms. The description reinforces the key inputs (first/last name, domain) but doesn't add semantic value beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates emails using first/last name and domain via a specific service (Generect). It distinguishes from siblings like search_leads and get_lead_by_url which retrieve existing data, while this creates new email addresses. Minor ambiguity around 'generate' (compose vs construct address) prevents a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus searching existing leads (search_leads) or retrieving verified records. No mention of prerequisites like valid domain format or when to use the optional timeout_ms parameter compared to default behavior.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_lead_by_url
Get Lead by LinkedIn URL
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | LinkedIn profile URL (e.g., https://www.linkedin.com/in/username/) | |
| posts | No | Include posts data | |
| comments | No | Include comments data | |
| timeout_ms | No | Request timeout in milliseconds | |
| inexact_company | No | Allow inexact company matching | |
| people_also_viewed | No | Include people also viewed | |
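A hedged sketch of the arguments, drawn from the table above. The URL matches the schema's own example; the boolean types for the optional flags are an assumption, since the raw schema types are not shown:

```json
{
  "url": "https://www.linkedin.com/in/username/",
  "posts": true,
  "comments": false,
  "timeout_ms": 15000
}
```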
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what data constitutes a 'Lead', default inclusion behavior for the optional data flags (posts/comments), error handling for invalid URLs, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The four-word description is front-loaded and contains no fluff, but it is undersized for a 6-parameter tool with complex behavioral flags, leaving significant gaps that require elaboration on usage patterns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters controlling data inclusion and no output schema, the description lacks necessary context about return structure, default behaviors for optional fields, and how the boolean flags affect the response payload.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline score of 3. The description adds no supplemental context for the boolean flags (e.g., performance implications of including posts) or timeout parameter, but the schema adequately documents each parameter's basic purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get Lead by LinkedIn URL' is tautological, essentially restating the tool name. While it identifies the action (Get) and resource (Lead), it fails to specify the scope of data returned or distinguish this direct lookup from the sibling search_leads tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this specific lookup versus the sibling search_leads tool, nor are prerequisites (like URL format requirements) or optimal use cases mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health
Health check Generect API via a quick lead-by-link request
| Name | Required | Description | Default |
|---|---|---|---|
| url | No | LinkedIn profile URL to validate (defaults to a public profile) | |
| timeout_ms | No | Request timeout in milliseconds | |
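Since both parameters are optional and the url defaults to a public profile, a minimal health-check call could pass only a timeout. This shape is a sketch, not a confirmed payload:

```json
{
  "timeout_ms": 5000
}
```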
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It mentions the request is 'quick' but fails to explain what constitutes success/failure, what the response contains (status code vs. lead data), or whether it consumes API quota. The mention of 'lead-by-link' creates ambiguity about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the primary purpose. It is appropriately brief, though the phrasing 'Health check... via' is slightly awkward. No wasted words, but could benefit from a second sentence explaining the output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a health check tool with no output schema or annotations, the description inadequately explains what the tool returns (health status object vs. actual lead data) or how to interpret the results. Given the sibling tool 'get_lead_by_url', the ambiguity about whether this returns lead data constitutes a significant gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the schema adequately documents both parameters. The description adds the phrase 'lead-by-link' which loosely connects to the 'url' parameter's purpose, but doesn't add syntax details, format constraints, or usage examples beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the action ('Health check') and target ('Generect API'), and mentions the mechanism ('lead-by-link request'). However, it doesn't clearly distinguish this from the sibling tool 'get_lead_by_url', leaving ambiguity about whether this returns actual lead data or just a status check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. It should specify to use this for API connectivity verification before calling other tools, and clarify if it should not be used for actual lead retrieval (which likely requires 'get_lead_by_url').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_companies
Search for companies by ICP filters
| Name | Required | Description | Default |
|---|---|---|---|
| compact | No | Return compact summary instead of full JSON | |
| keywords | No | Keywords | |
| max_items | No | Maximum items to include in response (local trim) | |
| headcounts | No | Headcount ranges | |
| industries | No | Industries | |
| timeout_ms | No | Request timeout in milliseconds | |
| company_types | No | Company types | |
| get_max_companies | No | Get maximum companies | |
| fallback_from_leads | No | If no companies, derive from leads by keywords | |
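An illustrative argument payload built from the table above. Whether the filter fields take single strings or arrays is not shown in the schema, so the array types and all filter values here are assumptions:

```json
{
  "keywords": "fintech",
  "headcounts": ["11-50"],
  "industries": ["Financial Services"],
  "compact": true,
  "max_items": 25,
  "fallback_from_leads": true
}
```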
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden but offers no behavioral details. It omits what happens when no results match, how the timeout interacts with the operation, what 'compact' mode returns, or whether the fallback mechanism is automatic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While not verbose, the six-word description is under-specified for a 9-parameter tool with complex interdependent logic. It front-loads the core action and wastes no words, but only because it provides almost no actionable guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for the tool's complexity. With 9 parameters, no output schema, and behavioral flags like 'fallback_from_leads', the terse description leaves critical gaps regarding the company/lead relationship, output formats, and filtering precedence.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions 'ICP filters' which loosely maps to the available filter parameters (headcounts, industries, company_types), but adds no syntax guidance, format examples, or semantic relationships between parameters (e.g., 'get_max_companies' vs 'max_items').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the action ('Search') and resource ('companies'), but uses domain jargon 'ICP filters' without explanation. It fails to differentiate from sibling 'search_leads' or clarify what constitutes an ICP filter in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus 'search_leads' or other alternatives. No mention of prerequisites, rate limits, or when the 'fallback_from_leads' option should be utilized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_leads
Search for leads by ICP filters
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return | |
| offset | No | Offset for pagination | |
| compact | No | Return compact summary instead of full JSON | |
| industry | No | Industry filter (e.g., Technology, Healthcare) | |
| location | No | Location filter (e.g., San Francisco, New York) | |
| job_title | No | Job title filter (e.g., CEO, CTO, Engineer) | |
| max_items | No | Maximum items to include in response (local trim) | |
| company_id | No | LinkedIn company id | |
| timeout_ms | No | Request timeout in milliseconds | |
| company_link | No | LinkedIn company URL | |
| company_name | No | Company name | |
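A sketch of a filtered search call, using the example values the schema itself suggests (CEO/CTO, Technology, San Francisco). The exact JSON shape is an assumption, since the raw input schema is not shown:

```json
{
  "job_title": "CTO",
  "industry": "Technology",
  "location": "San Francisco",
  "limit": 25,
  "offset": 0,
  "compact": true
}
```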
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Fails to disclose: result pagination behavior, LinkedIn data source (evident only in schema parameter names), what 'compact' mode returns versus full JSON, or that max_items performs local trimming versus server-side limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise (6 words) but misses opportunity to front-load critical constraints like 'all filters optional' or 'returns paginated results'. 'ICP' jargon reduces clarity without adding precision.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 11-parameter search tool with zero required parameters and no output schema, description lacks essential context: optional filter behavior, pagination strategy, data provenance, and result schema expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (all 11 params documented), establishing baseline 3. Description mentions 'ICP filters' abstractly but adds no semantic value beyond the schema's individual field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (search) and resource (leads) and implies filtering capability. Distinguishes from sibling get_lead_by_url (bulk/filtered vs. specific retrieval) and search_companies (leads vs. companies), though uses domain jargon 'ICP' without explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus get_lead_by_url (for specific URL lookups) or search_companies. Does not mention that all parameters are optional or suggest filter combinations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.