Glama

Server Details

B2B lead generation and company search through Generect Live API for sales prospecting.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality


Available Tools

5 tools
generate_email

Generate email by first/last name and domain via Generect Email Generator

Parameters (JSON Schema)

  Name        Required  Description
  domain      Yes       Company domain without protocol (e.g., generect.com)
  last_name   Yes       Last name of the person
  first_name  Yes       First name of the person
  timeout_ms  No        Request timeout in milliseconds
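As a minimal sketch of how these parameters combine into a call payload (the name and domain values are illustrative, and `session.call_tool` refers to the MCP Python SDK's generic client method, not anything specific to this server):

```python
# Hypothetical argument payload for the generate_email tool.
# domain, first_name, and last_name are required; timeout_ms is optional.
args = {
    "domain": "generect.com",  # company domain without protocol
    "first_name": "Jane",      # illustrative person
    "last_name": "Doe",
    "timeout_ms": 10_000,      # optional request timeout in milliseconds
}

# With an MCP client session this payload would be sent roughly as:
#   result = await session.call_tool("generate_email", arguments=args)

missing = {"domain", "first_name", "last_name"} - args.keys()
assert not missing, f"missing required parameters: {missing}"
```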
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the full disclosure burden. While it identifies the external service (Generect), it fails to clarify whether generated emails are pattern-based guesses or verified addresses, whether the operation is idempotent, what rate limits apply, or, critically for a generation tool lacking an output schema, what data structure is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is appropriately front-loaded. Minor inefficiency in 'via Generect Email Generator' (repetition of 'email') and slightly awkward preposition 'by' instead of 'from' keep it from a 5, but generally tight and readable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% input schema coverage, the input side is well handled. However, given no output schema and no annotations, the description should compensate by describing return values or success/failure modes. It mentions the external service but leaves critical gaps in explaining what the agent receives back.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema fully documents all four parameters including timeout_ms. The description reinforces the key inputs (first/last name, domain) but doesn't add semantic value beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates emails using first/last name and domain via a specific service (Generect). It distinguishes from siblings like search_leads and get_lead_by_url which retrieve existing data, while this creates new email addresses. Minor ambiguity around 'generate' (compose vs construct address) prevents a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus searching existing leads (search_leads) or retrieving verified records. No mention of prerequisites like valid domain format or when to use the optional timeout_ms parameter compared to default behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_lead_by_url

Get Lead by LinkedIn URL

Parameters (JSON Schema)

  Name                Required  Description
  url                 Yes       LinkedIn profile URL (e.g., https://www.linkedin.com/in/username/)
  posts               No        Include posts data
  comments            No        Include comments data
  timeout_ms          No        Request timeout in milliseconds
  inexact_company     No        Allow inexact company matching
  people_also_viewed  No        Include people also viewed
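Since the review below notes that URL format requirements go unstated, a hedged client-side sanity check might look like this (the regular expression is an assumption derived from the schema's example, not the server's actual validation):

```python
import re

# Assumed shape of a LinkedIn profile URL, based on the schema example
# https://www.linkedin.com/in/username/ — not taken from server docs.
LINKEDIN_PROFILE = re.compile(r"^https://(www\.)?linkedin\.com/in/[^/]+/?$")

args = {
    "url": "https://www.linkedin.com/in/username/",
    "posts": False,      # optional flag: include posts data
    "comments": False,   # optional flag: include comments data
}

# Validate before spending an API call on a malformed URL.
assert LINKEDIN_PROFILE.match(args["url"]) is not None
```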
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what data constitutes a 'Lead', default inclusion behavior for the optional data flags (posts/comments), error handling for invalid URLs, or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The four-word description is front-loaded and contains no fluff, but it is undersized for a 6-parameter tool with complex behavioral flags, leaving significant gaps that require elaboration on usage patterns.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters controlling data inclusion and no output schema, the description lacks necessary context about return structure, default behaviors for optional fields, and how the boolean flags affect the response payload.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline score of 3. The description adds no supplemental context for the boolean flags (e.g., performance implications of including posts) or timeout parameter, but the schema adequately documents each parameter's basic purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get Lead by LinkedIn URL' is tautological, essentially restating the tool name. While it identifies the action (Get) and resource (Lead), it fails to specify the scope of data returned or distinguish this direct lookup from the sibling search_leads tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this specific lookup versus the sibling search_leads tool, nor are prerequisites (like URL format requirements) or optimal use cases mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health

Health check Generect API via a quick lead-by-link request

Parameters (JSON Schema)

  Name        Required  Description
  url         No        LinkedIn profile URL to validate (defaults to a public profile)
  timeout_ms  No        Request timeout in milliseconds
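Both parameters are optional, so building the payload amounts to dropping unset values and letting server-side defaults apply. A minimal sketch (the helper below is illustrative, not part of the server):

```python
# Illustrative helper: omit unset optional parameters so the server's
# defaults (e.g., the default public profile URL) take effect.
def build_health_args(url=None, timeout_ms=None):
    candidates = {"url": url, "timeout_ms": timeout_ms}
    return {k: v for k, v in candidates.items() if v is not None}

# An empty payload is valid — the server falls back to its defaults.
assert build_health_args() == {}
assert build_health_args(timeout_ms=5000) == {"timeout_ms": 5000}
```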
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It mentions the request is 'quick' but fails to explain what constitutes success/failure, what the response contains (status code vs. lead data), or whether it consumes API quota. The mention of 'lead-by-link' creates ambiguity about side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the primary purpose. It is appropriately brief, though the phrasing 'Health check... via' is slightly awkward. No wasted words, but could benefit from a second sentence explaining the output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a health check tool with no output schema or annotations, the description inadequately explains what the tool returns (health status object vs. actual lead data) or how to interpret the results. Given the sibling tool 'get_lead_by_url', the ambiguity about whether this returns lead data constitutes a significant gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the schema adequately documents both parameters. The description adds the phrase 'lead-by-link' which loosely connects to the 'url' parameter's purpose, but doesn't add syntax details, format constraints, or usage examples beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the action ('Health check') and target ('Generect API'), and mentions the mechanism ('lead-by-link request'). However, it doesn't clearly distinguish this from the sibling tool 'get_lead_by_url', leaving ambiguity about whether this returns actual lead data or just a status check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. It should specify to use this for API connectivity verification before calling other tools, and clarify if it should not be used for actual lead retrieval (which likely requires 'get_lead_by_url').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies

Search for companies by ICP filters

Parameters (JSON Schema)

  Name                 Required  Description
  compact              No        Return compact summary instead of full JSON
  keywords             No        Keywords
  max_items            No        Maximum items to include in response (local trim)
  headcounts           No        Headcount ranges
  industries           No        Industries
  timeout_ms           No        Request timeout in milliseconds
  company_types        No        Company types
  get_max_companies    No        Get maximum companies
  fallback_from_leads  No        If no companies, derive from leads by keywords
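A hedged sketch of an ICP (Ideal Customer Profile) filter payload, plus the "local trim" behavior that max_items describes. The filter values and their list shapes are assumptions (the schema snippet above does not state types), and local_trim is purely illustrative:

```python
# Illustrative ICP filter payload; value shapes (lists vs. strings) are
# assumptions, not taken from the Generect docs.
args = {
    "keywords": ["crm", "sales"],
    "industries": ["Technology"],
    "headcounts": ["11-50", "51-200"],
    "max_items": 2,
}

def local_trim(items, max_items):
    # "Local trim" per the schema: truncate client-side after the API call.
    return items if max_items is None else items[:max_items]

companies = ["Acme Inc", "Globex", "Initech"]  # placeholder response items
assert local_trim(companies, args["max_items"]) == ["Acme Inc", "Globex"]
```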
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but offers no behavioral details. It omits what happens when no results match, how the timeout interacts with the operation, what 'compact' mode returns, or whether the fallback mechanism is automatic.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While not verbose, the six-word description is under-specified for a 9-parameter tool with complex interdependent logic. It front-loads the core action and wastes no words, but only because it provides almost no actionable guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for the tool's complexity. With 9 parameters, no output schema, and behavioral flags like 'fallback_from_leads', the terse description leaves critical gaps regarding the company/lead relationship, output formats, and filtering precedence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'ICP filters' which loosely maps to the available filter parameters (headcounts, industries, company_types), but adds no syntax guidance, format examples, or semantic relationships between parameters (e.g., 'get_max_companies' vs 'max_items').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies the action ('Search') and resource ('companies'), but uses domain jargon 'ICP filters' without explanation. It fails to differentiate from sibling 'search_leads' or clarify what constitutes an ICP filter in this context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus 'search_leads' or other alternatives. No mention of prerequisites, rate limits, or when the 'fallback_from_leads' option should be utilized.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_leads

Search for leads by ICP filters

Parameters (JSON Schema)

  Name          Required  Description
  limit         No        Number of results to return
  offset        No        Offset for pagination
  compact       No        Return compact summary instead of full JSON
  industry      No        Industry filter (e.g., Technology, Healthcare)
  location      No        Location filter (e.g., San Francisco, New York)
  job_title     No        Job title filter (e.g., CEO, CTO, Engineer)
  max_items     No        Maximum items to include in response (local trim)
  company_id    No        LinkedIn company id
  timeout_ms    No        Request timeout in milliseconds
  company_link  No        LinkedIn company URL
  company_name  No        Company name
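The limit/offset pair implies standard pagination: advance offset by limit until a short page comes back. A hedged sketch, where fetch_page stands in for the real search_leads call and the data is fabricated for illustration:

```python
# Stand-in for the real tool call; returns a slice of fake results so the
# pagination loop below can be demonstrated end to end.
def fetch_page(offset, limit):
    fake_leads = [f"lead-{i}" for i in range(7)]
    return fake_leads[offset:offset + limit]

def fetch_all(limit=3):
    leads, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        leads.extend(page)
        if len(page) < limit:  # a short page signals the final page
            break
        offset += limit
    return leads

assert fetch_all() == [f"lead-{i}" for i in range(7)]
```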
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Fails to disclose: result pagination behavior, LinkedIn data source (evident only in schema parameter names), what 'compact' mode returns versus full JSON, or that max_items performs local trimming versus server-side limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise (6 words) but misses opportunity to front-load critical constraints like 'all filters optional' or 'returns paginated results'. 'ICP' jargon reduces clarity without adding precision.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an 11-parameter search tool with zero required parameters and no output schema, description lacks essential context: optional filter behavior, pagination strategy, data provenance, and result schema expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (all 11 params documented), establishing baseline 3. Description mentions 'ICP filters' abstractly but adds no semantic value beyond the schema's individual field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (search) and resource (leads) and implies filtering capability. Distinguishes from sibling get_lead_by_url (bulk/filtered vs. specific retrieval) and search_companies (leads vs. companies), though uses domain jargon 'ICP' without explanation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus get_lead_by_url (for specific URL lookups) or search_companies. Does not mention that all parameters are optional or suggest filter combinations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

