mcp-explorium
Server Details
Access live company and contact data from Explorium's AgentSource B2B platform.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: explorium-ai/mcp-explorium
- GitHub Stars: 21
- Server Listing: Explorium AgentSource MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 12 of 12 tools scored. Lowest: 3.2/5.
Each tool has a clearly distinct purpose: autocomplete for filter values, match for ID resolution, fetch for filtered searches, enrich for data enrichment, events for event details, statistics for aggregates, and web-search for general queries. There is no ambiguity between tools.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., fetch-businesses, enrich-prospects, web-search). The naming is predictable and uniform across the entire set.
With 12 tools, the server is well-scoped for its purpose: covering business and prospect matching, searching, enrichment, events, statistics, autocomplete, and web search. No tool feels extraneous or missing.
The tool surface covers the full lifecycle: matching (match-business, match-prospects), searching with filters (fetch-businesses, fetch-prospects), enrichment (enrich-business, enrich-prospects), events (fetch-businesses-events, fetch-prospects-events), statistics (fetch-businesses-statistics, fetch-prospects-statistics), autocomplete for valid values, and web search for external info. No obvious gaps for the intended domain.
Available Tools
12 tools

autocomplete · A · Read-only · Idempotent
Provides standardized parameter values that MUST be used in fetch-entities filter operations. This tool should be called FIRST before searches that require these specific field values.
Supported Fields (call this tool to get valid values):
naics_category: NAICS industry codes
linkedin_category: LinkedIn industry categories
company_tech_stack_tech: Specific technologies
job_title: Job titles
Fields NOT Requiring Autocomplete: These fields have fixed enum values or use standard codes directly:
country_code: Use ISO Alpha-2 codes (e.g., "US", "IL")
company_country_code: Use ISO Alpha-2 codes (e.g., "US", "IL")
region_country_code: Use ISO 3166-2 codes (e.g., "US-NY", "IL-TA")
company_region_country_code: Use ISO 3166-2 codes (e.g., "US-NY", "IL-TA")
website_keywords: Use free-text keywords directly
Enum fields: See available values in fetch-entities description
Technical Requirement: Search operations using autocomplete-required filters will fail without valid values from this tool first.
Search Tips:
For SaaS companies: Use the keyword 'software'
Returns: List of valid, standardized values that must be used in search filter parameters
| Name | Required | Description | Default |
|---|---|---|---|
| field | Yes | The field to autocomplete. Use only fields listed here. Never use autocomplete for a field not included in this list. If a field is not listed, it either has a fixed set of allowed values (e.g., `NumberOfEmployeesRange`), or should be used directly as-is with no autocomplete. | |
| query | Yes | The query to autocomplete | |
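A minimal request sketch for this tool, using the two parameters from the table above; the field name comes from the supported-fields list, while the query string itself is hypothetical:

{
  "field": "job_title",
  "query": "chief technology officer"
}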
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, openWorld, idempotent, non-destructive. Description adds that it's a prerequisite and returns valid values, but doesn't clarify if the returned list is exhaustive or paginated, which is relevant given the openWorldHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with bullet points and sections, front-loaded with primary purpose. A few redundant phrases like 'standardized parameter values' repeated, but overall every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no output schema, description covers return type (list of values), prerequisite nature, and field constraints. Lacks mention of potential empty results or error handling, but sufficient given annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions for both parameters. The description elaborates on field semantics (e.g., NAICS industry codes) and provides search tips, going beyond schema to clarify usage context for each parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides standardized parameter values for fetch-entities filter operations, specifying it must be called first. Lists supported fields and distinguishes from sibling fetch/enrich tools, which are searches or enrichment, not parameter lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (before fetch operations on certain fields) and when not to use (fields with fixed values or free-text). Also notes technical failure if not used, providing strong usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
enrich-business · A
Enriches business data using parallel enrichment calls. Requires business IDs as input.
Input Requirements:
business_ids: Array of Explorium business IDs (from match-business or fetch-businesses)
enrichments: enrichment types to retrieve
parameters: Optional settings for specific enrichments
IMPORTANT: For comprehensive company information requests, include ALL relevant enrichments in a single call (e.g., for "tell me about this company" use multiple enrichments: firmographics, technographics, funding-and-acquisitions, workforce-trends, linkedin-posts to provide complete intelligence)
Available Enrichments:
firmographics: Basic company info (name, description, website, location, industry, size, revenue)
technographics: Complete technology stack used by the business
company-ratings: Employee satisfaction and company culture ratings
financial-metrics: Financial data for public companies only (requires date parameter in ISO format: "2024-01-01T00:00")
funding-and-acquisitions: Funding history, investors, IPO, acquisitions
challenges: Business challenges and risks from SEC filings
competitive-landscape: Market position and competitors from SEC filings
strategic-insights: Strategic focus and value propositions from SEC filings
workforce-trends: Department composition and hiring trends
linkedin-posts: Company LinkedIn posts and engagement metrics
website-changes: Website content changes over time
website-keywords: Search for specific keywords on company websites (requires keywords parameter: array of terms - comma-separated terms within strings enable AND logic)
webstack: Website-specific technologies and web infrastructure components detected on company websites
company-hierarchies: Corporate hierarchy including parent company, ultimate parent, subsidiaries, and full organization tree (JSON format)
Parameters:
date: ISO 8601 format for financial-metrics (e.g., "2024-01-01T00:00")
keywords: Array of search terms for website-keywords
Alternative Tools:
For finding specific employees or company leadership information, use fetch-prospects with job_level filters (e.g., "c-suite" for c-level) instead of firmographics or strategic-insights enrichments
Returns: Combined structured data from all requested enrichments
Example:
{
"business_ids": ["biz_123", "biz_456"],
"enrichments": ["firmographics", "technographics", "funding-and-acquisitions"],
"parameters": null
}

| Name | Required | Description | Default |
|---|---|---|---|
| parameters | No | | |
| enrichments | Yes | List of enrichment types to run in parallel | |
| business_ids | Yes | List of Explorium business IDs from match_businesses | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (non-readOnly, non-destructive, openWorld) are present. Description adds context that returns combined structured data and requires business IDs from match-business/fetch-businesses, but does not elaborate on potential side effects or permissions, though annotations already cover the safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections, bullet points, and bold text. However, it is somewhat lengthy; some details (e.g., enrichment list) could be more concise, but overall efficiently informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and description only vaguely states 'Returns: Combined structured data.' For a tool with nested objects and multiple enrichment types, more detail on the return format or example output would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description explains each parameter: business_ids come from specific tools, enrichments are listed with definitions, and parameters object has sub-parameters (date, keywords) with format and usage details. This adds significant meaning beyond the schema, especially for the enrichments enum.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool enriches business data using business IDs and enrichment types. It lists specific enrichments and provides an example input, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit 'Alternative Tools' section advising when to use fetch-prospects instead. Also provides an 'IMPORTANT' note to combine enrichments for comprehensive requests, offering clear context on when to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
enrich-prospects · A
Enriches prospect data using up to 3 parallel enrichment calls. Requires prospect IDs as input.
Input Requirements:
prospect_ids: Array of Explorium prospect IDs (from match-prospects or fetch-prospects)
enrichments: 1-3 enrichment types to retrieve
IMPORTANT: For comprehensive prospect information requests, include ALL relevant enrichments in a single call (e.g., for "tell me about this person" use multiple enrichments: profiles, contacts, linkedin-posts to provide complete intelligence)
Available Enrichments:
contacts: Professional and personal email addresses, phone numbers
linkedin-posts: Post content, engagement metrics, creation dates
profiles: Full name, demographics, location, role details, work experience, education
Returns: Combined structured data from all requested enrichments
Example:
{
"prospect_ids": ["pro_123", "pro_456"],
"enrichments": ["contacts", "profiles"]
}

| Name | Required | Description | Default |
|---|---|---|---|
| enrichments | Yes | List of enrichment types to run in parallel | |
| prospect_ids | Yes | List of Explorium prospect IDs from match_prospects or fetch_prospects | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide safety profile (not read-only, not idempotent, not destructive). Description adds that it uses up to 3 parallel calls, but no further behavioral details (e.g., rate limits, mutation specifics). Acceptable given annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections and example, but includes redundancy (e.g., repeats input requirements) and has a contradiction between description and schema about enrichment limit. Could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema provided; description only says 'Combined structured data' without specifics. Lacks details on error handling, prerequisites beyond input IDs, or pagination/limitations. For a tool with 2 parameters, more completeness is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds context by stating provenance of prospect_ids (from match-prospects or fetch-prospects) and detailing enrichment options with descriptions. However, there is a contradiction: description says 'up to 3' while schema allows maxItems 10.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'enriches' and resource 'prospect data' with input prospect IDs. Distinguishes from siblings like fetch-prospects and match-prospects by focusing on enrichment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance to include all relevant enrichments in a single call for comprehensive information. However, does not discuss when to avoid using this tool or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch-businesses · A
Searches for businesses using filter criteria. Supports combining multiple filters in a single request for optimal performance.
Filter Types:
Enum-Based Filters (use values directly from the lists below):
company_revenue: List[CompanyRevenue] - Revenue range in USD. Available values: "0-500K", "500K-1M", "1M-5M", "5M-10M", "10M-25M", "25M-75M", "75M-200M", "200M-500M", "500M-1B", "1B-10B", "10B-100B", "100B-1T", "1T-10T", "10T+"
company_age: List[CompanyAge] - Company age in years. Available values: "0-3", "3-6", "6-10", "10-20", "20+"
company_size: List[NumberOfEmployeesRange] - Employee count range. Available values: "1-10", "11-50", "51-200", "201-500", "501-1000", "1001-5000", "5001-10000", "10001+"
number_of_locations: List[NumberOfLocations] - Number of offices. Available values: "0-1", "2-5", "6-20", "21-50", "51-100", "101-1000", "1001+"
company_tech_stack_category: List[WebTechCategory] - Broader technology categories. Available values: "Healthcare And Life Science", "Technology", "Marketing", "E-commerce", "Devops And Development", "Programming Languages And Frameworks", "Testing And Qa", "Platform And Storage", "Health Tech", "Business Intelligence And Analytics", "Operations Management", "Customer Management", "Hr", "Industrial Engineering And Manufacturing", "Product And Design", "Sales", "It Security", "It Management", "Other", "Operations Software", "Finance And Accounting", "Computer Networks", "Collaboration", "Communications", "Business", "Productivity And Operations"
Autocomplete-Required Filters (standardized values MUST be obtained from autocomplete tool FIRST):
linkedin_category: LinkedIn industry categories - REQUIRES autocomplete
company_tech_stack_tech: Specific technologies - REQUIRES autocomplete
naics_category: NAICS industry codes - REQUIRES autocomplete
business_intent_topics: List[str] - Intent topic strings (e.g., "Security:Cloud Security", "HR:Employee Benefits"). Max 20 topics.
city_region: City regions (USA only) - REQUIRES autocomplete
Event-Based Filters (use enum values directly, no autocomplete needed):
events: Object with "values" (array of event types from fixed enum) and "last_occurrence" (days: 30-90). This filter identifies businesses that have experienced specific events within a time window.
After fetching businesses with event filters, use fetch-businesses-events to get detailed event information
BUYING INTENT (CRITICAL FOR SALES PROSPECTING)
When user wants to find potential customers/prospects for what they're selling
Use business_intent_topics filter (requires autocomplete)
Example keywords: "selling to", "need", "looking for", "interested in buying", "prospects for"
Boolean Filters (true/false/null):
has_website: boolean or null - Set to true for businesses with websites, false for businesses without websites, or null to include all
is_public_company: boolean or null - Set to true for publicly traded companies, false for private companies, or null to include all
Direct-Use Filters (use standard codes directly, no autocomplete needed):
country_code: ISO Alpha-2 country codes (e.g., "US", "IL")
region_country_code: ISO 3166-2 region codes (e.g., "US-NY", "IL-TA")
Technical Constraints:
linkedin_category and naics_category are mutually exclusive
region_country_code and country_code are mutually exclusive
CRITICAL: Search requests using linkedin_category, company_tech_stack_tech, or naics_category filters will fail without standardized values obtained first from the autocomplete tool
Returns: Business records including business IDs (usable with enrich-business or fetch-businesses-events)
Performance Note: All applicable filters can be combined in a single request
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to return | |
| size | No | The number of businesses to return | |
| filters | Yes | | |
| page_size | No | The number of businesses to return per page - recommended: 5 | |
| tool_reasoning | No | The original user query that prompted this workflow, in EXACT WORDS. Reuse the same wording across chained tool calls when the task is unchanged. Do not replace it with a per-step rationale or unrelated PII. | not provided |
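A hypothetical request sketch, assuming the filters parameter nests as a flat object keyed by the filter names listed above (the exact sub-schema is not reproduced here):

{
  "filters": {
    "company_size": ["51-200", "201-500"],
    "country_code": ["US"],
    "has_website": true
  },
  "page_size": 5
}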
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations set readOnlyHint=false (allowing mutations) but description only discusses reading/searching. It adds useful behavioral constraints like search failure without autocomplete and mutual exclusivity of filters, but does not disclose potential side effects, rate limits, or authorization needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with headings and bullet points, but verbose due to exhaustive listing of enum values and repeating some schema descriptions. Could be more concise while retaining clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of filters and no output schema, the description covers return types (business IDs for further enrichment), critical constraints, and performance notes. Could add pagination behavior but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Though schema coverage is high (80%), the description adds significant value by grouping filters into types (Enum-Based, Autocomplete-Required) and clarifying dependency on autocomplete for several parameters. It also explains the events filter structure beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Searches for businesses using filter criteria' and distinguishes it from siblings like enrich-business or fetch-businesses-events by focusing on search and filtering with specific filter types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides extensive usage guidance, including when to use autocomplete-required filters (e.g., 'MUST be obtained from autocomplete tool FIRST') and special cases like 'BUYING INTENT' for sales prospecting. Does not explicitly compare to sibling tools or state when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch-businesses-events · A
Retrieves detailed business-related events from the Explorium API in bulk.
Use Cases:
Get detailed event information after filtering businesses using the events filter in fetch-businesses
Research a company's complete event history with specific event types and timestamps
Analyze timing and details of funding rounds, partnerships, office changes, etc.
Workflow:
Use fetch-businesses with events filter to find businesses that experienced specific events
Use this tool (fetch-businesses-events) to get detailed event information for those businesses
Note: For events related to role changes or people movements, use the prospects events tool instead.
| Name | Required | Description | Default |
|---|---|---|---|
| event_types | Yes | | |
| business_ids | Yes | | |
| timestamp_from | Yes | ISO format datetime string or date in format YYYY-MM-DD | |
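A hypothetical call sketch. The business IDs would come from match-business or fetch-businesses; the event type string is illustrative, since the event enum is not listed above:

{
  "business_ids": ["biz_123", "biz_456"],
  "event_types": ["funding_round"],
  "timestamp_from": "2024-01-01"
}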
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims the tool 'retrieves' data, implying it is read-only, but the annotation readOnlyHint is false, indicating potential side effects. This contradicts the description and misleads the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a main sentence, bullet points for use cases, and a numbered workflow. It is concise and front-loaded, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's place in the workflow and use cases, but it does not mention pagination, rate limits, or the structure of the response (no output schema). This leaves some gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 33% schema description coverage, the description adds context by explaining that business_ids come from fetch-businesses results and that timestamp_from is a date filter. However, it does not elaborate on parameter formats or additional details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Retrieves detailed business-related events from the Explorium API in bulk.' It uses a specific verb and resource, and distinguishes from sibling tools like fetch-businesses (which is for filtering) and prospects events tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly outlines when to use this tool: after using fetch-businesses with events filter, and it warns to use prospects events tool for role changes. It also provides a step-by-step workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch-businesses-statistics · B · Read-only · Idempotent
Fetch aggregated insights into businesses by industry, revenue, employee count, and geographic distribution.
| Name | Required | Description | Default |
|---|---|---|---|
| filters | Yes | | |
| tool_reasoning | No | The original user query that prompted this workflow, in EXACT WORDS. Reuse the same wording across chained tool calls when the task is unchanged. Do not replace it with a per-step rationale or unrelated PII. | not provided |
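A minimal request sketch, assuming the filters object takes the same shape as in fetch-businesses (an assumption; the filters sub-schema is not reproduced above):

{
  "filters": {
    "company_size": ["1001-5000"],
    "country_code": ["US"]
  }
}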
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, clearly indicating a safe, non-destructive operation. The description adds minimal behavioral context beyond stating it is aggregating insights, which aligns with the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the core purpose efficiently. It is front-loaded and contains no redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex 'filters' parameter with many sub-properties and no output schema, the description is too brief. It does not explain the required nature of filters, how to use them effectively, or what the returned aggregated insights look like. The annotations cover safety but not functional completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%, meaning half of the sub-properties may lack descriptions. The tool-level description does not compensate by explaining parameter meanings; it only mentions the aggregated dimensions. For the undocumented parameters, the agent has no additional guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch', the resource 'aggregated insights', and the dimensions of aggregation (industry, revenue, employee count, geographic distribution). It effectively distinguishes this tool from siblings like 'fetch-businesses' (raw data) and 'fetch-businesses-events'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'fetch-businesses' or 'fetch-businesses-statistics' for prospects. It does not mention prerequisites, limitations, or that filters are required, leaving the agent to infer usage from the name and schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch-prospects · A
Searches for prospects (employees) using detailed filter criteria. Supports combining multiple filters in a single request for optimal performance.
Use Cases: Finding people in specific roles at companies:
"Who is the CTO at Google?"
"Find engineers at Microsoft"
"Show me sales directors at Apple"
Data Characteristics:
Provides verified, structured professional data
Returns prospects with detailed role information
Filter Types:
Enum-Based Filters (use values directly from the lists below):
job_level: List[JobLevel] - Seniority level of the job. Available values: "c-suite", "manager", "owner", "senior non-managerial", "partner", "freelancer", "junior", "director", "board member", "founder", "president", "senior manager", "advisor", "non-managerial", "vice president"
job_department: List[JobDepartment] - Department or function in the organization. Available values: "administration", "healthcare", "partnerships", "c-suite", "design", "human resources", "engineering", "education", "strategy", "product", "sales", "r&d", "retail", "customer success", "security", "public service", "creative", "it", "support", "marketing", "trade", "legal", "operations", "real estate", "procurement", "data", "manufacturing", "logistics", "finance"
company_size: List[NumberOfEmployeesRange] - Employee count range. Available values: "1-10", "11-50", "51-200", "201-500", "501-1000", "1001-5000", "5001-10000", "10001+"
company_revenue: List[CompanyRevenue] - Revenue range in USD. Available values: "0-500K", "500K-1M", "1M-5M", "5M-10M", "10M-25M", "25M-75M", "75M-200M", "200M-500M", "500M-1B", "1B-10B", "10B-100B", "100B-1T", "1T-10T", "10T+"
total_experience_months: RangeInt - Experience in months (e.g., {"gte": 12, "lte": 120})
current_role_months: RangeInt - Months in current role (e.g., {"gte": 6, "lte": 24})
Boolean Filters (true/false/null):
has_email: literal true or null - Filter to include only prospects with verified email addresses
has_phone_number: literal true or null - Filter to include only prospects with available phone numbers
has_website: boolean or null - Set to true for prospects from companies with websites, false for prospects from companies without websites, or null to include all
Autocomplete-Required Filters (standardized values MUST be obtained from autocomplete tool FIRST):
linkedin_category: LinkedIn industry categories - REQUIRES autocomplete
job_title: Job titles - REQUIRES autocomplete
naics_category: NAICS industry codes - REQUIRES autocomplete
business_id: Business IDs - REQUIRES match-business or fetch-businesses
Direct-Use Filters (use standard codes directly, no autocomplete needed):
country_code: ISO Alpha-2 country codes (e.g., "US", "IL")
company_country_code: ISO Alpha-2 country codes (e.g., "US", "IL")
region_country_code: ISO 3166-2 region codes (e.g., "US-NY", "IL-TA")
company_region_country_code: ISO 3166-2 region codes (e.g., "US-NY", "IL-TA")
Technical Constraints:
linkedin_category and naics_category are mutually exclusive
region_country_code and country_code are mutually exclusive
company_region_country_code and company_country_code are mutually exclusive
CRITICAL: Search requests using linkedin_category, job_title, naics_category, or business_id filters will fail without standardized values obtained first from their respective tools
Returns: Prospect records including prospect IDs (usable with enrich-prospects or fetch-prospects-events)
Performance Note: All applicable filters can be combined in a single request
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | The page number to return | |
| size | No | The number of prospects to return | |
| filters | Yes | | |
| page_size | No | The number of prospects to return per page - recommended: 5 | |
| tool_reasoning | No | The original user query that prompted this workflow, in EXACT WORDS. Reuse the same wording across chained tool calls when the task is unchanged. Do not replace it with a per-step rationale or unrelated PII. | not provided |
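A hypothetical request sketch, assuming the filters parameter nests as a flat object keyed by the filter names listed above:

{
  "filters": {
    "job_level": ["c-suite"],
    "company_size": ["10001+"],
    "has_email": true
  },
  "page_size": 5
}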
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states it returns verified, structured professional data and notes performance optimization, but does not disclose behavioral traits beyond annotations (e.g., no mention of side effects, rate limits, or that openWorldHint=true implies outside data may be included). Annotations already indicate readOnlyHint=false and openWorldHint=true, but the description adds no clarification.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-organized with sections for use cases, filter types, technical constraints, and returns, and is front-loaded with purpose. However, it is verbose and includes some redundancy (e.g., listing enum values already in schema). Structured readability is good, but could be trimmed without losing essential info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains return value (prospect records with IDs usable by other tools). It covers all filter types, constraints, and performance notes. For a tool with many parameters and complex dependencies, completeness is high, though a note on pagination or result count limits would add value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 80% schema coverage, the description adds significant meaning by categorizing filters (enum-based, boolean, autocomplete-required, direct-use), explaining mutual exclusions, and detailing dependencies on autocomplete tools. This goes beyond the schema's property descriptions, making it easier for agents to construct valid requests.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for prospects using detailed filter criteria, with concrete use cases like 'Who is the CTO at Google?' It distinguishes itself from siblings by mentioning that returned prospect IDs can be used with enrich-prospects or fetch-prospects-events, but does not explicitly differentiate from match-prospects or other search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool (e.g., combining filters for performance) and technical constraints like mutual exclusions and autocomplete requirements. However, it does not provide 'when not to use' guidance or compare directly to sibling tools like match-prospects or fetch-businesses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch-prospects-events · B
Retrieves prospect-related events from the Explorium API in bulk. Use this when querying for prospect-related events about businesses. Example workflow: fetch-businesses > fetch-prospects > fetch-prospects-events.
| Name | Required | Description | Default |
|---|---|---|---|
| event_types | Yes | List of event types to fetch | |
| prospect_ids | Yes | | |
| timestamp_from | Yes | ISO format datetime string or date in format YYYY-MM-DD | |
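A hypothetical call sketch. The prospect IDs would come from match-prospects or fetch-prospects; the event type value is illustrative, since the prospect event enum is not listed above:

{
  "prospect_ids": ["pro_123", "pro_456"],
  "event_types": ["role_change"],
  "timestamp_from": "2024-01-01"
}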
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description claims 'Retrieves' (read-only), but annotations set readOnlyHint=false, implying possible side effects. This is a contradiction. The description adds no behavioral details beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, no wasted words. The purpose and workflow are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description does not explain return format, edge cases, or error handling. The contradiction with annotations also harms completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67% with descriptions for event_types and timestamp_from, but prospect_ids lacks description. The description adds no parameter meaning beyond the schema, meeting baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves prospect-related events from the Explorium API in bulk, but does not explicitly differentiate from siblings like fetch-businesses-events or fetch-prospects-statistics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear workflow example ('Fetch businesses > Fetch prospects > Fetch prospects events') indicating when to use it, though it lacks explicit when-not-to-use guidance or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch-prospects-statistics · B · Read-only · Idempotent
Fetch aggregated insights into prospects by job department and geographic distribution.
Output Structure: Returns statistics with:
total_results: Total number of prospects matching the filters
job_department_breakdown: Count by department (Engineering, Sales, Marketing, etc.)
country_breakdown: Count by country location
| Name | Required | Description | Default |
|---|---|---|---|
| filters | Yes | | |
| tool_reasoning | No | The original user query that prompted this workflow, in EXACT WORDS. Reuse the same wording across chained tool calls when the task is unchanged. Do not replace it with a per-step rationale or unrelated PII. | not provided |
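A minimal request sketch, assuming the filters object takes the same shape as in fetch-prospects (an assumption; the filters sub-schema is not reproduced above):

{
  "filters": {
    "job_department": ["engineering"],
    "company_country_code": ["US"]
  }
}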
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, idempotent, non-destructive behavior. The description adds output structure details (total_results, job_department_breakdown, country_breakdown) which provides some behavioral context but does not cover other traits like pagination or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences and a bullet list, but it lacks mention of the required 'filters' parameter, making it less helpful. It is front-loaded but could be more complete without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex input schema with many nested filters and no output schema, the description is incomplete. It does not explain that a filters object is required or that many filtering dimensions are available, leaving the agent to rely entirely on the schema for usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high (all filter parameters have descriptions), so the baseline is 3. The description does not add any parameter information beyond what the schema provides, merely mentioning the two breakdown dimensions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches aggregated insights by job department and geographic distribution, distinguishing it from sibling tools like fetch-prospects which returns individual prospects. The output structure further clarifies the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives (e.g., for individual prospects use fetch-prospects). The description assumes the agent will infer usage from context but provides no direct or indirect usage hints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
match-business · A · Read-only · Idempotent
Retrieves Explorium business IDs that are REQUIRED as input parameters for all business enrichment operations (enrich-business, fetch-businesses-events).
Input Options:
Name only: {"name": "Google"}
Domain only: {"domain": "microsoft.com"}
Both (recommended for accuracy): {"name": "Amazon", "domain": "amazon.com"}
Returns: Business IDs that MUST be provided to enrichment tools
Technical Requirements:
enrich-business REQUIRES business IDs from this tool or fetch-businesses
fetch-businesses-events REQUIRES business IDs from this tool or fetch-businesses
Business IDs cannot be used interchangeably with prospect IDs
Use Cases: Questions about specific companies:
Company information (size, revenue, industry, location)
Executive teams or employee data
Technology stack analysis
Funding history or investors
Company events or changes
Workforce trends and hiring
Contact information for anyone at a company
Competitive analysis or market positioning
Example Queries:
"What is [Company]'s revenue?"
"Who is the CEO/CTO/CMO of [Company]?"
"What technologies does [Company] use?"
"How many employees does [Company] have?"
"What is [Company]'s funding history?"
"Find me contacts at [Company]"
Data Characteristics:
Provides verified, structured B2B data
Returns accurate firmographics
Enables access to comprehensive company intelligence
Delivers real-time employee data and contact information
Note: fetch-businesses results already include business IDs, so this tool is not needed after fetch-businesses
| Name | Required | Description | Default |
|---|---|---|---|
| tool_reasoning | No | The original user query that prompted this workflow, in EXACT WORDS. Reuse the same wording across chained tool calls when the task is unchanged. Do not replace it with a per-step rationale or unrelated PII. | not provided |
| businesses_to_match | Yes | | |
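A hypothetical full-request sketch, assuming businesses_to_match is an array of the name/domain objects shown under Input Options:

{
  "businesses_to_match": [
    { "name": "Amazon", "domain": "amazon.com" }
  ]
}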
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly=true, idempotent=true, etc. The description adds valuable context beyond annotations: business IDs are required for enrichment, cannot be used interchangeably with prospect IDs, and lists data characteristics (verified, structured B2B data). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively long but well-organized with labeled sections (Input Options, Returns, Technical Requirements, etc.). Each section serves a clear purpose. It is not overly verbose given the amount of useful context, but could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (2 parameters, no output schema), the description covers input options, technical dependencies on enrichment tools, use cases, and example queries. It explains that the return value is business IDs, compensating for the lack of an output schema. It is complete enough for an AI agent to understand usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 2 parameters with 50% description coverage (tool_reasoning and businesses_to_match with item property descriptions). The description adds meaning by showing example JSON inputs for name only, domain only, and both, and explains that the output (business IDs) are the key result. This compensates for the lack of full schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves business IDs required as input for business enrichment operations (enrich-business, fetch-businesses-events). It distinguishes from siblings by noting that fetch-businesses also returns business IDs, so this tool may be unnecessary after that. The specific verb 'retrieves' and resource 'Explorium business IDs' make the purpose precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool (to get business IDs for enrichment) and when not to (not needed after fetch-businesses). It provides input options (name only, domain only, or both) and recommends both for accuracy. It also includes extensive use cases and example queries, giving clear context for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
match-prospects · A · Read-only · Idempotent
Retrieves Explorium prospect IDs that are REQUIRED as input parameters for all prospect enrichment operations (enrich-prospects, fetch-prospects-events).
Input Requirements:
Email OR (full name + company name)
Optional: phone number, LinkedIn URL, business ID
Returns: Prospect IDs that MUST be provided to enrichment tools
Technical Requirements:
enrich-prospects REQUIRES prospect IDs from this tool or fetch-entities
fetch-prospects-events REQUIRES prospect IDs from this tool or fetch-entities
Prospect IDs cannot be used interchangeably with business IDs
Use Cases: Questions about specific people:
"Who is [Name] at [Company]?"
"Get me [Person's] contact information"
"Tell me about [Specific Person]"
Professional background of named individuals
Contact details, work history, social profiles of specific people
Data Characteristics:
Provides verified, current professional data
Returns structured contact info (emails, phones)
Gives comprehensive B2B intelligence
Note: fetch-entities results already include prospect IDs, so this tool is not needed after fetch-entities
| Name | Required | Description | Default |
|---|---|---|---|
| tool_reasoning | No | The original user query that prompted this workflow, in EXACT WORDS. Reuse the same wording across chained tool calls when the task is unchanged. Do not replace it with a per-step rationale or unrelated PII. | not provided |
| prospects_to_match | Yes | | |
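A hypothetical request sketch; the property names inside each match object (full_name, company_name) are assumptions based on the input requirements listed above, not confirmed schema fields:

{
  "prospects_to_match": [
    { "full_name": "Jane Doe", "company_name": "Acme Corp" }
  ]
}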
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Consistent with annotations (readOnlyHint, idempotentHint, etc.). Adds details about data characteristics (verified, current, structured) and input requirements beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections and bullet points, clear and informative. Slightly verbose but every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description fully explains what is returned (prospect IDs), input requirements, use cases, and relationships with other tools. Complete for a retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already has descriptions for both parameters. Description complements by clarifying the purpose of the output and use cases, but does not add significant new parameter-level detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves prospect IDs required for enrichment operations. Distinguishes from sibling tools like enrich-prospects and fetch-prospects-events by specifying its role as a prerequisite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (before enrichment operations) and when not needed (after fetch-entities). Provides specific use cases and clarifies that prospect IDs are not interchangeable with business IDs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
web-search · A · Read-only · Idempotent
Perform web search using Explorium Search capabilities.
Use this tool for:
General web searches and current information
News articles and press releases
Industry trends and market research
Public information not available in Explorium's business intelligence data
Recent events and developments
General research queries
IMPORTANT: For company-specific or people-specific queries, prefer using the dedicated Explorium tools first:
For company information: use 'match-business' and business enrichment tools
For people information: use 'match-prospects' and prospect enrichment tools
For a job-title-based search within a company: use fetch-prospects
Only use web search when you need general web information or when specific business tools don't have the data
Returns:
Search results with titles, URLs, and snippets
Response metadata
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The search query string | |
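A minimal request sketch using the single required parameter (the query string itself is illustrative):

{
  "query": "recent funding news in the B2B data industry"
}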
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, and idempotentHint. The description adds value by specifying the return format (titles, URLs, snippets) and that it covers public information not in Explorium's BI data. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections and bullet points, front-loaded with the core purpose. Every sentence adds unique information, and there is no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single required parameter, no output schema, and thorough annotations, the description fully covers the tool's behavior, return values, and usage context. It is complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a description for 'query' that already states it's a search query string. The description adds contextual guidance on query types but doesn't provide format, examples, or constraints beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Perform web search using Explorium Search capabilities' with a specific verb and resource. It lists concrete use cases like general web searches and news, distinguishing it from sibling tools like match-business or enrich-prospects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use (general web searches) and when-not-to-use (company/people queries) with named alternatives such as match-business, match-prospects, and fetch-prospects. This gives an agent clear decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.