Genome Industrial Intelligence
Server Details
Industrial intelligence for AI agents. Conviction scores and diligence for 10,000+ industrials.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.5/5 across all 9 tools.
Each tool targets a distinct type of information (natural language queries, company diligence, macro regime, investment signals, etc.) with no functional overlap, making it easy for an agent to select the correct tool.
Eight of nine tools follow a consistent 'get_' prefix for retrieving data, while 'ask_genome' uses a different verb ('ask'). This minor deviation is still clear but breaks uniformity.
A set of nine tools is well-scoped for an industrial intelligence server, covering key areas without being overwhelming or sparse.
The tool set comprehensively covers the domain: macro context, company fundamentals, supply chain, signals, screening, and natural language queries. No obvious gaps for a read-only intelligence service.
Available Tools
9 tools

ask_genome (Grade: A)
Ask a natural language question and get an AI-synthesized answer grounded in live Genome data. Examples: 'Which industrials are best positioned right now?', 'What is HON's signal and why?', 'Is the macro regime favorable for HVAC names?'
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question in plain English | |
| context_tickers | No | Optional list of tickers to include as context | |
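For illustration, MCP tools are invoked via JSON-RPC 2.0 tools/call requests. A minimal sketch of a call to this tool, reusing the example question and ticker from the description above (the id value is arbitrary):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_genome",
    "arguments": {
      "question": "Is the macro regime favorable for HVAC names?",
      "context_tickers": ["HON"]
    }
  }
}

The tool examples below show only the params payload; the surrounding JSON-RPC envelope is identical in each case.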
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description notes 'AI-synthesized' and 'grounded in live Genome data,' which conveys generative behavior and real-time grounding. With no annotations, it partially discloses behavioral traits but lacks specifics on latency, potential hallucination, or reliability.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus three examples, with no wasted words. It front-loads the core purpose and immediately provides illustrative queries.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no output schema and no annotations, the description should hint at the return format. It says 'AI-synthesized answer' but doesn't specify if it returns text, structured data, or includes citations. Sibling tools likely have structured outputs, so this gap matters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds no meaning beyond the schema's parameter descriptions; it does not elaborate on constraints or formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts natural language questions and returns AI-synthesized answers grounded in Genome data. The examples illustrate the scope, and the verb 'ask' distinguishes it from sibling tools that retrieve specific data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The examples provide clear use cases (open-ended questions about industrials, signals, macro regimes), implying it is for synthesis rather than simple lookups. However, it does not explicitly say when not to use it or contrast with siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_diligence (Grade: A)
Get a full industrial operational diligence brief for a company. Returns 6-section structured analysis covering operational genome, leadership, supply chain, financial signals, risk flags, and diligence priority questions.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker (e.g. HON, CAT, ROK) | |
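As a sketch, the params payload for a diligence call might look like the following, using one of the schema's example tickers:

{
  "name": "get_diligence",
  "arguments": { "ticker": "CAT" }
}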
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose behavioral traits such as read-only nature, data freshness, or any side effects. It only describes the output structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loads the purpose, and efficiently lists the output sections without extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one parameter and no output schema, the description adequately covers purpose and output structure. It could mention potential limitations or data scope but is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a clear description for 'ticker'. The tool description adds no meaning beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'full industrial operational diligence brief' and lists the 6 specific sections, making its purpose distinct from siblings like 'get_ecl' or 'get_signal'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a comprehensive diligence brief is needed but does not provide explicit guidance on when to use this tool versus its siblings, nor any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ecl (Grade: A)
Get the current External Constraint Layer (ECL) macro regime and all 7 module scores: CPI (commodity), EPI (energy), LPI (labor), DCSI (demand/channel), DLPS (decision latency), ROC (regime multiplier), DCS (data confidence).
| Name | Required | Description | Default |
|---|---|---|---|
| industry | No | Optional industry for sector-specific ECL weights | |
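A sketch of the params payload with the optional industry filter applied; the 'hvac' value is borrowed from a sibling tool's examples and is an assumption, not a documented value for this parameter:

{
  "name": "get_ecl",
  "arguments": { "industry": "hvac" }
}

Omitting "industry" (an empty arguments object) presumably returns the default, non-sector-specific weights.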
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It only lists what is returned without any details on data freshness, computational cost, or side effects. For a simple getter, this is minimal and lacks richer context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that immediately states the purpose and lists all components. It is concise with no wasted words, earning a high score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description enumerates all 7 module scores returned, providing sufficient detail for a simple getter. The single optional parameter is explained. This covers the essential context for the tool's use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%: the only parameter, 'industry', is described. The tool description merely restates that it is optional and used for sector-specific weights, adding no meaning beyond the schema, so the score stays at baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'current External Constraint Layer (ECL) macro regime and all 7 module scores', listing each module. It is specific and distinguishes from sibling tools like 'get_regime' by being explicitly about ECL and its components.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving ECL scores but does not provide explicit guidance on when to use this tool over alternatives like 'get_regime'. No when-not-to-use or alternative conditions are mentioned, only that the industry parameter is optional.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_horizons (Grade: B)
Get 3M, 6M, and 12M multi-horizon investment signals for a ticker with divergence patterns.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker | |
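A sketch of the params payload; the ticker is illustrative, taken from examples elsewhere on this page:

{
  "name": "get_horizons",
  "arguments": { "ticker": "GE" }
}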
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose all behavioral traits. It only states the tool retrieves signals (implying read-only), but does not mention data freshness, pagination, rate limits, or whether it requires authentication. For a simple read tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys all key information without redundancy. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter and no output schema, the description is minimal. It leaves out what 'signals' or 'divergence patterns' specifically are, and does not hint at return format or structure. For a financial tool, additional context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'ticker' is described in schema as 'Stock ticker'. The description adds value by specifying the output includes 3M, 6M, 12M horizons and divergence patterns, enriching the parameter's meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns multi-horizon investment signals (3M, 6M, 12M) with divergence patterns, using a specific verb 'get' and resource 'signals'. It distinguishes from siblings like 'get_signal' by specifying horizons, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like 'get_signal' or 'get_screener'. The description lacks context for prerequisites, exclusions, or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pfsl (Grade: A)
Get Pre-Financial Signal Layer (PFSL) scores from SEC EDGAR financials: operational stress, demand signals, capex activity, narrative shift, labor/talent, and Reddit sentiment blend.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker | |
| industry | No | Optional industry hint | |
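A sketch of the params payload; 'ROK' comes from get_diligence's examples and the 'auto' industry hint from get_signal's, so treat both values as illustrative:

{
  "name": "get_pfsl",
  "arguments": { "ticker": "ROK", "industry": "auto" }
}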
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It names its data sources (SEC EDGAR filings and a Reddit sentiment blend) but does not explicitly state read-only nature, auth needs, or side effects, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that front-loads the purpose and lists components efficiently; slightly dense, but with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema or annotations, the description lists components but lacks details on return format, prerequisites, or pagination, making it moderately incomplete for a complex multi-score tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are documented. The description adds context (source is SEC EDGAR) but does not elaborate on parameter behavior or format beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves PFSL scores from SEC EDGAR financials and lists specific components (operational stress, demand signals, etc.), distinguishing it from sibling tools like get_ecl or get_signal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for financial analysis but provides no explicit guidance on when to use this tool versus siblings, nor any exclusions or alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_regime (Grade: B)
Get a concise summary of the current macro market regime and its impact on conviction scaling.
No parameters.
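Since the tool takes no parameters, the params payload is just the tool name with an empty arguments object:

{
  "name": "get_regime",
  "arguments": {}
}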
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It does not disclose behavioral traits such as data sources, update frequency, or side effects. The term 'concise summary' is vague, and 'conviction scaling' is undefined. Critical details about what gets returned are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with the action and resource. Every word serves a purpose, with no filler or repetition. It is highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite low complexity (no params, no output schema), the description fails to provide usage guidelines or behavioral transparency. It is minimally viable for invoking the tool but leaves the agent without important context for selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and 100% description coverage, so the baseline is 3. The description adds no parameter-specific information, but none is needed; it provides context about the output, not the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a concise summary of the current macro market regime and its impact on conviction scaling. It uses a specific verb ('Get') and resource ('regime summary'), and the name differentiates it from sibling tools like get_signal or get_diligence. No tautology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., get_signal or ask_genome). The description only states what the tool does, leaving the agent to infer context. There are no explicit exclusions or usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_screener (Grade: B)
Screen all tracked industrial tickers by conviction, signal type, or archetype. Returns ranked list of top signals.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results | 20 |
| signal | No | Filter by signal: STRONG_BUY, BUY, WATCH, NEUTRAL, SELL, STRONG_SELL | |
| archetype | No | Filter by archetype: constrained, oscillatory, fragile, resilient, saturated, decoupled | |
| min_conviction | No | Minimum conviction threshold | 0.25 |
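A sketch of a filtered screen; the signal and archetype values come from the enums in the table above, while the limit and conviction threshold are arbitrary illustrations:

{
  "name": "get_screener",
  "arguments": {
    "limit": 10,
    "signal": "STRONG_BUY",
    "archetype": "resilient",
    "min_conviction": 0.5
  }
}

All four arguments are optional; an empty arguments object should return the top 20 results at the default 0.25 conviction threshold.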
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the burden. It implies a read operation (it returns a list) but does not detail authentication needs, rate limits, or what 'top signals' entails.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tight sentences, front-loaded with action and result. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits return format, sorting order, default limit behavior, and the meaning of 'top signals'. With no output schema and no annotations, these gaps hinder agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for all 4 parameters. The tool description names the filtering dimensions but adds no meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool screens all tracked industrial tickers by conviction, signal type, or archetype and returns a ranked list, which is specific and distinguishes it from sibling tools like get_signal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. Sibling tools exist but no conditions or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_signal (Grade: A)
Get the full investment signal for an industrial ticker: conviction score, archetype, trajectory, ECL/PFSL adjustments, options structure, and regime impact.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker (e.g. HON, CAT, GE) | |
| industry | No | Optional industry hint (e.g. hvac, auto, aerospace_defense) | |
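A sketch of the params payload, using the schema's own example values for both fields:

{
  "name": "get_signal",
  "arguments": { "ticker": "HON", "industry": "hvac" }
}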
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states the tool 'gets' data (a read operation), but it does not disclose important behaviors such as required permissions, response format, error handling, or latency. The description is insufficient for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence (20 words) that front-loads the key action and resource. Every word adds value, with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description lists the main components of the response (conviction score, archetype, trajectory, etc.), giving the agent a reasonable expectation of the return data. It is sufficiently complete for a tool with no output schema, though it could include more detail on the structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description does not add any additional meaning or constraints beyond what the schema provides (e.g., no elaboration on the 'industry' parameter or validation rules).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's purpose: retrieving the full investment signal for an industrial ticker. It lists specific components (conviction score, archetype, trajectory, etc.) and distinguishes it from sibling tools like get_ecl, get_pfsl, and get_regime, which are subsets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this is the comprehensive signal getter, but it provides no explicit guidance on when to use this tool versus its siblings (e.g., get_ecl, get_pfsl). It does not mention when not to use it or what prerequisites exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supply_chain (Grade: A)
Get upstream supply chain risk for a ticker — which upstream industries are stressed and how much that drags the conviction score.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker | |
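A sketch of the params payload; the ticker is illustrative, taken from examples elsewhere on this page:

{
  "name": "get_supply_chain",
  "arguments": { "ticker": "CAT" }
}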
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It explains the outcome (stressed industries and conviction drag) but omits details like authentication needs, rate limits, or whether it is read-only. The description provides some behavioral context but is not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the purpose and key details. It is concise, with no unnecessary words, though splitting it into two sentences could improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should detail what is returned. It mentions 'stressed industries' and 'conviction score' but does not specify the format or data structure. For a simple single-parameter tool, it is adequate but leaves some ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'ticker', so the schema already defines the parameter. The description adds no semantics beyond reiterating the supply chain risk context. The score is the baseline 3, since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves upstream supply chain risk for a given ticker, specifying both the resource and the output (stressed industries and conviction score drag). It effectively distinguishes from sibling tools that serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when supply chain risk data for a ticker is needed) but does not explicitly state when not to use it or mention alternative tools among the siblings. The context is clear but lacks exclusions or comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.