
Server Details

DrainBrain token safety, CORTEX trading signals, social trends. Zero Core Intel.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: meltingpixelsai/zero-core-intel
GitHub Stars: 0
Server Listing: Harvey Intel

Tool Descriptions (Grade: B)

Average 3.8/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct aspect of intelligence gathering: competitor tracking, market regime, social trends, trading signals, token risk analysis (full and preview), server health, and tool discovery. No overlapping functionality.

Naming Consistency: 4/5

Most tools follow a get_ or scan_ verb_noun pattern (e.g., get_competitor_intel, scan_token). The 'health' tool is a single word and deviates slightly, but the overall convention is clear and consistent.

Tool Count: 5/5

With 8 tools, the server covers a broad range of intelligence domains without being overwhelming. Each tool contributes meaningfully to the server's purpose.

Completeness: 4/5

Core intelligence operations are covered: retrieval of competitor info, market regime, social trends, trading signals, and token risk analysis. A minor gap is the lack of ability to filter or search within specific domains, but the set is well-scoped for general intel queries.

Available Tools (8 tools)
get_competitor_intel (Grade: A)

Synthia competitor tracking - feature launches, pricing changes, and strategic moves.

Parameters (JSON Schema)
  competitor (optional): Filter by competitor name (partial match)
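As a sketch, an MCP client on the Streamable HTTP transport would invoke this tool with a JSON-RPC 2.0 `tools/call` request; the `competitor` filter is optional and may be omitted. The filter value "Acme" below is a hypothetical example, and the exact endpoint and response shape are assumptions.

```python
import json

# JSON-RPC 2.0 request for get_competitor_intel; the optional
# "competitor" filter narrows results by partial name match.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_competitor_intel",
        "arguments": {"competitor": "Acme"},  # hypothetical competitor name
    },
}

# Omitting the filter would return intel for all tracked competitors.
unfiltered = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_competitor_intel", "arguments": {}},
}

body = json.dumps(request)
```

The request body would then be POSTed to the server's Streamable HTTP URL.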
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must bear the burden of behavioral disclosure. It mentions the type of information tracked but omits details on output format, recency, or potential limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that efficiently conveys the tool's purpose without extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description gives a general sense of the content but lacks details on output structure or behavior. For a simple tool with one optional parameter, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter 'competitor', which already has a clear description. The tool description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as tracking competitor intel on specific topics (feature launches, pricing changes, strategic moves), distinguishing it from sibling tools like get_market_regime or get_social_trends.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor are there any exclusions or context for its appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_regime (Grade: A)

CORTEX market regime detection - HOT/NORMAL/COLD with graduation velocity and activity metrics.

Parameters (JSON Schema)
  No parameters
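Since get_market_regime takes no inputs, the call carries an empty arguments object. A minimal sketch of the JSON-RPC 2.0 request (the response shape, e.g. a regime field of HOT/NORMAL/COLD, is an assumption based on the description):

```python
import json

# Zero-parameter tools are still called via tools/call,
# just with an empty "arguments" object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_market_regime", "arguments": {}},
}

body = json.dumps(request)
```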

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the full burden. It discloses the output format (regime categories and metrics) but does not mention side effects, auth needs, or rate limits. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no waste. Clearly states what the tool does and what it returns. Front-loaded with key output types.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should explain return values more fully. It mentions 'graduation velocity and activity metrics' but does not define them. For a tool with no input, more clarity on output interpretation would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters in the input schema, so schema coverage is 100% and the baseline of 4 for zero-parameter tools applies. The description adds no parameter info, which is acceptable since none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool's purpose: market regime detection with outputs HOT/NORMAL/COLD plus graduation velocity and activity metrics. It is distinct from sibling tools like get_competitor_intel, get_social_trends, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description only lists outputs, without context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_trading_signals (Grade: B)

CORTEX trading signals - AI-generated direction, confidence score, and win rate for Solana tokens.

Parameters (JSON Schema)
  token (optional): Filter by specific token mint address
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose key behaviors. It only lists output types, omitting whether the signal is real-time or historical and whether authentication is required. There is no information on rate limits or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, highly concise, and front-loads the key identifier 'CORTEX trading signals'. However, it may be too brief, omitting useful context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and one optional parameter, the description adequately lists output types but lacks detail on output format (e.g., single object vs array, numeric ranges). Completeness is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (one parameter 'token' with description). The description adds no extra meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides AI-generated trading signals (direction, confidence, win rate) for Solana tokens. It distinguishes from sibling tools like 'get_competitor_intel' and 'get_market_regime' by specifying a unique focus on trading signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It fails to mention typical use cases, prerequisites, or scenarios where this tool is preferred over sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health (Grade: A)

Check Harvey Intel server status, uptime, and payment network configuration.

Parameters (JSON Schema)
  No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It indicates a read-only check but lacks details on side effects or error states. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no unnecessary words. Highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple health check tool with no parameters or output schema, the description sufficiently covers purpose and behavior. Complete within its scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so 100% coverage. The description adds no parameter info, which is acceptable as none exist. Baseline is 4 for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks server status, uptime, and payment network configuration. It uses specific verbs and resources, and is distinct from siblings like get_competitor_intel.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for health monitoring, but does not provide explicit when-to-use or when-not-to-use guidance. No alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tools (Grade: A)

List all available Harvey Intel tools with pricing and input requirements. Use this for discovery.

Parameters (JSON Schema)
  No parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No behavioral quirks to disclose; the description accurately reflects a read-only, non-destructive discovery tool. No annotations provided, but the description is fully transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded action verb, no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description fully conveys the tool's purpose and usage context. No further details needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description adds value by specifying the output includes pricing and input requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it lists all available Harvey Intel tools with pricing and input requirements, distinguishing it from sibling tools that provide specific data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this for discovery,' providing clear context for when to use, though it does not mention when not to use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_token (Grade: A)

Full DrainBrain risk analysis for a Solana token using a 5-model AI ensemble. Returns score 0-100, risk level, rug stage, honeypot detection, risk flags, and temporal prediction.

Parameters (JSON Schema)
  mint (required): Solana token mint address (base58)
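A sketch of the required-parameter case: the `mint` argument must be a base58 Solana mint address. The wrapped-SOL mint below is used only as a placeholder value, and the endpoint and response shape remain assumptions.

```python
import json

# scan_token requires a base58 Solana mint address; the wrapped-SOL
# mint is used here purely as a placeholder.
WSOL_MINT = "So11111111111111111111111111111111111111112"

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "scan_token", "arguments": {"mint": WSOL_MINT}},
}

body = json.dumps(request)
```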
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility. It describes the analysis and outputs but does not disclose behavioral traits such as side effects, rate limits, or performance characteristics. It is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that lists many outputs, which is dense but clear. It is fairly concise with no filler, though it could be slightly restructured for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains return values (score, risk level, etc.). However, it lacks context about error handling, performance, and cost. Still fairly complete for a complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description mentions 'Solana token mint address' but adds no new meaning beyond the schema's description. No extra parameter semantics provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a full risk analysis for a Solana token using a 5-model AI ensemble, and lists specific outputs (score, risk level, rug stage, etc.). It implicitly differentiates from the sibling 'scan_token_preview' by being labeled 'Full'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool vs. the sibling 'scan_token_preview', nor does it mention prerequisites or exclusions. Usage is implied for full analysis, but no guidance on alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_token_preview (Grade: A)

Quick risk level check for a Solana token. Returns LOW/MEDIUM/HIGH/CRITICAL. Free preview - use scan_token for full analysis.

Parameters (JSON Schema)
  mint (required): Solana token mint address (base58)
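The preview-then-full workflow the description suggests can be sketched as follows. `call_tool` is a hypothetical stand-in for a real MCP client call, stubbed here with canned results; actual responses from the server may be shaped differently.

```python
# Hypothetical escalation pattern: run the free preview first and invoke
# the full scan_token analysis only when risk looks elevated.
def call_tool(name, arguments):
    # Stand-in for a real MCP tools/call; returns canned data.
    if name == "scan_token_preview":
        return {"risk_level": "HIGH"}
    return {"score": 82, "risk_level": "HIGH", "honeypot": False}

def assess_token(mint):
    preview = call_tool("scan_token_preview", {"mint": mint})
    if preview["risk_level"] in ("HIGH", "CRITICAL"):
        # Escalate to the full 5-model analysis for risky tokens.
        return call_tool("scan_token", {"mint": mint})
    return preview

result = assess_token("So11111111111111111111111111111111111111112")
```

The design choice here mirrors the description's own guidance: the cheap check gates the expensive one.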
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is a free preview and returns a risk level, but lacks details on error handling or what constitutes a 'quick' check. Still, it adequately informs the agent for expected use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is just two sentences, no redundant information, and front-loads the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no output schema), the description is almost complete. It could mention what happens if the mint is invalid, but the core functionality is well-covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter 'mint' is well-described as 'Solana token mint address (base58)'. The description adds no extra semantics beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Quick risk level check for a Solana token' and lists the possible return values (LOW/MEDIUM/HIGH/CRITICAL). It distinguishes itself from sibling tool 'scan_token' by noting this is a free preview, making purpose and scope clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It clearly indicates when to use this tool (for a quick check) and when to use the alternative ('use scan_token for full analysis'), providing explicit usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
