zero-core-intel
Server Details
DrainBrain token safety, CORTEX trading signals, social trends. Zero Core Intel.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: meltingpixelsai/zero-core-intel
- GitHub Stars: 0
- Server Listing: Harvey Intel
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 8 of 8 tools scored. Lowest: 2.9/5.
Each tool targets a distinct aspect of intelligence gathering: competitor tracking, market regime, social trends, trading signals, token risk analysis (full and preview), server health, and tool discovery. No overlapping functionality.
Most tools follow a get_ or scan_ verb_noun pattern (e.g., get_competitor_intel, scan_token). The 'health' tool is a single word and deviates slightly, but overall convention is clear and consistent.
With 8 tools, the server covers a broad range of intelligence domains without being overwhelming. Each tool contributes meaningfully to the server's purpose.
Core intelligence operations are covered: retrieval of competitor info, market regime, social trends, trading signals, and token risk analysis. A minor gap is the lack of ability to filter or search within specific domains, but the set is well-scoped for general intel queries.
Available Tools
8 tools

get_competitor_intel (Grade: A)
Synthia competitor tracking - feature launches, pricing changes, and strategic moves.
| Name | Required | Description | Default |
|---|---|---|---|
| competitor | No | Filter by competitor name (partial match) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must bear the burden of behavioral disclosure. It mentions the type of information tracked but omits details on output format, recency, or potential limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that efficiently conveys the tool's purpose without extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description gives a general sense of the content but lacks details on output structure or behavior. For a simple tool with one optional parameter, it is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'competitor', which already has a clear description. The tool description adds no additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as tracking competitor intel on specific topics (feature launches, pricing changes, strategic moves), distinguishing it from sibling tools like get_market_regime or get_social_trends.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor are there any exclusions or context for its appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_regime (Grade: A)
CORTEX market regime detection - HOT/NORMAL/COLD with graduation velocity and activity metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the full burden. It discloses the output format (regime categories and metrics) but does not mention side effects, auth needs, or rate limits. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no waste. Clearly states what the tool does and what it returns. Front-loaded with key output types.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description should explain return values in more detail. It mentions 'graduation velocity and activity metrics' but does not define them. For a tool with no input, more clarity on output interpretation would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in the input schema, so schema coverage is 100%. The baseline score of 4 applies for zero-parameter tools. The description adds no parameter info, which is acceptable since there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool's purpose: market regime detection with outputs HOT/NORMAL/COLD plus graduation velocity and activity metrics. It is distinct from sibling tools like get_competitor_intel, get_social_trends, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description only lists outputs, without context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_social_trends (Grade: C)
Synthia social intelligence - trending terms, frequency, and sources from social media monitoring.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Lookback period in hours (max: 168) | 24 |
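The hours parameter above is bounded (default 24, max 168). As a sketch of how a client might call this tool, the helper below builds an MCP tools/call request and clamps the lookback window before sending; the request shape follows the MCP JSON-RPC convention, while the clamping policy is this example's own assumption, not server behavior.

```python
import json

MAX_HOURS = 168  # documented upper bound for the lookback window

def build_social_trends_call(hours: int = 24, request_id: int = 1) -> dict:
    """Build an MCP tools/call request for get_social_trends.

    Clamps `hours` into the documented range client-side rather than
    letting the server reject an out-of-range value (an assumption:
    the server's actual handling of invalid values is undocumented).
    """
    hours = max(1, min(hours, MAX_HOURS))
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_social_trends",
            "arguments": {"hours": hours},
        },
    }

# A week-long lookback; anything above 168 is clamped.
req = build_social_trends_call(hours=500)
print(json.dumps(req))
```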
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of disclosing behavioral traits, but it only states the output type. It does not mention whether the operation is read-only, any side effects, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the core functionality without extraneous words. It is appropriately front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema and annotations, the description provides the basic purpose but is incomplete: an agent cannot fully understand the output format, limitations, or behavioral nuances. It adequately states what is returned but lacks detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'hours', so the schema fully documents it. The description could add meaning about how the parameter affects results, but it does not, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns trending terms, frequency, and sources from social media monitoring, indicating its purpose. However, it does not explicitly distinguish itself from siblings like get_competitor_intel or get_market_regime, which might have overlapping scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings or any prerequisites. It simply describes what it does without contextual usage hints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trading_signals (Grade: B)
CORTEX trading signals - AI-generated direction, confidence score, and win rate for Solana tokens.
| Name | Required | Description | Default |
|---|---|---|---|
| token | No | Filter by specific token mint address | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose key behaviors. It only lists output types and omits whether the signal is real-time or historical, and whether authentication is required. There is no information on rate limits or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, highly concise sentence that front-loads the key identifier 'CORTEX trading signals'. However, it may be too brief, missing elaboration on context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and one optional parameter, the description adequately lists output types but lacks detail on output format (e.g., single object vs array, numeric ranges). Completeness is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (one parameter 'token' with description). The description adds no extra meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides AI-generated trading signals (direction, confidence, win rate) for Solana tokens. It distinguishes from sibling tools like 'get_competitor_intel' and 'get_market_regime' by specifying a unique focus on trading signals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It fails to mention typical use cases, prerequisites, or scenarios where this tool is preferred over sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health (Grade: A)
Check Harvey Intel server status, uptime, and payment network configuration.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It indicates a read-only check but lacks details on side effects or error states. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no unnecessary words. Highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple health check tool with no parameters or output schema, the description sufficiently covers purpose and behavior. Complete within its scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so coverage is 100%. The description adds no parameter info, which is acceptable since none exist. The baseline score is 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks server status, uptime, and payment network configuration. It uses specific verbs and resources, and is distinct from siblings like get_competitor_intel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for health monitoring, but does not provide explicit when-to-use or when-not-to-use guidance. No alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tools (Grade: A)
List all available Harvey Intel tools with pricing and input requirements. Use this for discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No behavioral quirks to disclose; the description accurately reflects a read-only, non-destructive discovery tool. No annotations provided, but the description is fully transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with front-loaded action verb, no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description fully conveys the tool's purpose and usage context. No further details needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. The description adds value by specifying the output includes pricing and input requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists all available Harvey Intel tools with pricing and input requirements, distinguishing it from sibling tools that provide specific data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this for discovery,' providing clear context for when to use, though it does not mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_token (Grade: A)
Full DrainBrain risk analysis for a Solana token using a 5-model AI ensemble. Returns score 0-100, risk level, rug stage, honeypot detection, risk flags, and temporal prediction.
| Name | Required | Description | Default |
|---|---|---|---|
| mint | Yes | Solana token mint address (base58) | |
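Since mint is the only input and the listing mentions per-tool pricing, a client may want to reject malformed addresses before spending a call. The sketch below is a hypothetical client-side check; it assumes only that Solana mint addresses are base58-encoded 32-byte public keys (32-44 characters when encoded) and cannot confirm that the account actually exists on-chain.

```python
import re

# Base58 alphabet excludes the ambiguous characters 0, O, I, and l.
_BASE58_RE = re.compile(r"[1-9A-HJ-NP-Za-km-z]{32,44}")

def looks_like_mint(address: str) -> bool:
    """Cheap sanity check for a Solana mint address.

    A 32-byte public key encodes to 32-44 base58 characters, so this
    catches obvious typos and truncated pastes before a scan_token
    call; it does not prove the mint exists.
    """
    return _BASE58_RE.fullmatch(address) is not None
```

This kind of pre-flight check is deliberately permissive: it filters garbage input without re-implementing full base58 decoding.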
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It describes the analysis and outputs but does not disclose behavioral traits such as side effects, rate limits, or performance characteristics. It is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that lists many outputs, which is dense but clear. It is fairly concise with no filler, though it could be slightly restructured for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains return values (score, risk level, etc.). However, it is missing context about error handling, performance, and cost. Still fairly complete for a complex tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline score is 3. The description mentions 'Solana token mint address' but adds no new meaning beyond the schema's description. No extra parameter semantics are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a full risk analysis for a Solana token using a 5-model AI ensemble, and lists specific outputs (score, risk level, rug stage, etc.). It implicitly differentiates from the sibling 'scan_token_preview' by being labeled 'Full'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool vs. the sibling 'scan_token_preview', nor does it mention prerequisites or exclusions. Usage is implied for full analysis, but no guidance on alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_token_preview (Grade: A)
Quick risk level check for a Solana token. Returns LOW/MEDIUM/HIGH/CRITICAL. Free preview - use scan_token for full analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| mint | Yes | Solana token mint address (base58) | |
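The description's 'use scan_token for full analysis' hint suggests a tiered workflow: run the free preview first and escalate to the full 5-model scan only when the risk level warrants it. A minimal sketch of that escalation decision follows; the default threshold is this example's own assumption, not guidance from the server.

```python
# Risk levels returned by scan_token_preview, in increasing severity.
SEVERITY = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def should_run_full_scan(preview_level: str, threshold: str = "MEDIUM") -> bool:
    """Decide whether a free preview result warrants a full scan_token call.

    Escalate only when the preview risk level meets the threshold,
    keeping non-free calls to a minimum.
    """
    if preview_level not in SEVERITY:
        raise ValueError(f"unexpected risk level: {preview_level!r}")
    return SEVERITY[preview_level] >= SEVERITY[threshold]
```

Raising on an unknown level is a deliberate choice: silently treating an unrecognized response as LOW would defeat the purpose of the check.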
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool is a free preview and returns a risk level, but lacks details on error handling or what constitutes a 'quick' check. Still, it adequately informs the agent for expected use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is just two sentences with no redundant information, and it front-loads the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is almost complete. It could mention what happens if the mint is invalid, but the core functionality is well-covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'mint' is well-described as 'Solana token mint address (base58)'. The description adds no extra semantics beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Quick risk level check for a Solana token' and lists the possible return values (LOW/MEDIUM/HIGH/CRITICAL). It distinguishes itself from sibling tool 'scan_token' by noting this is a free preview, making purpose and scope clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It clearly indicates when to use this tool (for a quick check) and when to use the alternative ('use scan_token for full analysis'), providing explicit usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
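Before publishing, it may help to sanity-check the manifest locally. The sketch below validates only the two fields shown above plus the email-match rule stated in the text; the real connector schema may enforce more, so treat this as a pre-flight check, not full validation.

```python
import json

REQUIRED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def check_glama_manifest(raw: str, account_email: str) -> list[str]:
    """Return a list of problems found in a /.well-known/glama.json payload.

    Checks the $schema value and that at least one maintainer email
    matches the Glama account email. An empty list means no problems
    were found by these checks.
    """
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    if doc.get("$schema") != REQUIRED_SCHEMA:
        problems.append("missing or wrong $schema")
    maintainers = doc.get("maintainers") or []
    emails = {m.get("email") for m in maintainers if isinstance(m, dict)}
    if account_email not in emails:
        problems.append("no maintainer email matches the Glama account")
    return problems
```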
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.