zero-core-tools
Server Details
Web scraping, code review, content generation, and sentiment analysis. Zero Core Tools.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: meltingpixelsai/harvey-tools
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across all 9 tools scored. Lowest: 3.1/5.
Each tool has a clearly distinct purpose: sentiment analysis, structured data extraction, content generation, health check, tool listing, code review, web scraping, screenshot, and web search. There is minor overlap between scrape_url and extract_structured_data, but the different outputs (raw text vs. structured JSON) make them easily distinguishable.
Most tool names follow the verb_noun snake_case pattern (e.g., analyze_sentiment, scrape_url). The only outlier is 'health', which is a single noun rather than a verb_noun like 'check_health'. This minor inconsistency slightly reduces coherence.
With 9 tools, the server is well-scoped for a general-purpose utility toolkit. Each tool serves a useful function without redundancy, and the count is within the ideal range (3-15) for clarity and manageability.
The tool set covers a broad range of common AI/automation tasks such as text analysis, web scraping, content generation, search, and code review. While some potential utilities (e.g., translation or file conversion) are missing, the lack of a specific domain makes the set feel reasonably complete for a general-purpose toolkit.
Available Tools
9 tools

analyze_sentiment (Grade: A)
Analyze sentiment of text with entity extraction, confidence scores, and key phrase identification. Returns positive/negative/neutral/mixed with detailed breakdown.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to analyze for sentiment | |
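Assuming the server follows the standard MCP tools/call convention, a request might look like the sketch below; the text value is purely illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_sentiment",
    "arguments": {
      "text": "The new release is fast and reliable, but setup was confusing."
    }
  }
}
```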
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that the tool performs sentiment analysis plus additional extractions and returns a breakdown. However, it does not mention that it is read-only or any limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with key information front-loaded. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, the description adequately explains inputs and outputs. No missing critical details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and already describes the text parameter clearly. The description does not add additional meaning about the parameter itself beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Analyze sentiment of text' with specific outputs: entity extraction, confidence scores, key phrase identification, and sentiment labels. This distinguishes it from sibling tools like extract_structured_data or generate_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description does not mention when not to use it or suggest alternative tools for related tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_structured_data (Grade: A)
Scrape a URL then use AI to extract structured JSON data matching your schema description. Combines Playwright scraping with Grok LLM extraction.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to scrape | |
| schema_description | Yes | Description of the data to extract and desired JSON structure. Example: 'Extract all product names and prices as {products: [{name, price}]}' | |
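A hypothetical tools/call request, reusing the schema_description example from the parameter table; the URL is a placeholder.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "extract_structured_data",
    "arguments": {
      "url": "https://example.com/products",
      "schema_description": "Extract all product names and prices as {products: [{name, price}]}"
    }
  }
}
```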
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the burden. It discloses use of Playwright and Grok LLM, but does not mention rate limits, required permissions, or potential issues like JavaScript rendering or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences succinctly convey the purpose and method with no extraneous information. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter tool with no output schema, the description provides sufficient context about the process (Playwright + Grok) and output structure via example. Lacks details on error handling but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds value by explaining the combination of scraping and LLM extraction, plus an example for schema_description, which clarifies usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it scrapes a URL and extracts structured JSON using AI. It uses specific verbs ('scrape', 'extract') and mentions the resource (URL) and output format, distinguishing it from siblings like scrape_url (raw content) and search_web.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for extracting structured data from URLs, but does not explicitly state when not to use it or compare to alternatives. No guidance on prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_content (Grade: B)
Generate high-quality written content. Supports blog posts, product descriptions, documentation, social posts, and emails. Customizable tone, length, and keywords.
| Name | Required | Description | Default |
|---|---|---|---|
| tone | No | Writing tone (default: professional) | |
| type | Yes | Content type | |
| topic | Yes | Topic or subject to write about | |
| length | No | Target length (default: medium) | |
| keywords | No | Comma-separated keywords to include | |
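A sketch of a possible call. The accepted values for type, tone, and length are not enumerated in the schema, so 'blog_post', 'professional', and 'medium' are assumptions based on the description.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "generate_content",
    "arguments": {
      "type": "blog_post",
      "topic": "Getting started with web scraping",
      "tone": "professional",
      "length": "medium",
      "keywords": "scraping, automation, Playwright"
    }
  }
}
```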
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It does not disclose any behavioral traits such as rate limits, pricing, output quality guarantees, or side effects (e.g., resource consumption).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, no filler. Could be slightly improved by front-loading the most critical info, but overall very concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and the description does not explain what the tool returns (e.g., generated text). For a content generation tool, this is a significant gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are already documented. The description adds marginal value by mentioning 'customizable tone, length, and keywords,' but this is effectively redundant with the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: generate high-quality written content. It lists specific content types (blog posts, product descriptions, etc.) and customization options (tone, length, keywords), making it distinct from siblings like analyze_sentiment or search_web.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. The description does not mention prerequisites, contraindications, or context where a sibling tool might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health (Grade: A)
Check Zero Core Tools server status, uptime, and payment network configuration.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must convey behavioral traits. It states what is checked (status, uptime, config) but does not disclose side effects, authentication needs, or rate limits. Since it's a simple read operation, this is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the core purpose without any filler. Every word is necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description adequately conveys what it does. It could mention the output format or typical response fields, but for a health endpoint, it is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description adds no parameter information because none is needed; the schema provides full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Check' and identifies the resource as 'Zero Core Tools server status, uptime, and payment network configuration'. This clearly distinguishes it from sibling tools which are about content analysis, generation, and web scraping.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit instructions on when to use or avoid this tool are given. However, the purpose is self-evident for health checks, and no sibling tool overlaps, so a basic score of 3 is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tools (Grade: A)
List all available Zero Core Tools with pricing and input requirements. Use this for discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
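Since the tool takes no parameters, a discovery call reduces to the tool name with an empty arguments object, assuming the standard MCP tools/call shape:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "list_tools",
    "arguments": {}
  }
}
```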
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool lists tools with pricing and input requirements, implying a read-only operation. No side effects or auth needs are mentioned, but for a simple listing, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that front-load the purpose and end with a usage directive. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is complete. It specifies what the tool lists and its intended use, requiring no additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so no additional parameter information is needed. The description does not need to compensate, and schema coverage is 100%.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and resource 'all available Zero Core Tools with pricing and input requirements', which is specific and distinguishes it from sibling tools that perform other actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The directive 'Use this for discovery' explicitly states when to use the tool. However, it does not mention when not to use it or provide alternatives, but given the context of sibling tools, the guidance is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
review_code (Grade: A)
AI-powered security and quality code review. Analyzes for vulnerabilities, anti-patterns, performance issues, and best practices. Returns issues with severity, suggestions, and an overall score.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Source code to review | |
| focus | No | Focus area: security, quality, performance, or all (default: all) | |
| language | No | Programming language (auto-detected if omitted) | |
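An illustrative request: the focus value mirrors the documented default, the code snippet is a trivial made-up example, and the lowercase language label is a guess at the expected format.

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "review_code",
    "arguments": {
      "code": "def add(a, b):\n    return a + b",
      "language": "python",
      "focus": "all"
    }
  }
}
```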
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries full burden. It mentions 'AI-powered' and describes return items (issues, severity, suggestions, overall score), but does not disclose behavioral traits like idempotency, permissions, or side effects. It is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with verb and resource, no wasted words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the input schema fully describes parameters and no output schema exists, the description adequately explains return structure (issues, severity, suggestions, score). No additional context is needed for a stateless analysis tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameter descriptions already convey the same information (e.g., 'Focus area: security, quality, performance, or all'). The description adds no new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs code review for security and quality, listing specific analysis areas (vulnerabilities, anti-patterns, performance, best practices). It is distinct from sibling tools like analyze_sentiment or extract_structured_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (for code review) but does not explicitly state when not to use or mention alternatives. Since siblings are unrelated, the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scrape_url (Grade: A)
Scrape any URL and return cleaned text content. Powered by Playwright headless browser. Returns title, content, word count.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to scrape | |
| max_length | No | Max content length in chars (default: 10000) | |
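A possible call, with a placeholder URL and a max_length below the documented 10000-character default:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "scrape_url",
    "arguments": {
      "url": "https://example.com/article",
      "max_length": 5000
    }
  }
}
```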
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
An empty annotations object means the description carries full transparency weight. It reveals the underlying technology (Playwright headless browser) and output structure (title, content, word count) beyond the schema. However, it does not disclose potential failure modes or limitations (e.g., that it requires internet access or may fail on certain JavaScript-heavy pages).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no filler. The first identifies the primary action and outcome; the remaining two add implementation detail and key output fields. Information is front-loaded and efficiently packaged.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has only 2 parameters, no output schema, and no annotations, the description adequately covers what the tool does and what it returns. It could mention that it handles JavaScript rendering (implied by Playwright) and the nature of cleaned text, but it is sufficient for an agent to understand the tool's capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, so both parameters are already well-described. The description adds no new semantic meaning to the parameters beyond the schema. It repeats the concept of scraping a URL, which is already obvious from the schema property descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (scrape) and resource (any URL) and the result (cleaned text content). It implicitly distinguishes from sibling tools like screenshot_url (screenshots) and search_web (searching) by focusing on text extraction from a given URL.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool over alternatives. For example, it doesn't mention not to use for images, or when to prefer search_web or analyze_sentiment. The context signals show sibling tools with different purposes, but the description provides no comparative advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screenshot_url (Grade: A)
Take a full-page screenshot of any URL. Returns base64-encoded PNG image.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to screenshot | |
| width | No | Viewport width in pixels (default: 1280) | |
| height | No | Viewport height in pixels (default: 720) | |
| full_page | No | Capture full page scroll height (default: true) | |
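A sketch that spells out the documented defaults explicitly; all values are illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "screenshot_url",
    "arguments": {
      "url": "https://example.com",
      "width": 1280,
      "height": 720,
      "full_page": true
    }
  }
}
```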
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description adds some behavioral context ('full-page', 'returns base64') but omits details like headless browser usage, potential slow loading, or rate limits. It does not contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, action and output clearly front-loaded. Every word is necessary; no fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers core purpose and output, but given no output schema, it could mention that the base64 image may need MIME type (e.g., data:image/png;base64). Parameter details are well-handled by schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters are documented. The description adds no additional meaning beyond the schema, meeting the baseline but not exceeding it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Take a screenshot), the resource (any URL), and the output format (base64-encoded PNG). This distinguishes it from siblings like scrape_url or extract_structured_data that target different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for full-page screenshots but provides no explicit guidance on when to use this tool versus alternatives (e.g., scrape_url for HTML content). No usage restrictions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_web (Grade: A)
Search the web via Google and return organic results with titles, links, and snippets. Optionally returns answer box if available.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query | |
| num_results | No | Number of results to return (default: 10) | |
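A hypothetical search request that overrides the default result count; the query is a made-up example.

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "search_web",
    "arguments": {
      "query": "playwright headless browser scraping",
      "num_results": 5
    }
  }
}
```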
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It discloses that it searches via Google and returns organic results and optionally an answer box. However, it does not discuss limitations, rate limits, or any potential restrictions on usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main action, and includes the optional answer box feature. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with only two parameters. The description covers the core functionality and the optional answer box, which is sufficient given no output schema. It tells the agent what to expect: titles, links, snippets.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for both parameters: 'Search query' and 'Number of results to return (default: 10)'. The description does not add meaning beyond the schema, so baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'web', specifying that it uses Google and returns organic results with titles, links, and snippets. It also mentions the optional answer box. This distinguishes it from sibling tools like analyze_sentiment or extract_structured_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for general web search but provides no explicit guidance on when to use this tool versus alternatives like analyze_sentiment or scrape_url. No exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.