
Server Details

Document processing, data conversion, and web content APIs for AI agents. All tools are free via MCP. Also available as a paid x402 Agent API (Stellar XLM or Solana USDC, no API key required). Tools: extract text from PDFs, merge PDFs, generate QR codes, convert CSV to/from JSON, count words/stats, and fetch + clean any public URL with Mozilla Readability.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 4.2/5 across 27 of 27 tools scored. Lowest: 3.6/5.

Server Coherence: Grade A
Disambiguation: 3/5

Most tools have distinct purposes, but there is some overlap that could cause confusion. For example, 'extract_structured_data' and 'run_regex' both extract data from text, and 'read_url' and 'scrape_url_js' both fetch web content, though descriptions clarify the latter is for JavaScript-heavy pages. Tools like 'chunk_text' and 'estimate_tokens' serve different functions but both relate to text processing for LLMs, which might lead to misselection if not carefully read.

Naming Consistency: 4/5

The naming follows a consistent snake_case pattern throughout, with clear verb_noun structures (e.g., 'chunk_text', 'count_words', 'extract_pdf_text'). However, there are minor deviations like 'get_arc_trading_signal' which uses 'get' instead of a more descriptive verb, and 'private_execute_tool' which is less intuitive compared to others, slightly affecting consistency.

Tool Count: 3/5

With 27 tools, the count is on the higher side for a utility server, bordering on heavy. While many tools are specialized (e.g., for text processing, file conversion, web scraping), it may overwhelm users or agents trying to navigate the set. A more focused subset could improve usability without sacrificing functionality.

Completeness: 4/5

The tool set covers a wide range of utility functions including text processing, file conversion, web scraping, and data extraction, with good lifecycle coverage for common tasks. However, there are minor gaps, such as no direct tool for editing or transforming PDFs beyond merging, and the inclusion of niche tools like 'get_arc_trading_signal' and 'private_execute_tool' might not align perfectly with the core utility domain, slightly reducing coherence.

Available Tools

27 tools
chunk_text (Grade A)

Use this tool to split long text into smaller, overlapping chunks suitable for embedding, vector storage, or RAG pipelines. Triggers: 'chunk this document for RAG', 'split this into embeddings', 'break this into segments', 'prepare this text for a vector database'. Returns an array of chunks with index, text, character count, and estimated token count. Essential before embedding or storing text in a vector database.

Parameters (JSON Schema)
- text (required): The text to chunk.
- overlap (optional): Number of characters to overlap between consecutive chunks (default: 100). Helps preserve context across chunk boundaries.
- strategy (optional): Chunking strategy: 'paragraph' (split on blank lines, default), 'sentence' (split on sentence boundaries), 'fixed' (fixed character count).
- chunkSize (optional): Max characters per chunk for 'fixed' strategy, or target size hint for others (default: 1000).
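The 'fixed' strategy described above can be approximated in a few lines. The following is a hypothetical sketch of the behaviour the description implies; the function name, the step arithmetic, and the ~4-chars-per-token estimate are assumptions, not the server's code:

```python
def chunk_fixed(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[dict]:
    """Split text into fixed-size chunks whose boundaries overlap."""
    step = max(1, chunk_size - overlap)  # guard against overlap >= chunk_size
    chunks = []
    for index, start in enumerate(range(0, len(text), step)):
        piece = text[start:start + chunk_size]
        chunks.append({
            "index": index,
            "text": piece,
            "chars": len(piece),
            "estTokens": len(piece) // 4,  # crude ~4 chars/token heuristic
        })
    return chunks
```

With the defaults, each chunk repeats the last 100 characters of its predecessor, which is what preserves context across boundaries for retrieval.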
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. It successfully documents the return structure ('array of chunks with index, text, character count, and estimated token count') since no output schema exists, and explains behavioral aspects like overlap preserving context. Could mention idempotency or error handling for a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose first, followed by triggers, return values, and necessity statement. The four trigger examples in quotes are slightly verbose but serve as valuable pattern-matching signals. Every sentence provides actionable information without filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter text processing tool with 100% schema coverage but no output schema, the description is complete. It compensates for missing output schema by detailing return values, explains the chunking strategies available, and provides sufficient context for RAG preprocessing workflows.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description adds significant semantic value by explaining why parameters matter for RAG workflows (overlap preserves context across boundaries) and linking strategies to the embedding use case, helping agents select appropriate parameter combinations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'split[s] long text into smaller, overlapping chunks' with specific target use cases (embedding, vector storage, RAG pipelines). It clearly distinguishes from siblings like count_words or extract_pdf_text by focusing on segmentation for retrieval workflows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides strong contextual guidance with specific trigger phrases ('chunk this document for RAG', etc.) and states it is 'Essential before embedding or storing text in a vector database.' Lacks explicit 'when not to use' exclusions, but the specific use case makes inappropriate usage unlikely.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

count_words (Grade A)

Use this tool when the user wants statistics about a piece of text, or when you need to verify content length/readability before submitting. Triggers: 'how many words is this?', 'count the words', 'check the readability of this', 'is this too long?', 'what's the reading time?'. Returns word count, character count, sentence count, paragraph count, reading time, speaking time, Flesch readability score, and top keywords. Also use proactively when producing long-form content to report its length.

Parameters (JSON Schema)
- text (required): The text to analyse.
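Most of the metrics this tool reports are cheap to derive. A rough stdlib-only sketch follows; the regexes, the 200/130 words-per-minute constants, and the output keys are assumptions, and the Flesch score and keyword extraction are omitted entirely:

```python
import re

def text_stats(text: str) -> dict:
    """Compute basic counts and timing estimates for a piece of text."""
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return {
        "words": len(words),
        "characters": len(text),
        "sentences": len(sentences),
        "paragraphs": len(paragraphs),
        "readingTimeMin": round(len(words) / 200, 2),   # ~200 wpm silent reading
        "speakingTimeMin": round(len(words) / 130, 2),  # ~130 wpm speech
    }
```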
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden of behavioral disclosure. It successfully details the calculated outputs (word count, reading time, etc.), compensating for the missing output schema. However, it omits operational details such as handling of empty strings, maximum input size, or idempotency guarantees that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. The first front-loads the core functionality (counting metrics), while the second provides usage context. Every word earns its place with no redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single string parameter, no nested objects) and lack of output schema, the description adequately covers the essential contract by listing all returned metrics. It appropriately compensates for missing structured metadata, though it could explicitly confirm the return structure format (e.g., 'returns an object containing').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (the single 'text' parameter is fully documented as 'The text to analyse'). The description references a 'block of text' which aligns with the schema but adds minimal additional semantic value regarding format requirements or constraints. Baseline 3 is appropriate given the schema carries the load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Count', 'returns') and clearly identifies the resource (text block) and scope (words, characters, sentences, paragraphs, reading time). It effectively distinguishes from extraction-focused siblings like extract_pdf_text or ocr_image by focusing on analysis of already-available text rather than retrieval from external sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context ('Useful for checking document stats, validating content length, or summarising text metrics') that establishes when to invoke the tool. However, it lacks explicit contrasts with siblings or guidance on workflow sequencing (e.g., using this after extract_docx_text or read_url).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_shareable_paste (Grade A)

Use this tool when the user wants to share content as a link, or when your output is too long to share directly in chat. Triggers: 'share this as a link', 'give me a URL for this', 'create a paste', 'make this shareable', 'send this to someone'. Stores the content and returns a public URL (toolora.dev/p/[id]). Proactively use when you produce a long report, code file, or analysis that the user will want to send to someone else. Content expires after 7 days by default.

Parameters (JSON Schema)
- title (optional): Optional title displayed at the top of the paste page.
- content (required): The text content to store and share. Max 500KB.
- language (optional): Optional language hint for display (e.g. 'python', 'markdown', 'json').
- expiresInHours (optional): How many hours until the paste expires (default: 168 = 7 days, max: 720 = 30 days).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully communicates public visibility ('anyone can visit'), data lifecycle ('expires after 7 days'), and output location. Missing details on exact return structure and whether deletion is possible, but covers critical safety/visibility concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste: first establishes function/output, second provides use case, third states expiration policy. Information is front-loaded and appropriately dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately compensates by specifying the URL format returned. Combined with complete parameter documentation in the schema and clear behavioral traits (public, expiring), the description provides sufficient context for invocation despite lacking annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, establishing a baseline of 3. The description references the 7-day default (aligning with expiresInHours) and implies content storage, but does not add significant semantic detail beyond what the schema already provides for the four parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action ('Store any text content') and specific output ('public URL at toolora.dev/p/[id]'). It distinguishes effectively from sibling tools like csv_to_json or generate_pdf_from_text by specifying the paste/URL sharing paradigm.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases ('Perfect for sharing Claude's output — reports, code, analysis') and context ('without needing a file or account'). Lacks explicit 'when not to use' or direct sibling comparisons, but the use case guidance is strong and actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

csv_to_json (Grade A)

Use this tool when the user pastes or provides CSV data and needs it as structured JSON, or wants to query/filter/analyse tabular data. Triggers: 'parse this CSV', 'convert this spreadsheet export to JSON', 'read this data file'. Returns a JSON array of objects with column headers as keys. Use this before analysing or transforming any CSV content.

Parameters (JSON Schema)
- csv (required): The CSV data as a string.
- headers (optional): Whether the first row contains column names (default: true).
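The documented contract (headers become object keys, default headers: true) maps directly onto Python's csv module. A sketch of the same behaviour, not the server's implementation; the positional col0/col1 fallback for header-less input is my assumption:

```python
import csv
import io

def csv_to_json(csv_text: str, headers: bool = True) -> list[dict]:
    """Convert a CSV string to a list of row objects keyed by header."""
    if headers:
        return list(csv.DictReader(io.StringIO(csv_text)))
    # Without a header row, fall back to positional keys: col0, col1, ...
    rows = csv.reader(io.StringIO(csv_text))
    return [{f"col{i}": v for i, v in enumerate(row)} for row in rows]
```

csv.DictReader already handles quoted fields, embedded commas, and CRLF line endings, which matches the edge cases the review credits the tool with.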
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden. It effectively documents parsing behavior including header treatment defaults, and edge case handling (quoted fields, embedded commas, CRLF endings). Lacks detail on error handling for malformed CSV or size limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences with zero waste: purpose statement, default behavior/parameter note, parsing capabilities, and use case context. Information is front-loaded with the core conversion verb and format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion utility with complete schema coverage and no output schema, the description adequately explains the return structure ('JSON array of objects') and parsing logic. Minor gap regarding error handling or maximum input size constraints prevents a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds semantic value by explaining that headers become the keys in the resulting JSON objects ('JSON array of objects' + header treatment context), enriching the agent's understanding beyond the raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Convert') and resources ('CSV string into a JSON array of objects') that clearly define the transformation. It distinguishes from sibling excel_to_json by explicitly specifying 'CSV string' versus Excel formats, and implies directionality opposite to json_to_csv.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use the tool ('processing spreadsheet exports, data files, and tabular data'), establishing appropriate use cases. However, it does not explicitly mention alternatives like excel_to_json for binary Excel files or json_to_csv for reverse conversion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

diff_texts (Grade A)

Use this tool to compare two pieces of text and identify exactly what changed between them. Triggers: 'what changed between these two versions?', 'compare these texts', 'show me the diff', 'what's different?', 'find the changes in this revision'. Returns added lines (with +), removed lines (with -), unchanged lines, and summary statistics. Use this when reviewing edits, comparing document versions, or verifying AI-generated changes.

Parameters (JSON Schema)
- text1 (required): The original text (before).
- text2 (required): The new text (after).
- context (optional): Number of unchanged lines to show around each change for context (default: 3). Set to 0 for changes only.
- ignoreWhitespace (optional): Ignore leading/trailing whitespace differences (default: false).
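The described output (+/- line markers, configurable context, summary statistics) is essentially a unified diff. A stdlib sketch under that assumption; the function name and return keys are mine, and ignoreWhitespace handling is omitted:

```python
import difflib

def diff_lines(text1: str, text2: str, context: int = 3) -> dict:
    """Line-level unified diff plus the summary counts the tool describes."""
    lines = list(difflib.unified_diff(
        text1.splitlines(), text2.splitlines(),
        lineterm="", n=context,
    ))[2:]  # drop the '---'/'+++' file-name header lines
    added = sum(1 for line in lines if line.startswith("+"))
    removed = sum(1 for line in lines if line.startswith("-"))
    return {"diff": lines, "added": added, "removed": removed}
```

Hunk headers ('@@ ... @@') begin with '@', so they do not inflate the added/removed counts.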
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses output format specifics: 'added lines (with +), removed lines (with -), unchanged lines, and summary statistics'—critical behavioral context not found in the input schema. Does not mention idempotency or side effects, but implies read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences covering purpose, triggers/returns, and usage context. Front-loaded with core action. The trigger examples list is slightly bulky but functional. No redundant or wasted statements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing return values (line prefixes, statistics). Use cases are covered. For a 4-parameter utility with full schema coverage, this is complete though it could specify line-level vs character-level granularity explicitly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description references 'two pieces of text' reinforcing text1/text2, but adds no semantic elaboration for context or ignoreWhitespace parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb+resource ('compare two pieces of text') and scope ('identify exactly what changed'). It clearly distinguishes from siblings like hash_text or count_words by focusing on differential analysis between two inputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger phrases ('what changed between these two versions?', 'show me the diff') and clear when-to-use context ('when reviewing edits, comparing document versions, or verifying AI-generated changes'). Lacks explicit exclusions or alternative tools, but the positive guidance is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_tokens (Grade A)

Use this tool to estimate the token count of a text before sending it to an LLM. Triggers: 'how many tokens is this?', 'will this fit in context?', 'check if this is within the limit', 'token count for GPT-4'. Returns estimated token count, percentage of the model's context window used, and estimated API cost. Essential for context window management and cost planning.

Parameters (JSON Schema)
- text (required): The text to estimate tokens for.
- model (optional): Target model (default: gpt-4o). Used to calculate context window usage and cost estimate.
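A common ~4-characters-per-token rule of thumb reproduces the first two return values. In this sketch the context-window table is an assumption about what the server might use, and the cost estimate is left out because per-token prices change:

```python
# Assumed context sizes for illustration; the server's table may differ.
CONTEXT_WINDOWS = {"gpt-4o": 128_000, "gpt-4": 8_192}

def estimate_tokens(text: str, model: str = "gpt-4o") -> dict:
    """Estimate token count and context-window usage for a target model."""
    est = max(1, len(text) // 4)  # ~4 characters per token for English-like text
    window = CONTEXT_WINDOWS.get(model, 128_000)
    return {
        "model": model,
        "estimatedTokens": est,
        "contextUsedPct": round(100 * est / window, 2),
    }
```

A character-count heuristic can be off by 2x for code or non-Latin scripts; a real implementation would likely use a tokenizer such as tiktoken.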
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return values (token count, percentage of context window, estimated API cost) and side effects (essential for context window management and cost planning). Could improve by noting estimation accuracy or methodology, but adequately covers what the tool produces.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: purpose statement, trigger examples, return value disclosure, and value proposition. Front-loaded with the core action. Every sentence earns its place by adding distinct information not present in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter tool with 100% schema coverage, the description fully compensates for missing output schema by explicitly documenting the three return values (count, percentage, cost). Complexity is low; description is appropriately comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds context that model selection affects 'context window usage and cost estimate' (reinforcing the schema description), but does not add syntax details or examples beyond what the enum already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('estimate') + resource ('token count') + scope ('before sending to an LLM'). Explicitly distinguishes from sibling tool 'count_words' by focusing on LLM tokens rather than word counts, and from 'chunk_text' by emphasizing estimation rather than splitting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger phrases ('how many tokens is this?', 'will this fit in context?') that signal when to invoke. Implicitly contrasts with 'count_words' by referencing GPT-4 and context windows specifically. Lacks explicit 'when not to use' exclusions, but the trigger examples provide strong positive guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

excel_to_json (Grade A)

Use this tool when the user shares an Excel or spreadsheet file and wants to read, analyse, query, or transform the data. Triggers: 'analyse this Excel file', 'read this spreadsheet', 'parse this .xlsx', 'what's in this workbook'. Accepts base64-encoded .xlsx, .xls, .ods, or .csv (filename required for format detection). Returns all sheets as JSON arrays of objects, with column headers as keys.

Parameters (JSON Schema)
- filename (required): Filename with extension (e.g., 'data.xlsx') — required for format detection.
- fileBase64 (required): The Excel or CSV file contents encoded as a base64 string.
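Extension-based format detection is why both parameters are required. A partial sketch: only the CSV branch is implemented with the stdlib, since real .xlsx/.xls/.ods parsing needs a spreadsheet library such as openpyxl. The function name, the "Sheet1" naming, and the return shape are assumptions:

```python
import base64
import csv
import io
import os

def spreadsheet_to_json(filename: str, file_base64: str) -> dict:
    """Decode a base64 payload and parse it by file extension (CSV only here)."""
    raw = base64.b64decode(file_base64)
    ext = os.path.splitext(filename)[1].lower()
    if ext == ".csv":
        rows = list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))
        return {"sheets": {"Sheet1": rows}, "rowCount": len(rows)}
    raise NotImplementedError(f"parsing {ext} requires a spreadsheet library")
```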
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the output structure ('map of sheet name to row array, plus total row count') and the parsing logic. However, it omits whether the operation is read-only (implied by 'parse' but not stated), error handling behavior, or memory/performance characteristics for large files.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two dense, information-rich sentences with zero redundancy. It front-loads the supported formats and core action, then efficiently describes the output structure and data transformation rules. Every clause serves a distinct purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by detailing the return value structure ('map of sheet name to row array'). With 100% input schema coverage and clear parsing behavior described, it covers the essential complexity of a multi-format file parser, though it could benefit from mentioning error conditions or size constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both filename and fileBase64 well-documented in the schema itself. The description aligns with the schema (mentioning base64 encoding and format detection) but does not add additional semantic context beyond what the structured schema already provides, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the core action ('Parse an Excel file'), lists supported formats (.xlsx, .xls, .csv, .ods), and distinguishes from sibling csv_to_json by emphasizing Excel-specific handling and multi-sheet capability ('return all sheets'). It also clarifies the data transformation logic ('Column headers from the first row become object keys').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is the correct tool for Excel workbooks with multiple sheets (vs. the csv_to_json sibling), it provides no explicit guidance on when to choose this over csv_to_json for CSV files, nor does it mention prerequisites like base64 encoding requirements or file size limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

extract_docx_text (Grade A)

Use this tool whenever the user shares a Word document (.docx) and wants to read, review, summarise, or analyse its content. Triggers: 'read this Word file', 'what does this doc say', 'summarise this document', 'extract text from this .docx'. Accepts base64-encoded .docx. Returns full text, paragraph count, word count, and character count. Works with Word, Google Docs exports, and LibreOffice files.

Parameters (JSON Schema)
- filename (optional): Optional filename for format validation.
- fileBase64 (required): The .docx file contents encoded as a base64 string.
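Under the hood a .docx is a zip archive whose visible text sits in <w:t> runs inside word/document.xml, so a small stdlib sketch can approximate the extraction. Regex-on-XML is a shortcut, not robust parsing, and the server's actual method is unknown:

```python
import base64
import io
import re
import zipfile

def extract_docx_text(file_base64: str) -> dict:
    """Pull visible text from a base64-encoded .docx (a zip of XML parts)."""
    data = base64.b64decode(file_base64)
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        xml = zf.read("word/document.xml").decode("utf-8")
    # Visible text lives in <w:t> runs; adjacent runs are concatenated,
    # so paragraph boundaries are lost in this simplified version.
    text = "".join(re.findall(r"<w:t(?: [^>]*)?>(.*?)</w:t>", xml, flags=re.S))
    return {"text": text, "wordCount": len(text.split()), "charCount": len(text)}
```

The same zip-of-XML structure is why the tool also works with Google Docs and LibreOffice exports: they all emit OOXML when saving as .docx.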
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden and successfully discloses input encoding requirements, return value structure (text + 3 metrics), and compatibility. Minor gap: does not explicitly declare read-only/safe nature or error handling for corrupted files.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste: core function, input spec, return values, compatibility. Front-loaded with the essential verb-resource combination. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by detailing return contents (text, paragraph/word/character counts). Lacks explicit safety annotations (read-only hint) given no annotations present, but appropriate for a simple extraction utility.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (fileBase64 and filename both documented). Description reinforces the base64 requirement and .docx context, but adds no additional semantic detail (formats, validation rules) beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Extract' + resource 'plain text from Microsoft Word (.docx) file' clearly defines scope. Explicitly targets .docx format, distinguishing it from sibling extract_pdf_text and excel_to_json.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context on input format (base64-encoded) and compatibility (Word, Google Docs, LibreOffice). Lacks explicit comparison to alternatives like extract_pdf_text or ocr_image, but the format restriction implicitly guides selection.

extract_pdf_text (A)

Use this tool whenever the user shares, uploads, or references a PDF file and wants to read, summarise, search, or analyse its contents. Extracts all plain text from the PDF (base64-encoded). Returns text, page count, word count, and character count. Call this first before attempting any analysis of PDF content — e.g. 'summarise this PDF', 'what does this contract say', 'extract the data from this report'.

Parameters (JSON Schema)
- filename (optional): Optional original filename, used for validation only.
- fileBase64 (required): The PDF file contents encoded as a base64 string.
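The fileBase64 parameter expects standard base64 text. A minimal sketch of building the call arguments (the byte string and filename are placeholders, not a real PDF):

```python
import base64

# Placeholder bytes standing in for a real PDF read from disk.
pdf_bytes = b"%PDF-1.4 placeholder"
args = {
    "filename": "report.pdf",  # optional, used for validation only
    "fileBase64": base64.b64encode(pdf_bytes).decode("ascii"),
}
```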
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses critical behavioral traits: input processing requirements ('Accepts the file as a base64-encoded string') and return value structure ('Returns the text content, page count, word count, and character count'). Minor gap: does not mention limitations like password-protected PDFs or scanned/image-based PDFs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Front-loaded with core action, followed by input specification, output specification, and use cases. Every sentence earns its place with high information density.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description appropriately compensates by detailing the return values (text content plus three metrics). Sufficient for a 2-parameter extraction tool, though it could be strengthened by noting error conditions or distinguishing text-based vs. scanned PDF handling.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description confirms the base64 input requirement but does not add semantic nuance beyond the schema (e.g., validation rules for the filename parameter or file size constraints).

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Extract') and resource ('plain text from a PDF file'), clearly distinguishing it from sibling tools like extract_docx_text and ocr_image. The scope ('all plain text') is precisely defined.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context ('Useful for reading documents, research papers, invoices'), establishing when to use the tool. However, it lacks explicit exclusions or comparisons to alternatives (e.g., it does not clarify when to use ocr_image for scanned PDFs instead).

extract_structured_data (A)

Use this tool to extract structured JSON data from any unstructured text — emails, reports, web pages, PDFs, meeting notes, etc. Triggers: 'extract the data from this', 'pull the fields out of this text', 'parse this into structured format', 'get me a JSON from this', 'extract names/dates/amounts from this'. Describe the structure you want in plain English (e.g. 'extract: company name, CEO, founding year, revenue'). Returns valid JSON matching your description.

Parameters (JSON Schema)
- text (required): The unstructured text to extract data from.
- format (optional): Output format: 'json' for a single object or array (default), 'jsonl' for one JSON object per line.
- schema (required): Plain-English description of what to extract, e.g. 'Extract: person name, email, phone number, company'. Or provide a JSON Schema as a string.
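A hypothetical argument payload in the plain-English schema style the table describes (the sample text is invented):

```python
# Hypothetical call arguments for extract_structured_data.
args = {
    "text": "Acme Corp was founded in 1999 by Jane Doe, who is now CEO.",
    "schema": "Extract: company name, CEO, founding year",
    "format": "json",  # default; 'jsonl' would emit one object per line
}
```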
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It successfully discloses output behavior ('Returns valid JSON'), but omits other critical behavioral traits: error handling (what happens if extraction fails), idempotency/safety (whether calling it multiple times is safe), and any rate limits or processing constraints for large inputs.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose declaration, triggers (front-loaded), usage instruction with example, and return value. The trigger list is lengthy but earns its place by preventing misuse. Only minor verbosity in listing input types that could be implied by 'any unstructured text'.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex NLP extraction tool with 3 parameters and no output schema, the description adequately covers the return type ('valid JSON') and provides sufficient guidance on the critical schema parameter. It could be improved by mentioning error scenarios or size limits, but it meets the threshold for completeness given the schema's thoroughness.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% coverage (baseline 3), the description adds significant value through concrete examples: it illustrates the schema parameter with 'extract: company name, CEO, founding year' and clarifies the text parameter applies to 'emails, reports, web pages, PDFs, meeting notes'. This helps the agent understand the flexible plain-English nature of the schema input.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool 'extract[s] structured JSON data from any unstructured text' and lists specific input types (emails, PDFs, meeting notes). It clearly distinguishes from siblings like extract_pdf_text (which only extracts raw text) and csv_to_json (format conversion) by emphasizing the AI-powered extraction/structuring capability.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Triggers' section listing specific user phrases ('extract the data from this', 'pull the fields out') that signal when to use this tool. This offers clear positive guidance for selection. However, it lacks explicit 'when not to use' guidance or named alternatives from the sibling list (e.g., when simple regex with run_regex might suffice instead).

generate_pdf_from_text (A)

Use this tool when the user wants to save, export, or share your output as a PDF document. Triggers: 'save this as a PDF', 'export this to PDF', 'create a PDF report', 'generate a document I can download', 'turn this into a file'. Supports # headings, ## subheadings, - bullet lists, and plain paragraphs. Returns a base64-encoded PDF. Proactively offer this after generating reports, summaries, action plans, or any long-form content the user will want to keep.

Parameters (JSON Schema)
- title (optional): Optional document title shown at the top of the PDF.
- author (optional): Optional author name added to PDF metadata.
- content (required): The text content to render into a PDF. Supports # headings, ## subheadings, - bullet points, and plain paragraphs.
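A sketch of a content payload using the lightweight markup the schema lists (# headings, ## subheadings, - bullets, plain paragraphs); the report text is invented:

```python
content = "\n".join([
    "# Quarterly Report",        # heading
    "## Highlights",             # subheading
    "- Revenue up 12%",          # bullet
    "- Churn down 3%",
    "Plain closing paragraph.",  # plain paragraph
])
args = {"title": "Q3 Report", "author": "Example Author", "content": content}
```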
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully reveals the output format ('Returns the PDF as a base64-encoded string') and formatting capabilities (headings, bullets, paragraphs). Could mention size limits or error conditions, but covers the essential behavioral traits for invocation.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) core function, (2) formatting capabilities, (3) output format, (4) use case. Every sentence earns its place. Information is front-loaded with the essential action before details.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity and lack of output schema, the description is complete. It explains the return value (base64 string), documents all formatting capabilities, and provides clear sibling differentiation. No gaps remain that would prevent correct invocation.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds meaningful context beyond the schema: it clarifies that title is 'shown at the top of the PDF' and reveals that author is 'added to PDF metadata' (technical detail absent from schema). It also elaborates on content formatting syntax ('lines starting with #').

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Generate') and clearly identifies the resource (PDF document) and input type (plain text/markdown). It effectively distinguishes from siblings like extract_pdf_text (PDF→text) and merge_pdfs (combining existing PDFs) by emphasizing creation from text content.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear when-to-use context ('Ideal for packaging Claude's output — reports, summaries, plans'), helping the agent identify appropriate scenarios. Lacks explicit when-not-to-use or named alternatives (e.g., not mentioning when to use merge_pdfs instead), but the use case guidance is strong enough for selection.

generate_qr_code (A)

Use this tool whenever the user asks for a QR code or wants a URL/text to be scannable. Triggers: 'make a QR code for this link', 'create a scannable code', 'generate a QR for my website'. Accepts any text or URL (max 2953 chars). Returns a base64-encoded PNG image. Display the image inline after generating it.

Parameters (JSON Schema)
- size (optional): Image width/height in pixels (64–2048, default 400).
- text (required): The text or URL to encode in the QR code. Max 2953 characters.
- errorCorrectionLevel (optional): Error correction level: L (7%), M (15%, default), Q (25%), H (30%).
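A hypothetical argument set that respects the documented constraints (size 64–2048, text up to 2953 characters):

```python
args = {
    "text": "https://example.com",
    "size": 400,                  # the documented default
    "errorCorrectionLevel": "M",  # 15% recovery, the documented default
}
# Sanity checks matching the schema's stated limits.
assert len(args["text"]) <= 2953
assert 64 <= args["size"] <= 2048
```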
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses the critical behavioral trait of output format ('Returns a base64-encoded PNG'). It also notes configurability options. Does not disclose side effects, idempotency, or rate limits, but covers the essential output contract.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero redundancy: action definition, return format/features, and use cases. Every sentence earns its place with high information density and appropriate front-loading of the core function.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool of low complexity (3 simple parameters, no nested objects) and no output schema, the description is complete. It compensates for the missing output schema by explicitly stating the return format (base64 PNG) and adequately covers the tool's scope without needing to explain return values further.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description acknowledges 'custom size and error correction level' but does not add semantic information beyond what the schema already provides (pixel ranges, error correction percentages, character limits).

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Generate a QR code image' and clarifies the input types ('any text or URL'). It clearly distinguishes this tool from siblings like create_shareable_paste or generate_pdf_from_text by specifying the QR code format and scannable output.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context through concrete use cases ('shareable links, contact cards, payment addresses'), helping the agent understand when to invoke the tool. Lacks explicit exclusions or direct comparison to siblings, but the use cases provide sufficient implicit guidance.

get_arc_trading_signal (A)

Fetch a live Solana DEX divergence trading signal from Soliris Arc — the agent-to-agent data market built on Arc (Circle's L1 blockchain). Each signal costs $0.001 USDC paid automatically on-chain via the x402 protocol. Signals identify real-time arbitrage spreads across Raydium, Orca, Jupiter, and Meteora. This is the agentic economy in action: your AI pays another AI for data, settled in under 1 second, no humans in the loop. Use demo=true to get a sample signal without payment. For live signals the API returns a 402 with payment details. Powered by Soliris (soliris.pro).

Parameters (JSON Schema)
- demo (optional): If true, returns a sample signal without requiring payment. Use this to explore the signal format before integrating payments.
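The demo-versus-live split described above can be sketched as a small response handler. Only the 402 status and the $0.001 USDC price come from the description; everything else is illustrative:

```python
def next_step(status_code: int) -> str:
    # Per the description: live calls answer 402 with x402 payment
    # details; demo=true (or a paid call) returns the signal directly.
    if status_code == 402:
        return "settle $0.001 USDC via x402, then retry"
    return "parse signal"
```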
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: payment requirements ('costs $0.001 USDC paid automatically'), technical details ('settled in under 1 second'), and response handling ('API returns a 402 with payment details'). However, it lacks information about rate limits, authentication needs, or error scenarios beyond the 402 response.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. Each sentence adds value: explaining the payment mechanism, data sources, agentic economy context, and parameter usage. Minor verbosity in promotional phrasing ('the agentic economy in action') slightly reduces efficiency.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial data with payment integration), no annotations, and no output schema, the description does well to cover purpose, usage, payment, and parameter semantics. However, it lacks details on output format, error handling beyond 402, and integration prerequisites, leaving some gaps for a fully informed agent.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds meaningful context for the single parameter: 'Use demo=true to get a sample signal without payment' explains the practical implication of the demo parameter beyond the schema's technical description. This elevates the score above baseline.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch a live Solana DEX divergence trading signal from Soliris Arc.' It specifies the verb ('fetch'), resource ('trading signal'), source ('Soliris Arc'), and scope ('real-time arbitrage spreads across Raydium, Orca, Jupiter, and Meteora'), distinguishing it from unrelated sibling tools like text processing or data conversion utilities.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use demo=true to get a sample signal without payment. For live signals the API returns a 402 with payment details.' It clearly distinguishes between demo and live modes, including payment requirements and error handling, offering complete when-to-use instructions.

hash_text (A)

Use this tool to generate a cryptographic hash of any text or data string. Triggers: 'hash this string', 'get the SHA256 of this', 'create a checksum', 'fingerprint this content', 'verify the integrity'. Supports MD5, SHA-1, SHA-256, SHA-512. Returns hex-encoded hash and the algorithm used. Use SHA-256 or SHA-512 for security-sensitive applications.

Parameters (JSON Schema)
- text (required): The text or data to hash.
- encoding (optional): Output encoding: 'hex' (default, lowercase hexadecimal) or 'base64'.
- algorithm (optional): Hash algorithm (default: sha256). Use sha256 or sha512 for security. MD5/SHA-1 are fast but cryptographically weak.
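The defaults (sha256, lowercase hex) match Python's hashlib, which makes the output easy to verify locally:

```python
import hashlib

# SHA-256 of the UTF-8 bytes, hex-encoded in lowercase: the same
# contract the tool's defaults describe.
digest = hashlib.sha256("hello".encode("utf-8")).hexdigest()
# digest == "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
```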
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses output format ('Returns hex-encoded hash and the algorithm used') and critical security characteristics (MD5/SHA-1 are 'cryptographically weak'). Could explicitly state idempotency or lack of side effects, but covers the essential behavioral traits for a stateless utility.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose first, followed by triggers, capabilities, output, and security guidance. Trigger list is long but valuable for LLM selection. No redundant sentences, though 'Returns hex-encoded hash' slightly underrepresents the base64 encoding option available in schema.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple 3-parameter tool with 100% schema coverage. Compensates for missing output schema by describing return values ('hex-encoded hash and the algorithm used'). Security guidance completes the contextual picture for a cryptographic tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description reinforces algorithm selection guidance but largely repeats schema content. Adds no new syntax details or parameter relationships beyond what schema descriptions already provide.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb+resource ('generate a cryptographic hash of any text or data string') and lists supported algorithms. Distinct from all siblings (text processors, converters, etc.) which handle text manipulation rather than cryptography.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger phrases ('hash this string', 'create a checksum') and clear algorithm selection guidance ('Use SHA-256 or SHA-512 for security-sensitive applications'). Lacks explicit 'when not to use' or named sibling alternatives, but security guidance effectively constrains usage.

html_to_markdown (A)

Use this tool to convert raw HTML into clean, readable Markdown. Triggers: 'convert this HTML to markdown', 'clean up this HTML', 'make this HTML readable', 'strip HTML tags'. Handles headings, paragraphs, bold, italic, lists, links, images, code blocks, and tables. Returns clean Markdown and character count. Useful after web scraping or when processing HTML content for an LLM.

Parameters (JSON Schema)
- html (required): The HTML string to convert.
- includeLinks (optional): Whether to preserve hyperlinks as [text](url) in the output (default: true).
- includeImages (optional): Whether to include image alt text as ![alt](src) (default: false — images often clutter output).
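A toy illustration (not the tool's implementation) of the link and image syntax those two flags control:

```python
import re

def links_to_md(html: str) -> str:
    # <img alt="..." src="..."> -> ![alt](src), as when includeImages=true
    html = re.sub(r'<img alt="([^"]*)" src="([^"]*)"\s*/?>', r'![\1](\2)', html)
    # <a href="...">text</a> -> [text](url), as when includeLinks=true
    return re.sub(r'<a href="([^"]+)">([^<]+)</a>', r'[\2](\1)', html)
```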
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the output format ('Returns clean Markdown and character count') and transformation logic (specific elements handled, default behaviors for images vs links). Does not explicitly state safety/idempotency, but 'Returns' implies read-only transformation.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five well-structured sentences progress logically from purpose → triggers → capabilities → output → use case. The trigger list is slightly verbose but serves as valid intent-matching guidance for an LLM. No wasted words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter conversion tool with no output schema, the description adequately explains the return value ('clean Markdown and character count') and covers all behavioral aspects needed for invocation. Sufficiently complete given the tool's limited complexity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by specifying the Markdown syntax used for links ([text](url)) and images (![alt](src)), which exceeds the generic boolean descriptions in the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb and resource ('convert raw HTML into clean, readable Markdown') and distinguishes from sibling tool markdown_to_html by stating the direction (HTML→Markdown). It also differentiates from simple tag stripping by emphasizing 'readable' Markdown formatting.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit trigger phrases and contextual usage ('Useful after web scraping or when processing HTML content for an LLM'). However, it lacks explicit exclusion criteria or direct references to alternatives like markdown_to_html for reverse operations.

json_to_csv (A)

Use this tool when the user has JSON data (an array of objects) and wants it as a spreadsheet, CSV export, or downloadable table. Triggers: 'export this to CSV', 'convert this JSON to a spreadsheet', 'I need this as a table'. Infers column headers from object keys. Returns a properly escaped CSV string.

Parameters (JSON Schema)
- data (required): Array of objects to convert to CSV rows.
- delimiter (optional): Column delimiter character (default: ',').
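The header inference and escaping can be reproduced locally with Python's csv module; a rough analogue under the same rules the scorecard below notes (keys become headers, commas and quotes are escaped, CRLF line endings):

```python
import csv
import io

data = [{"name": "Widget, large", "price": 9.99}]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(data[0].keys()))  # keys -> headers
writer.writeheader()
writer.writerows(data)  # the comma in "Widget, large" forces quoting
out = buf.getvalue()
# out == 'name,price\r\n"Widget, large",9.99\r\n'
```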
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses key behavioral traits: column name inference, escaping logic for special characters/quotes/commas, and CRLF line ending format. Does not mention error handling for empty arrays or invalid inputs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each earning its place: core function, column inference, escaping behavior, and output format. Front-loaded with the primary action, no redundant or wasteful text.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion utility with 100% schema coverage and no output schema, the description is complete. It compensates for missing output schema by specifying return format details (CSV string with CRLF endings).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline. Description mentions 'JSON array of objects' which aligns with the 'data' parameter, but does not add semantic detail about the 'delimiter' parameter beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the conversion direction ('JSON array of objects into a CSV string') and implicitly distinguishes from sibling 'csv_to_csv' by specifying the input format. Specific verb and resource identified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context about the conversion direction (JSON→CSV), but does not explicitly state when to use this versus sibling 'csv_to_json' or other format conversion tools. Usage is implied rather than guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_memories (A)

Use this tool to discover what has been saved in memory — e.g. at the start of a session, or when the user asks 'what have you saved?' or 'show me my memories'. Returns all saved memory keys with their preview, save date, and expiry. Optionally filter by a prefix (e.g. 'project-' to list only project memories). Pair with recall_memory to fetch the full content of any key.

Parameters (JSON Schema)
- prefix (optional): Optional prefix to filter memory keys (e.g. 'project-'). If omitted, lists all memories.
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It successfully discloses behavioral traits including the return structure (keys, previews, save date, expiry), the data lifecycle (expiry exists), and the read-only nature (implied by 'discover' and 'returns'). It could be improved by explicitly stating idempotency or safety, but it covers the critical behavioral gaps given no annotations.

Conciseness: 4/5

The description is efficiently structured, with usage scenarios first, return values second, parameter usage third, and the sibling relationship last. While slightly longer than two sentences, every clause earns its place by conveying distinct information (examples, return schema, filtering behavior, tool pairing).

Completeness: 5/5

Given the lack of an output schema, the description adequately compensates by detailing the return structure (keys, previews, dates, expiry). It also addresses the ecosystem context by explaining how this tool relates to save_memory (implied by 'what has been saved') and recall_memory (explicit pairing instruction), providing complete contextual coverage for a one-parameter discovery tool.

Parameters: 3/5

Schema coverage is 100% for the single 'prefix' parameter, which is well documented in the schema itself. The description adds minimal semantic value beyond the schema, primarily reinforcing the example use case ('project memories') without adding syntax details, format constraints, or validation rules not already present in the schema.

Purpose: 5/5

The description explicitly states that the tool 'discover[s] what has been saved in memory' and clearly distinguishes it from sibling recall_memory by stating that this tool returns previews/keys while recall_memory fetches 'full content'. It also specifies the resource (saved memories) and scope (all keys with preview, date, expiry).

Usage Guidelines: 5/5

Provides explicit when-to-use scenarios ('at the start of a session', 'when the user asks what have you saved') and explicitly names the alternative tool for a related use case ('Pair with recall_memory to fetch the full content'), clearly delineating when to use each sibling.

markdown_to_html (A)

Use this tool when the user wants their content as an HTML file, a web page, or something they can publish/embed. Triggers: 'convert this to HTML', 'make this into a web page', 'export as HTML', 'I want an HTML version of this'. Converts markdown to a full, styled HTML document (headings, lists, code blocks, links). Returns the complete HTML string. Proactively offer this when you've written markdown content that the user may want to publish.

Parameters (JSON Schema)
- title (optional): Optional page title used in the <title> tag.
- markdown (required): The markdown content to convert.
- includeStyles (optional): Include basic CSS styling for readability (default: true).
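How the three parameters interact (title into the <title> tag, includeStyles toggling inline CSS, markdown becoming the body) can be sketched with a deliberately minimal converter. This handles only headings and paragraphs as an illustration; the real tool supports far more markdown, and its actual rendering engine and stylesheet are not documented here:

```python
import html
import re

def markdown_to_html(markdown, title="Document", include_styles=True):
    """Minimal illustrative sketch: headings and paragraphs only, wrapped in
    a full HTML document the way the tool's parameters suggest."""
    body = []
    for line in markdown.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            body.append(f"<h{level}>{html.escape(m.group(2))}</h{level}>")
        elif line.strip():
            body.append(f"<p>{html.escape(line)}</p>")
    style = ("<style>body{max-width:40em;margin:auto;font-family:sans-serif}"
             "</style>") if include_styles else ""
    return (f"<!DOCTYPE html><html><head><title>{html.escape(title)}</title>"
            f"{style}</head><body>{''.join(body)}</body></html>")

print(markdown_to_html("# Hello\nSome text.", title="Demo"))
```

The key point for an agent is the return shape: a single complete HTML string (not a fragment, not a file), which matches the description's 'Returns the complete HTML string'.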
Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. It successfully indicates that the output is a 'full HTML page' (distinguishing it from fragments) and implies styling behavior via 'clean' and the includeStyles parameter reference. However, it omits details about the return format (string vs. file), error handling, and idempotency that would be helpful for a conversion utility.

Conciseness: 5/5

Four well-structured sentences with zero waste: sentence 1 states the purpose, sentence 2 lists capabilities, sentence 3 describes output characteristics, and sentence 4 provides usage context. Information is front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Without an output schema, the description compensates adequately by stating the tool 'Returns a full HTML page ready to save, share, or embed'. All three parameters are well documented in the schema. For a stateless conversion utility, the description is sufficiently complete, though it could briefly mention error handling or output size limits.

Parameters: 3/5

Schema coverage is 100%, establishing a baseline of 3. The description adds value by enumerating supported markdown syntax elements (headings, bold, code blocks, etc.), which elaborates on what the 'markdown' parameter can contain. It does not significantly expand on the 'title' or 'includeStyles' parameters beyond what the schema already clearly documents.

Purpose: 4/5

The description clearly states that the tool converts markdown to HTML, using specific verbs ('Convert') and resources ('markdown text', 'HTML document'). It lists supported markdown elements and characterizes the output as 'clean' and 'complete'. However, it lacks explicit differentiation from sibling tools like generate_pdf_from_text or create_shareable_paste that could also format content for sharing.

Usage Guidelines: 3/5

Provides positive context with 'Great for turning Claude's markdown output into publishable web content', giving the agent a clear use case. However, it lacks explicit 'when not to use' guidance and named alternatives, leaving ambiguity about when to choose this over siblings like generate_pdf_from_text for document distribution.

merge_pdfs (A)

Use this tool when the user provides two or more PDF files and wants them combined into one. Triggers: 'merge these PDFs', 'combine these documents', 'join these files into one PDF'. Accepts 2–20 base64-encoded PDFs in order. Returns the merged PDF as a base64 string.

Parameters (JSON Schema)
- files (required): Array of PDF files to merge, in order. Each item must have a 'base64' field.
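The argument shape (2–20 items, each an object with a 'base64' field, in merge order) can be built from raw PDF bytes as follows. The helper name `build_merge_request` is hypothetical, purely to illustrate the documented payload:

```python
import base64

def build_merge_request(pdf_bytes_list):
    """Build the merge_pdfs argument shape from raw PDF bytes: 2-20 files,
    each as an object with a 'base64' field, in merge order."""
    if not 2 <= len(pdf_bytes_list) <= 20:
        raise ValueError("merge_pdfs accepts between 2 and 20 files")
    return {"files": [{"base64": base64.b64encode(b).decode("ascii")}
                      for b in pdf_bytes_list]}

req = build_merge_request([b"%PDF-1.4 first", b"%PDF-1.4 second"])
print(len(req["files"]))  # 2
```

The result returned by the tool is likewise a base64 string, so the merged document must be base64-decoded before it is written to disk.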
Behavior: 4/5

With no annotations provided, the description carries the full disclosure burden. It successfully documents the input encoding (base64), output format (base64 PDF), and cardinality limits (2-20). It could improve by mentioning error handling for invalid PDFs or memory constraints.

Conciseness: 5/5

Four sentences with zero waste: action/scope, input format, output format, and use cases. Information is front-loaded with the core operation. No redundant or filler text.

Completeness: 4/5

Given no output schema, the description adequately specifies the return format. With 100% input schema coverage and no annotations, it covers the essential behavioral contract. A minor gap regarding error conditions and validation behavior prevents a 5.

Parameters: 3/5

Schema coverage is 100%, documenting the files array structure, the base64 requirement, and the optional filename. The description reinforces the base64 encoding but adds minimal semantic detail beyond the schema (baseline 3 for high coverage).

Purpose: 5/5

States a specific action (merge), a clear resource (PDF files), and precise scope constraints (2 to 20 files into a single document). Distinct from siblings like extract_pdf_text or generate_pdf_from_text through the merge/combine semantics.

Usage Guidelines: 4/5

Provides concrete usage examples (combining reports, invoices, chapters) establishing ideal workflows. Lacks explicit 'when not to use' guidance and named alternatives, though the scope naturally differentiates it from text extraction and generation tools.

ocr_image (A)

Use this tool when the user shares an image that contains text they need extracted, read, or processed. Triggers: 'read the text in this image', 'extract text from this screenshot', 'what does this scanned page say', 'transcribe this handwritten note'. Accepts base64-encoded PNG/JPEG/WEBP/BMP/TIFF. Returns extracted text, confidence score, and word count. Prefer this over vision model text extraction for accuracy on scanned docs.

Parameters (JSON Schema)
- filename (optional): Optional filename with extension (e.g., 'scan.png') to help with format detection.
- imageBase64 (required): Image file contents as a base64 string. Supported: PNG, JPEG, WEBP, BMP, TIFF.
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the return structure (text, confidence score, word count), which is critical without an output schema. Missing: error behavior (no text found), rate limits, and idempotency characteristics.

Conciseness: 5/5

Four sentences, zero waste. Front-loaded with the core function, followed by the input format, return values, and use cases. Every sentence earns its place.

Completeness: 4/5

Strong coverage for a two-parameter tool without annotations or an output schema. The return-value disclosure compensates for the missing output schema. Minor gap: no mention of failure modes (e.g., unsupported image formats, unreadable text).

Parameters: 3/5

Schema coverage is 100%, establishing a baseline of 3. The description reinforces the base64 encoding requirement but adds no semantic details beyond the schema (e.g., typical file-size limits, or when the filename parameter is essential versus optional).

Purpose: 5/5

A clear, specific verb ('Extract') plus resource ('text from an image') plus method ('OCR'). Explicitly distinguishes the tool from vision models and document-extraction siblings via use-case examples (scanned documents, screenshots, handwritten notes).

Usage Guidelines: 4/5

Provides clear context for when to use the tool ('without a vision model') and applicable scenarios. Lacks explicit 'when not to use' guidance versus siblings like extract_pdf_text when a PDF is image-based, but effectively differentiates the tool from vision-based approaches.

private_execute_tool (A)

Execute any Toolora privacy-sensitive tool with a MagicBlock Private Ephemeral Rollup payment proof. Use this when an agent or user needs to run a tool privately — no identity exposure, no input logging, payments untraceable on-chain. Each call costs 0.01 USDC paid via MagicBlock PER. PAYMENT FLOW: (1) POST https://payments.magicblock.app/v1/spl/transfer with {from, to: '59wUbJWMiBK737srMxPjtKFJDrcuh28Uezj9xjtMimQF', amount: 10000, cluster, mint} → get unsigned tx → sign with wallet → submit → get txSignature. Then call this tool with that signature. AVAILABLE TOOLS: word-counter (word/char stats), text-case (UPPER/lower/camel/snake), json-formatter (format+validate JSON), base64 (encode/decode), jwt-decoder (decode JWT claims), html-to-markdown, text-chunker (RAG prep), csv-to-json, url-encoder, regex-tester, hash-generator.

Parameters (JSON Schema)
- tool (required): The private tool to run.
- input (required): The text input to process privately. NOT logged server-side.
- payer (required): Solana public key (base58) of the wallet that signed the payment.
- cluster (optional): Solana cluster (default: devnet).
- txSignature (required): MagicBlock PER transaction signature proving a 0.01 USDC private payment to the Toolora vault. Use 'demo_test' for testing without a real payment.
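The two payload shapes in the documented flow can be sketched as plain dictionaries. The vault address and the 10000 base-unit amount (0.01 USDC at 6 decimals) come from the description above; the payer and mint values shown are placeholders, and the helper names are hypothetical:

```python
VAULT = "59wUbJWMiBK737srMxPjtKFJDrcuh28Uezj9xjtMimQF"
AMOUNT = 10_000  # 0.01 USDC in base units (USDC has 6 decimals)

def build_payment_request(payer, mint, cluster="devnet"):
    """Step 1 of the documented flow: the JSON body POSTed to
    https://payments.magicblock.app/v1/spl/transfer. The response is an
    unsigned transaction to sign and submit; the resulting txSignature is
    then passed to private_execute_tool."""
    return {"from": payer, "to": VAULT, "amount": AMOUNT,
            "cluster": cluster, "mint": mint}

def build_tool_call(tool, text, payer, tx_signature, cluster="devnet"):
    """Step 2: arguments for private_execute_tool once payment is proven.
    'demo_test' is accepted as txSignature for testing without paying."""
    return {"tool": tool, "input": text, "payer": payer,
            "cluster": cluster, "txSignature": tx_signature}

call = build_tool_call("word-counter", "hello world", "ExamplePayerKey", "demo_test")
print(call["txSignature"])  # demo_test
```

Splitting the flow into two explicit steps mirrors the description: the payment proof must exist before the tool call, and only the signature (not the wallet) is sent with the tool arguments.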
Behavior: 5/5

No annotations are provided, yet the description comprehensively covers: cost (0.01 USDC), privacy guarantees (inputs NOT logged, untraceable payments), the authentication mechanism (txSignature verification), a testing mode (demo_test), and side effects. Rich behavioral disclosure for a financial/privacy-critical tool.

Conciseness: 3/5

Contains necessary but verbose procedural documentation (the full HTTP endpoint URL, JSON payload structure, and four-step payment flow). The AVAILABLE TOOLS list repeats the enum values but adds descriptions. Information is front-loaded with the purpose first, yet the overall length exceeds ideal conciseness for an AI context window.

Completeness: 3/5

Comprehensive on payment requirements and privacy contracts, but lacks an output description (critical since no output schema exists). It does not specify what the tool returns (presumably the computation result of the sub-tool), which is necessary for an execution-wrapper tool.

Parameters: 4/5

Schema coverage is 100% (baseline 3). The description adds substantial semantic value: it explains that txSignature accepts 'demo_test' for testing, clarifies that payer requires Solana base58 format, emphasizes that input is not server-logged, and annotates each tool option with its functionality (e.g., 'word/char stats', 'RAG prep').

Purpose: 5/5

States a specific action (execute privacy-sensitive tools) and identifies the distinct resource (Toolora tools with MagicBlock PER). Clearly distinguishes itself from siblings by emphasizing privacy requirements ('no identity exposure, no input logging') and the payment authentication that non-private alternatives lack.

Usage Guidelines: 4/5

Explicitly states when to use the tool ('Use this when an agent or user needs to run a tool privately') and includes detailed payment-flow instructions enabling correct invocation. Lacks explicit 'when not to use' guidance contrasting it with non-private sibling tools, preventing a perfect score.

read_url (A)

Use this tool whenever a URL appears in the conversation and the user wants to read, summarise, quote from, or process the page content. Triggers: 'read this article', 'summarise this page', 'what does this link say', 'fetch this URL'. Uses Readability to return clean text, title, author, and excerpt. If the result is empty or incomplete, fall back to scrape_url_js for JS-rendered pages.

Parameters (JSON Schema)
- url (required): The full URL to fetch (must be http:// or https://).
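The fallback the description recommends (try read_url first, switch to scrape_url_js when Readability returns nothing useful) can be sketched as a small wrapper. The two callables here are stand-ins for the actual MCP tool calls, and the `min_chars` threshold is an assumed heuristic, not something the server specifies:

```python
def fetch_page_text(url, read_url, scrape_url_js, min_chars=200):
    """Documented fallback pattern: use Readability extraction first, and
    fall back to the JS scraper when the result is empty or suspiciously
    short (a sign of a JS-rendered page)."""
    result = read_url(url)
    text = (result or {}).get("text", "")
    if len(text) >= min_chars:
        return text
    return scrape_url_js(url).get("text", "")

# Stubs standing in for the MCP tools:
thin = lambda url: {"text": ""}                      # Readability found nothing
js = lambda url: {"text": "rendered content " * 20}  # JS scraper succeeded
print(fetch_page_text("https://example.com", thin, js)[:16])
```

Trying read_url first keeps the common case cheap; the heavier JS scrape only runs when the clean extraction fails.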
Behavior: 4/5

With no annotations provided, the description carries the full burden and successfully discloses the extraction method (Mozilla Readability), content filtering (strips navigation, ads, boilerplate), output format (plain text plus metadata), and scope constraint (public pages). One point deducted for not addressing error handling or behavior on non-HTML content.

Conciseness: 5/5

Three efficient clauses with zero waste: the first establishes the core function and return values, the second explains implementation/behavioral traits, and the third provides usage context. Information is front-loaded and appropriately sized for a single-parameter tool.

Completeness: 4/5

Given the lack of an output schema, the description adequately documents the return values (plain text, title, author, excerpt) and behavioral traits. For a simple, single-parameter, read-only tool this is sufficient, though explicit error-condition documentation would achieve a 5.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value beyond the schema, though it reinforces that the URL must be 'public' and fetchable, which slightly augments the schema's technical URL-format requirement.

Purpose: 5/5

The description uses specific verbs ('Fetch', 'return') and resources ('public web page'), clarifies the outputs ('plain text', 'title', 'author', 'excerpt'), and distinguishes itself from sibling scrape_url_js by emphasizing Mozilla Readability extraction that strips navigation/ads versus full scraping.

Usage Guidelines: 4/5

Provides clear use cases ('reading articles, documentation') and implies the tool is for content consumption 'without a browser'. However, it does not explicitly name scrape_url_js as the alternative for full-HTML/JS-heavy pages, relying on the agent to infer this from the Readability detail.

recall_memory (A)

Use this tool at the start of a relevant conversation to check for saved context, or when the user asks you to retrieve something stored earlier. Triggers: 'recall my project notes', 'what did we save last time?', 'look up my preferences', 'fetch the notes you stored'. Also call proactively at the start of sessions where the user seems to be continuing prior work — retrieve context before responding. Pass the same key used with save_memory. Returns stored content, save date, and expiry date.

Parameters (JSON Schema)
- key (required): The key you used when calling save_memory.
Behavior: 4/5

With no annotations provided, the description carries the full disclosure burden. It successfully explains the return structure ('stored content plus when it was saved and when it expires') and error conditions ('key doesn't exist or has expired'), providing crucial behavioral context beyond the operation name.

Conciseness: 5/5

Four sentences with zero waste: purpose/relationship, parameter instruction, return-value description, and error condition. Information is front-loaded and densely packed.

Completeness: 5/5

For a single-parameter retrieval tool without an output schema, the description is complete. It compensates for the missing output schema by detailing the return structure (content plus timestamps) and documents failure modes (expired/missing keys).

Parameters: 4/5

Despite 100% schema coverage for the 'key' parameter, the description adds valuable semantic context by specifying to 'Pass the same key you used when saving', reinforcing the relationship between this tool and its sibling save_memory.

Purpose: 5/5

The description uses a specific verb ('Retrieve') and resource ('text'), and immediately distinguishes this tool from its siblings by referencing save_memory, clarifying that it is the retrieval counterpart to the storage tool.

Usage Guidelines: 4/5

Establishes clear context by stating that this retrieves content 'you previously saved with save_memory', implying the prerequisite relationship. However, it lacks explicit 'when not to use' guidance and mentions no alternatives (though none appear to exist among the siblings).

run_regex (A)

Use this tool to extract, test, or transform text using a regular expression. Triggers: 'extract all emails from this', 'find all URLs in this text', 'does this match a pattern?', 'replace all instances of X with Y', 'parse this log with regex'. Modes: 'matches' (all full matches), 'groups' (capture groups from all matches), 'test' (true/false), 'replace' (substitute matches). Returns results with match positions.

Parameters (JSON Schema)
- mode (optional): Operation mode: 'matches' (default) returns all full matches, 'groups' returns named/numbered capture groups, 'test' returns true/false, 'replace' substitutes matches.
- text (required): The text to search.
- flags (optional): Regex flags: 'g' (global), 'i' (case-insensitive), 'm' (multiline), 's' (dotAll). Combine freely, e.g. 'gi'. Default: 'g'.
- pattern (required): The regex pattern (without delimiters, e.g. '\\d{3}-\\d{4}').
- replacement (optional): Replacement string when mode is 'replace'. Supports $1, $2 etc. for capture groups.
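The four modes can be approximated locally in Python as a sketch of the documented semantics. The server's actual engine (likely a JavaScript RegExp, given the $1 replacement syntax and dotAll flag naming) may differ in pattern dialect, and this sketch omits the match positions the real tool returns:

```python
import re

def run_regex(text, pattern, mode="matches", flags="g", replacement=""):
    """Local sketch of the four documented modes. Flag letters map to Python
    re flags; 'g' (global) is implicit in finditer/sub."""
    f = 0
    if "i" in flags:
        f |= re.IGNORECASE
    if "m" in flags:
        f |= re.MULTILINE
    if "s" in flags:
        f |= re.DOTALL
    rx = re.compile(pattern, f)
    if mode == "matches":
        return [m.group(0) for m in rx.finditer(text)]
    if mode == "groups":
        return [m.groups() for m in rx.finditer(text)]
    if mode == "test":
        return rx.search(text) is not None
    if mode == "replace":
        # Translate $1-style references from the description into Python's \1
        return rx.sub(re.sub(r"\$(\d+)", r"\\\1", replacement), text)
    raise ValueError(f"unknown mode: {mode}")

print(run_regex("a1 b2", r"([a-z])(\d)", mode="groups"))  # [('a', '1'), ('b', '2')]
```

Note the mode-dependent return type (list, boolean, or string), which is exactly why the Completeness commentary below asks for an explicit output structure.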
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully explains the four operational modes and mentions return values include 'match positions'. However, it omits safety-critical regex behaviors like catastrophic backtracking risks, invalid pattern handling, or whether 'replace' mutates input or returns new text.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent information density with zero waste. Front-loaded with concrete trigger examples, followed by mode explanations and return value specification. Every sentence earns its place; structure makes it scannable for an agent deciding between tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description appropriately mentions return characteristics ('results with match positions'). It thoroughly covers the four modes of operation. Could be improved by specifying the output structure format (array, object, etc.) or error handling behavior for invalid regex patterns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds contextual value via trigger examples that illustrate real-world usage patterns for the text and pattern parameters, but does not add syntax details beyond what the schema already provides for flags or replacement templates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verbs (extract, test, transform) with clear resource (text via regular expression). The trigger examples ('extract all emails', 'find all URLs') effectively distinguish this from siblings like chunk_text or extract_structured_data by positioning it as the pattern-matching specialist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Triggers' section provides excellent positive guidance with specific user intents that should invoke this tool. However, it lacks explicit negative guidance (when NOT to use regex) or named alternatives for simple string operations that don't require pattern matching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_memory (A)

Use this tool to persist important information across sessions so it's available in future conversations. Triggers: 'remember this', 'save this for later', 'keep track of this', 'store my preferences', 'note this down'. Also use proactively when the user shares project specs, personal preferences, ongoing tasks, or any context they're likely to reference again — even without being asked. Give it a short descriptive key (e.g. 'project-spec', 'user-prefs', 'todo-list'). Saving to the same key overwrites it. Expires in 30 days by default.

Parameters (JSON Schema)
key (required): A short, memorable name for this memory (e.g. 'project-spec', 'todo-list'). Saving to the same key again overwrites it.
content (required): The text to remember. Any format — prose, JSON, code, lists. Max 500KB.
expiresInDays (optional): How many days until this memory expires (default: 30, max: 90).
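Based on the schema above, a save_memory invocation might carry arguments like the following. This is a sketch only: the request envelope shown follows the generic MCP tools/call JSON-RPC convention, and the exact shape depends on the client library.

```python
import json

# Hypothetical arguments for save_memory, following the schema above.
arguments = {
    "key": "project-spec",  # short descriptive key; reusing it overwrites
    "content": json.dumps({"stack": "Next.js", "db": "Postgres"}),
    "expiresInDays": 60,    # optional; default 30, max 90
}

# An illustrative tools/call request in the usual MCP JSON-RPC shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "save_memory", "arguments": arguments},
}

print(json.dumps(request, indent=2))
```

Because saving to the same key overwrites, an agent that wants to append should first fetch the existing value with the retrieval tool and merge before saving.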
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden and succeeds in explaining key traits: global key namespace, upsert/overwrite behavior, and the 30-day expiration lifecycle. Minor gap: it omits the 500KB size limit mentioned in the schema, which is relevant behavioral context for a storage tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste. Front-loaded with core function, followed by metaphor/examples, ending with critical behavioral constraints (upsert, expiration). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter storage tool with no annotations and no output schema, the description comprehensively covers the persistence model, retrieval mechanism, key lifecycle (expiration/overwriting), and scope. Slight deduction for not mentioning failure modes or the 500KB constraint, but otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds valuable semantic context: concrete key examples ('project-spec', 'user-prefs'), the notepad analogy, and reinforces the expiration default. It translates technical parameters into user intent ('memorable name').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Save') and resource ('any text'), immediately clarifying the tool creates persistent storage. It distinguishes itself from processing siblings (like csv_to_json) by emphasizing recallability 'across sessions and conversations' and explicitly naming its retrieval sibling 'recall_memory'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly identifies when to use (when you need to 'recall it later' across sessions) and names the exact alternative tool for retrieval ('recall_memory'). The 'persistent notepad' metaphor and explanation of upsert behavior (overwrites) provide clear guidance on usage patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrape_url_js (A)

Use this tool when read_url returns empty, partial, or boilerplate content from a URL — it renders the page in a headless browser first, so JavaScript-heavy pages load correctly. Also use directly for SPAs (React, Next.js, Angular, Vue), product pages, news sites, or dashboards. Triggers: 'scrape this page', 'the page content isn't loading', 'get the content from this JS app'. Returns clean text or markdown.

Parameters (JSON Schema)
url (required): The full URL to scrape (must be http:// or https://).
format (optional): Output format: 'text' for plain text (default), 'markdown' to preserve headings and links.
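The description implies a fallback pattern: try the cheap HTTP fetch first, then escalate to the headless-browser tool when the result looks empty. A sketch of that pattern, where call_tool is a hypothetical stand-in for the client's MCP tool-call function and the 200-character threshold is an assumed heuristic:

```python
def looks_empty(content: str) -> bool:
    # Heuristic: treat very short results as empty or boilerplate.
    return len(content.strip()) < 200

def fetch_page(call_tool, url: str) -> str:
    # First attempt: plain HTTP fetch via read_url.
    content = call_tool("read_url", {"url": url})
    if looks_empty(content):
        # Likely a JS-heavy page: re-fetch with headless rendering.
        content = call_tool("scrape_url_js", {"url": url, "format": "markdown"})
    return content
```

Ordering the tools this way keeps the expensive headless-browser path reserved for pages that actually need it.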
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description discloses the headless browser implementation (critical for agent selection logic) and output formats ('clean text or markdown'). Minor gap: omits resource intensity, timeout behavior, or failure modes typical of headless browser operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero waste: sentence 1 defines capability, sentence 2 differentiates from sibling, sentence 3 lists specific use cases. Information is front-loaded with the key differentiator (JavaScript rendering) in the opening clause.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no output schema, the description adequately covers the output behavior ('clean text or markdown') and explains the rendering mechanism. Lacks only operational details (timeouts, retry logic) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for both parameters (url and format). The description mentions 'Returns clean text or markdown' which aligns with the format parameter options, but does not add semantic detail beyond what the schema already provides. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool 'Fetch[es] and extract[s] content from a web page that requires JavaScript rendering' with specific resource (SPAs/dynamic content) and mechanism (headless browser). It clearly distinguishes from sibling 'read_url' in the second sentence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit comparison to sibling ('Unlike read_url which uses simple HTTP fetch...') and concrete when-to-use examples: 'React/Next.js/Angular/Vue apps, product pages, news sites behind JS paywalls, or any page where read_url returns empty content.' This clearly signals when to prefer this tool over the alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transcribe_audio (A)

Use this tool whenever the user shares an audio file and wants it transcribed to text. Triggers: 'transcribe this recording', 'convert this audio to text', 'what was said in this meeting', 'transcribe this voice note', 'turn this podcast into text'. Accepts base64-encoded audio (mp3, wav, m4a, ogg, flac, webm, mp4, etc.), max 25MB. Returns the full transcript, word count, and character count. Powered by OpenAI Whisper.

Parameters (JSON Schema)
filename (required): Filename with extension (e.g., 'recording.mp3') — used for format detection.
audioBase64 (required): The audio file contents encoded as a base64 string.
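Preparing the audioBase64 parameter is a standard base64 encode plus a size check against the stated 25MB cap. A sketch of a hypothetical helper (build_transcribe_args is not part of the server, just an illustration of the schema above):

```python
import base64

MAX_BYTES = 25 * 1024 * 1024  # 25MB limit stated in the description

def build_transcribe_args(filename: str, audio: bytes) -> dict:
    # Encode raw audio bytes for the audioBase64 parameter; the server
    # detects the format from the filename's extension.
    if len(audio) > MAX_BYTES:
        raise ValueError(f"audio exceeds 25MB limit ({len(audio)} bytes)")
    return {
        "filename": filename,
        "audioBase64": base64.b64encode(audio).decode("ascii"),
    }

# Example with a tiny placeholder payload (not real audio data).
args = build_transcribe_args("recording.mp3", b"\x00" * 16)
```

Note that base64 inflates size by roughly a third, so a file near the 25MB raw limit produces a noticeably larger request body.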
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description fully compensates by disclosing: input encoding (base64), output structure (transcript plus word/character counts), constraints (25MB limit, 11 supported formats), and underlying model (gpt-4o-mini-transcribe).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense and well-structured: opens with core function, follows with input method, output details, technical constraints, and use cases. Every sentence provides essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully compensates for missing output schema by detailing return values (transcript, counts). Covers encoding requirements, file size limits, format support, and use cases. Complete for a 2-parameter audio processing tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing baseline 3. Description reinforces the base64 encoding requirement for the audio parameter and implies format detection via the filename, but does not add substantial semantic meaning beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (transcribe), resource (speech/audio), and technology (OpenAI Whisper). Clearly distinguishes from sibling document-processing tools (extract_pdf_text, ocr_image, etc.) by focusing on audio-to-text conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases ('meeting recordings, podcasts, voice notes, interviews') establishing when to use the tool. Lacks explicit 'when not to use' or alternative recommendations, though this is mitigated by the tool's unique function among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
