Toolora MCP Server
Server Details
12 free tools: PDF, OCR, QR codes, audio transcription, URL scraping, Excel, Word. No key needed.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across all 27 tools scored.
Most tools have distinct purposes, such as chunk_text for text splitting, count_words for text statistics, and extract_pdf_text for PDF extraction. However, some overlap exists between read_url and scrape_url_js (both for web content retrieval, though scrape_url_js handles JavaScript-heavy pages) and between csv_to_json and excel_to_json (both for tabular data conversion), which could cause minor confusion. Overall, descriptions help clarify differences, but a few tools have ambiguous boundaries.
Tool names follow a highly consistent snake_case verb_noun pattern throughout, such as chunk_text, count_words, create_shareable_paste, and extract_structured_data. All 27 tools adhere to this convention, with no deviations in style or structure, making them predictable and easy to parse for an agent.
With 27 tools, the count feels excessive for a general-purpose utility server, leading to potential cognitive overload and redundancy. While the tools cover diverse functions like text processing, file conversion, and web scraping, many could be consolidated (e.g., csv_to_json and excel_to_json) or omitted without losing core functionality. A more focused set of 10-15 tools would better suit the apparent scope.
The tool set provides comprehensive coverage for text and data processing, including extraction, conversion, analysis, and storage, with no major gaps in core workflows. However, minor gaps exist, such as lacking tools for direct editing of extracted data or advanced transformations beyond basic formats. The inclusion of memory management (save_memory, recall_memory) and niche tools like get_arc_trading_signal adds breadth but doesn't fully compensate for these omissions in the primary domain.
Available Tools
27 tools
chunk_text
Use this tool to split long text into smaller, overlapping chunks suitable for embedding, vector storage, or RAG pipelines. Triggers: 'chunk this document for RAG', 'split this into embeddings', 'break this into segments', 'prepare this text for a vector database'. Returns an array of chunks with index, text, character count, and estimated token count. Essential before embedding or storing text in a vector database.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to chunk. | |
| overlap | No | Number of characters to overlap between consecutive chunks (default: 100). Helps preserve context across chunk boundaries. | 100 |
| strategy | No | Chunking strategy: 'paragraph' (split on blank lines, default), 'sentence' (split on sentence boundaries), 'fixed' (fixed character count). | paragraph |
| chunkSize | No | Max characters per chunk for 'fixed' strategy, or target size hint for others (default: 1000). | 1000 |
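To make the parameters concrete, here is a minimal Python sketch of the 'fixed' strategy only; the return field names (index, charCount, estTokens) are assumptions inferred from the description, not the server's actual implementation:

```python
def chunk_text_fixed(text: str, chunk_size: int = 1000, overlap: int = 100) -> list:
    """Split text into fixed-size chunks with character overlap.

    Approximates the tool's 'fixed' strategy; the 'paragraph' and
    'sentence' strategies are not sketched here.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start, index = [], 0, 0
    while start < len(text):
        piece = text[start:start + chunk_size]
        chunks.append({
            "index": index,
            "text": piece,
            "charCount": len(piece),
            "estTokens": max(1, len(piece) // 4),  # rough ~4 chars/token heuristic
        })
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap
        index += 1
    return chunks
```

The overlap keeps each boundary's context present in two adjacent chunks, which is what makes the output usable in embedding and RAG pipelines.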
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behaviors: it returns an array with specific fields (index, text, character count, token count), mentions overlapping chunks, and describes the preprocessing purpose. It doesn't mention performance characteristics or error conditions, but covers the essential output format and context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero waste. The first sentence establishes core functionality, trigger phrases provide usage context, and the final sentence reinforces importance. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description does well by explaining the return format and preprocessing context. It could benefit from mentioning default behaviors or error cases, but covers the essential 'what it does' and 'when to use it' comprehensively given the structured data available.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description doesn't add parameter-specific information beyond what's in the schema, though it implies the tool handles 'long text' and mentions 'overlapping chunks' which aligns with the overlap parameter. No additional parameter semantics are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('split', 'chunk') and resource ('long text'), distinguishing it from siblings like count_words or estimate_tokens. It explicitly mentions use cases like embedding, vector storage, and RAG pipelines, which provides clear differentiation from other text-processing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool through trigger phrases ('chunk this document for RAG', 'split this into embeddings', etc.) and states it's 'essential before embedding or storing text in a vector database'. It clearly establishes the tool's role in preprocessing workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
count_words
Use this tool when the user wants statistics about a piece of text, or when you need to verify content length/readability before submitting. Triggers: 'how many words is this?', 'count the words', 'check the readability of this', 'is this too long?', 'what's the reading time?'. Returns word count, character count, sentence count, paragraph count, reading time, speaking time, Flesch readability score, and top keywords. Also use proactively when producing long-form content to report its length.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to analyse. | |
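A rough Python sketch of the statistics involved follows; the word-per-minute heuristics and the field names are assumptions, and the Flesch readability score the tool also returns is omitted:

```python
import re

def count_words(text: str) -> dict:
    """Naive text statistics; Flesch readability scoring is omitted."""
    words = re.findall(r"\b[\w'-]+\b", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    paragraphs = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
    return {
        "wordCount": len(words),
        "charCount": len(text),
        "sentenceCount": len(sentences),
        "paragraphCount": len(paragraphs),
        "readingTimeMin": len(words) / 200,   # ~200 wpm silent reading
        "speakingTimeMin": len(words) / 130,  # ~130 wpm speech
    }
```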
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes the tool's output (word count, character count, readability scores, etc.) and its non-destructive, analytical nature. However, it doesn't mention potential limitations like text length constraints or performance considerations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with primary use cases and triggers. However, the second sentence is slightly verbose with multiple examples, and the final sentence could be integrated more smoothly. Overall, it's efficient with minimal waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, no annotations, and no output schema, the description does a good job explaining what the tool returns and when to use it. It compensates for the lack of structured output documentation by listing return values. A minor gap is the absence of error handling or edge case information.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'text' well-documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, such as format expectations or examples. The baseline score of 3 reflects adequate but minimal value addition.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to provide statistics about text, including word count, character count, and readability metrics. It specifies the verb ('count', 'analyse') and resource ('text'), and distinguishes from siblings like 'chunk_text' or 'estimate_tokens' by focusing on comprehensive text analysis rather than transformation or token estimation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool, including user triggers (e.g., 'how many words is this?') and proactive use cases (e.g., 'when producing long-form content'). It implicitly distinguishes from siblings by not covering text manipulation or extraction, though it doesn't name specific alternatives.
csv_to_json
Use this tool when the user pastes or provides CSV data and needs it as structured JSON, or wants to query/filter/analyse tabular data. Triggers: 'parse this CSV', 'convert this spreadsheet export to JSON', 'read this data file'. Returns a JSON array of objects with column headers as keys. Use this before analysing or transforming any CSV content.
| Name | Required | Description | Default |
|---|---|---|---|
| csv | Yes | The CSV data as a string. | |
| headers | No | Whether the first row contains column names (default: true). | true |
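The conversion is close to what Python's standard csv module does; a minimal sketch, where the col1/col2 key naming for headerless input is a guess rather than the server's documented behaviour:

```python
import csv
import io

def csv_to_json(csv_text: str, headers: bool = True) -> list:
    """Parse CSV text into a list of dicts keyed by column headers."""
    buf = io.StringIO(csv_text)
    if headers:
        return [dict(row) for row in csv.DictReader(buf)]
    # Headerless fallback: synthesize col1, col2, ... keys (naming is a guess)
    return [{f"col{i + 1}": v for i, v in enumerate(row)} for row in csv.reader(buf)]
```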
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the core functionality (conversion to JSON array) and mentions the default behavior for headers (implied in the return format). However, it lacks details on error handling, performance characteristics, or limitations (e.g., large CSV files, special characters).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences: the first states the purpose and triggers, the second specifies the return format, and the third provides usage context. Every sentence adds value without redundancy, and it's front-loaded with key information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and return format adequately. However, it lacks details on error cases or performance limits, which would be helpful for robust agent handling.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters ('csv' and 'headers'). The description adds no additional parameter semantics beyond what's in the schema, such as CSV format specifics or header handling nuances. Baseline 3 is appropriate when the schema does all the work.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('parse', 'convert', 'read') and resources ('CSV data', 'structured JSON'), and distinguishes it from sibling tools like 'excel_to_json' and 'json_to_csv' by focusing exclusively on CSV-to-JSON conversion. It explicitly mentions the return format (JSON array of objects with column headers as keys).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool, including trigger phrases ('parse this CSV', 'convert this spreadsheet export to JSON', 'read this data file') and a specific use case ('before analysing or transforming any CSV content'). It clearly differentiates from siblings by focusing on CSV data rather than other formats like Excel, PDF, or HTML.
diff_texts
Use this tool to compare two pieces of text and identify exactly what changed between them. Triggers: 'what changed between these two versions?', 'compare these texts', 'show me the diff', 'what's different?', 'find the changes in this revision'. Returns added lines (with +), removed lines (with -), unchanged lines, and summary statistics. Use this when reviewing edits, comparing document versions, or verifying AI-generated changes.
| Name | Required | Description | Default |
|---|---|---|---|
| text1 | Yes | The original text (before). | |
| text2 | Yes | The new text (after). | |
| context | No | Number of unchanged lines to show around each change for context (default: 3). Set to 0 for changes only. | 3 |
| ignoreWhitespace | No | Ignore leading/trailing whitespace differences (default: false). | false |
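The standard library's difflib produces the same +/- line markers the tool describes; a minimal sketch of the comparison, with the output field names as assumptions:

```python
import difflib

def diff_texts(text1: str, text2: str, context: int = 3) -> dict:
    """Line diff with +/- markers, approximating the tool's output shape."""
    diff = list(difflib.unified_diff(
        text1.splitlines(), text2.splitlines(), lineterm="", n=context))
    added = [line[1:] for line in diff
             if line.startswith("+") and not line.startswith("+++")]
    removed = [line[1:] for line in diff
               if line.startswith("-") and not line.startswith("---")]
    return {
        "added": added,
        "removed": removed,
        "summary": {"addedLines": len(added), "removedLines": len(removed)},
    }
```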
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the output format ('Returns added lines (with +), removed lines (with -), unchanged lines, and summary statistics') and the tool's function, though it doesn't mention performance characteristics, rate limits, or error conditions that would be helpful for a diff operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage triggers, output details, and use cases. Every sentence adds value with no wasted words, making it efficient for an AI agent to parse and understand.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a diff tool with no annotations and no output schema, the description provides good context about what the tool does and when to use it. However, it could benefit from more detail about the output format structure or edge cases, though the absence of an output schema makes this less critical.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing good documentation for all parameters. The description doesn't add significant parameter semantics beyond what's in the schema, though it implies the tool handles line-by-line comparison. The baseline of 3 is appropriate when the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('compare', 'identify exactly what changed') and resources ('two pieces of text'), distinguishing it from siblings like chunk_text or count_words. It explicitly defines the scope as finding differences between text versions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with multiple trigger examples ('what changed between these two versions?', 'compare these texts') and specific use cases ('when reviewing edits, comparing document versions, or verifying AI-generated changes'). This clearly indicates when to use this tool versus alternatives.
estimate_tokens
Use this tool to estimate the token count of a text before sending it to an LLM. Triggers: 'how many tokens is this?', 'will this fit in context?', 'check if this is within the limit', 'token count for GPT-4'. Returns estimated token count, percentage of the model's context window used, and estimated API cost. Essential for context window management and cost planning.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to estimate tokens for. | |
| model | No | Target model (default: gpt-4o). Used to calculate context window usage and cost estimate. | gpt-4o |
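A common approximation is ~4 characters per token for English text; a hedged sketch follows, where the context window figure is an assumption (an exact tokenizer such as tiktoken would give precise counts, and cost estimation is omitted):

```python
CONTEXT_WINDOWS = {"gpt-4o": 128_000}  # assumed figure; verify against provider docs

def estimate_tokens(text: str, model: str = "gpt-4o") -> dict:
    """Heuristic token estimate (~4 chars/token); not an exact tokenizer."""
    est = max(1, len(text) // 4)
    window = CONTEXT_WINDOWS.get(model, 128_000)
    return {"estimatedTokens": est, "contextUsedPct": round(100 * est / window, 2)}
```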
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behaviors: it 'Returns estimated token count, percentage of the model's context window used, and estimated API cost'. However, it doesn't mention accuracy limitations, whether it's a precise count or estimation, or any rate limits/costs of using the tool itself.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly front-loaded with core purpose in first sentence, followed by usage triggers and return values. Every sentence adds value with zero waste. The structure flows logically from purpose to triggers to outputs to importance.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description does well by specifying what it returns (token count, percentage, cost estimate). However, it doesn't describe the format of the return values or provide examples, leaving some ambiguity about the output structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema - it mentions 'Target model' for context window and cost calculations, but doesn't provide additional semantic context about parameter interactions or edge cases.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific purpose: 'estimate the token count of a text before sending it to an LLM'. It distinguishes from siblings like count_words (word count) and chunk_text (text splitting) by focusing specifically on token estimation for LLM context management.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when-to-use guidance with trigger phrases: 'how many tokens is this?', 'will this fit in context?', 'check if this is within the limit', 'token count for GPT-4'. Also states purpose: 'Essential for context window management and cost planning', clearly differentiating it from other text analysis tools.
excel_to_json
Use this tool when the user shares an Excel or spreadsheet file and wants to read, analyse, query, or transform the data. Triggers: 'analyse this Excel file', 'read this spreadsheet', 'parse this .xlsx', 'what's in this workbook'. Accepts base64-encoded .xlsx, .xls, .ods, or .csv (filename required for format detection). Returns all sheets as JSON arrays of objects, with column headers as keys.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | Yes | Filename with extension (e.g., 'data.xlsx') — required for format detection. | |
| fileBase64 | Yes | The Excel or CSV file contents encoded as a base64 string. | |
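Real .xlsx/.xls/.ods parsing needs a spreadsheet library such as openpyxl, but the base64 handling and the CSV branch can be sketched with the standard library alone; the "Sheet1" key and the overall return shape are guesses:

```python
import base64
import csv
import io

def spreadsheet_to_json(filename: str, file_base64: str) -> dict:
    """CSV-only sketch; .xlsx/.xls/.ods would require a spreadsheet library."""
    raw = base64.b64decode(file_base64)
    if not filename.lower().endswith(".csv"):
        raise NotImplementedError("only the .csv branch is sketched here")
    rows = [dict(r) for r in csv.DictReader(io.StringIO(raw.decode("utf-8")))]
    return {"Sheet1": rows}  # single-sheet shape; the key name is a guess
```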
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it accepts base64-encoded files, requires filename for format detection, returns all sheets as JSON arrays with column headers as keys. It doesn't mention potential limitations like file size restrictions, processing time, or error conditions, but covers the core operational behavior well for a conversion tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first covers when to use it with specific triggers, the second explains technical requirements and output format. Every element serves a purpose with zero wasted words. It's appropriately sized for a tool with two parameters and clear functionality.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a file conversion tool with no annotations and no output schema, the description provides good coverage of what the tool does, when to use it, input requirements, and output format. It could be more complete by mentioning potential limitations or error cases, but given the straightforward nature of the tool and the clear schema documentation, it's mostly sufficient for an agent to understand and use the tool correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value beyond the schema - it mentions 'base64-encoded' and 'filename required for format detection' which are already in the schema descriptions. It doesn't provide additional semantic context about parameter usage or constraints beyond what's in the structured fields.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: converting Excel/spreadsheet files to JSON format. It specifies the exact action ('Returns all sheets as JSON arrays of objects') and distinguishes itself from siblings like csv_to_json by handling multiple file formats (.xlsx, .xls, .ods, .csv) and multiple sheets. The description goes beyond the name to explain the transformation process.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers with concrete examples ('analyse this Excel file', 'read this spreadsheet', etc.) and specifies when to use it: when users share Excel/spreadsheet files for reading, analysis, querying, or transformation. It implicitly distinguishes from siblings like csv_to_json by mentioning additional formats and multi-sheet handling, though it doesn't explicitly name alternatives.
extract_docx_text
Use this tool whenever the user shares a Word document (.docx) and wants to read, review, summarise, or analyse its content. Triggers: 'read this Word file', 'what does this doc say', 'summarise this document', 'extract text from this .docx'. Accepts base64-encoded .docx. Returns full text, paragraph count, word count, and character count. Works with Word, Google Docs exports, and LibreOffice files.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | No | Optional filename for format validation. | |
| fileBase64 | Yes | The .docx file contents encoded as a base64 string. | |
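A .docx file is a ZIP archive whose body text lives in word/document.xml; a minimal sketch of the extraction follows (a production implementation would use a real XML parser or a library such as python-docx, and this one ignores paragraph boundaries):

```python
import base64
import io
import re
import zipfile

def extract_docx_text(file_base64: str) -> dict:
    """Pull text runs out of <w:t> elements in word/document.xml."""
    raw = base64.b64decode(file_base64)
    with zipfile.ZipFile(io.BytesIO(raw)) as zf:
        xml = zf.read("word/document.xml").decode("utf-8")
    text = "".join(re.findall(r"<w:t[^>]*>(.*?)</w:t>", xml, flags=re.S))
    return {"text": text, "wordCount": len(text.split()), "charCount": len(text)}
```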
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (extracts text and provides counts), what it accepts (base64-encoded .docx), what it returns (full text, paragraph/word/character counts), and compatibility (Word, Google Docs exports, LibreOffice files). It doesn't mention error cases, performance, or rate limits, but covers core functionality well.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the primary use case. Every sentence adds value: the first establishes purpose and triggers, the second covers input/output details, and the third adds compatibility information. It could be slightly more concise by combining some elements, but there's minimal waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description does well to cover what the tool does, when to use it, input requirements, and return values. It provides enough context for an agent to understand the tool's role among siblings and how to invoke it. The main gap is the lack of explicit error handling or edge case information.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds context by mentioning 'base64-encoded .docx' which aligns with the schema's 'fileBase64' parameter, but doesn't provide additional syntax or format details beyond what the schema provides. This meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('extract text', 'read', 'review', 'summarise', 'analyse') and resources (Word documents, .docx files). It distinguishes from siblings like 'extract_pdf_text' by specifying it handles .docx files specifically, not PDFs, and from 'chunk_text' or 'count_words' by being the entry point for document processing rather than text manipulation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('whenever the user shares a Word document... and wants to read, review, summarise, or analyse its content') and provides concrete trigger phrases. It also distinguishes from alternatives by specifying it works with .docx files, unlike 'extract_pdf_text' for PDFs or 'ocr_image' for images.
extract_pdf_text
Use this tool whenever the user shares, uploads, or references a PDF file and wants to read, summarise, search, or analyse its contents. Extracts all plain text from the PDF (base64-encoded). Returns text, page count, word count, and character count. Call this first before attempting any analysis of PDF content — e.g. 'summarise this PDF', 'what does this contract say', 'extract the data from this report'.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | No | Optional original filename, used for validation only. | |
| fileBase64 | Yes | The PDF file contents encoded as a base64 string. | |
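To supply the `fileBase64` parameter, a client base64-encodes the raw file bytes before making the call. A minimal sketch in Python, where the helper name `encode_pdf_for_upload` is hypothetical and not part of the server:

```python
import base64


def encode_pdf_for_upload(pdf_bytes, filename=None):
    # Build the arguments object for extract_pdf_text: the tool expects
    # the raw file bytes as a base64 string in `fileBase64`.
    args = {"fileBase64": base64.b64encode(pdf_bytes).decode("ascii")}
    if filename is not None:
        args["filename"] = filename  # optional; used for validation only
    return args
```

Decoding `fileBase64` on the receiving side recovers the original bytes unchanged.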
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes the operation (extraction), the expected input (a base64-encoded PDF), and the return values (text, page count, word count, character count). However, it doesn't mention potential limitations like handling of scanned PDFs, encrypted files, or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences: the first establishes purpose and usage context, the second provides implementation guidance. Every sentence adds value without redundancy, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description does well by explaining the operation, when to use it, and what it returns. However, it could benefit from mentioning error conditions or limitations given the complexity of PDF processing. The guidance about calling it first for analysis is particularly valuable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('extracts all plain text from the PDF') and resource ('PDF file'), distinguishing it from siblings like extract_docx_text, ocr_image, and merge_pdfs. It also notes the expected input encoding ('base64-encoded') and what the tool returns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use it ('whenever the user shares, uploads, or references a PDF file and wants to read, summarise, search, or analyse its contents') and when to call it ('Call this first before attempting any analysis of PDF content'), with examples like 'summarise this PDF'. It distinguishes from analysis tools by positioning this as a prerequisite step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract_structured_data
Use this tool to extract structured JSON data from any unstructured text — emails, reports, web pages, PDFs, meeting notes, etc. Triggers: 'extract the data from this', 'pull the fields out of this text', 'parse this into structured format', 'get me a JSON from this', 'extract names/dates/amounts from this'. Describe the structure you want in plain English (e.g. 'extract: company name, CEO, founding year, revenue'). Returns valid JSON matching your description.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The unstructured text to extract data from. | |
| format | No | Output format: 'json' for a single object or array (default), 'jsonl' for one JSON object per line. | |
| schema | Yes | Plain-English description of what to extract, e.g. 'Extract: person name, email, phone number, company'. Or provide a JSON Schema as a string. | |
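The call shape implied by the table can be sketched as follows; `build_extraction_request` is a hypothetical client-side helper that assembles the arguments and enforces the documented `format` values:

```python
def build_extraction_request(text, schema, fmt="json"):
    # Assemble arguments for extract_structured_data. `schema` is a
    # plain-English description (or a JSON Schema string); `format` is
    # 'json' (default, single object or array) or 'jsonl' (one object
    # per line), per the parameter table.
    if fmt not in ("json", "jsonl"):
        raise ValueError("format must be 'json' or 'jsonl'")
    return {"text": text, "schema": schema, "format": fmt}
```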
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the tool 'Returns valid JSON matching your description', which adds some context about output behavior, but lacks details on error handling, performance limits, or authentication needs, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage examples and return information, all in a compact format with no wasted sentences, making it efficient and easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (extracting structured data from unstructured text) and no output schema, the description adequately covers purpose, usage, and return format. However, it lacks details on limitations or edge cases, which could enhance completeness for such a versatile tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents parameters well. The description adds minimal value by briefly mentioning 'Describe the structure you want in plain English', which aligns with the schema's 'schema' parameter but doesn't provide additional semantic details beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('extract') and resource ('structured JSON data from any unstructured text'), and distinguishes it from siblings by focusing on data extraction rather than text processing, conversion, or other operations listed in the sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit triggers (e.g., 'extract the data from this', 'pull the fields out of this text') and examples of when to use it, such as extracting from emails, reports, etc., and describes the structure needed in plain English, offering clear guidance on application scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_pdf_from_text
Use this tool when the user wants to save, export, or share your output as a PDF document. Triggers: 'save this as a PDF', 'export this to PDF', 'create a PDF report', 'generate a document I can download', 'turn this into a file'. Supports # headings, ## subheadings, - bullet lists, and plain paragraphs. Returns a base64-encoded PDF. Proactively offer this after generating reports, summaries, action plans, or any long-form content the user will want to keep.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Optional document title shown at the top of the PDF. | |
| author | No | Optional author name added to PDF metadata. | |
| content | Yes | The text content to render into a PDF. Supports # headings, ## subheadings, - bullet points, and plain paragraphs. | |
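Since `content` accepts only the listed Markdown subset, a client should compose it from those elements alone. A small hypothetical helper illustrating that constraint:

```python
def build_pdf_content(title_line, bullets):
    # Compose content using only the formatting generate_pdf_from_text
    # documents: '#' headings, '##' subheadings, '-' bullet lists, and
    # plain paragraphs. Anything else may not render as intended.
    lines = [f"# {title_line}", ""]
    lines += [f"- {item}" for item in bullets]
    return "\n".join(lines)
```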
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes key behaviors: the tool returns base64-encoded PDFs (output format), supports specific markdown-like formatting, and is intended for saving/sharing content. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial behavioral context for a conversion tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It front-loads the primary use case, provides specific triggers, details formatting support, specifies the return format, and gives proactive usage advice. While slightly longer than minimal, every sentence adds value and there's no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides substantial context: clear purpose, usage guidelines, formatting capabilities, and output format (base64-encoded PDF). It doesn't describe error conditions or advanced PDF features, but covers the essential information needed to use this conversion tool effectively. The 100% schema coverage helps compensate for what's not in the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description mentions that the content parameter 'supports # headings, ## subheadings, - bullet points, and plain paragraphs'; this adds some semantic context about formatting capabilities, but doesn't significantly enhance understanding beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to generate PDF documents from text content. It specifies the verb 'generate' and resource 'PDF', but doesn't explicitly differentiate from sibling tools like 'merge_pdfs' or 'extract_pdf_text' which also handle PDFs. The description mentions supporting specific formatting (# headings, ## subheadings, - bullet lists), which adds specificity beyond just 'create PDF'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent usage guidance with explicit triggers ('save this as a PDF', 'export this to PDF', etc.) and proactive usage recommendations ('offer this after generating reports, summaries, action plans'). It clearly indicates when to use this tool versus alternatives by specifying it's for converting text to PDF, not for merging existing PDFs or extracting text from PDFs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_qr_code
Use this tool whenever the user asks for a QR code or wants a URL/text to be scannable. Triggers: 'make a QR code for this link', 'create a scannable code', 'generate a QR for my website'. Accepts any text or URL (max 2953 chars). Returns a base64-encoded PNG image. Display the image inline after generating it.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Image width/height in pixels (64–2048, default 400). | |
| text | Yes | The text or URL to encode in the QR code. Max 2953 characters. | |
| errorCorrectionLevel | No | Error correction level: L (7%), M (15%, default), Q (25%), H (30%). | |
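A client can pre-validate arguments against the documented limits before calling the tool. The helper below is a hypothetical sketch, not part of the server:

```python
def validate_qr_args(text, size=400, level="M"):
    # Enforce the documented constraints for generate_qr_code:
    # text up to 2953 chars, size 64-2048 px, level one of L/M/Q/H.
    if len(text) > 2953:
        raise ValueError("text exceeds the 2953-character QR limit")
    if not 64 <= size <= 2048:
        raise ValueError("size must be between 64 and 2048 pixels")
    if level not in ("L", "M", "Q", "H"):
        raise ValueError("errorCorrectionLevel must be L, M, Q, or H")
    return {"text": text, "size": size, "errorCorrectionLevel": level}
```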
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it accepts text/URL input with a character limit, returns a base64-encoded PNG image, and instructs to display the image inline. However, it doesn't mention potential limitations like rate limits, authentication needs, or error handling, which would be valuable for a complete behavioral picture.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and well-structured in three sentences: usage triggers, input specifications, and output handling. Every sentence earns its place by providing essential information without redundancy. It's front-loaded with the primary use case and maintains a logical flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description provides good contextual completeness. It covers the core functionality, usage triggers, input constraints, and output format. The main gap is the lack of output schema, but the description compensates by specifying the return format (base64-encoded PNG) and display instructions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some value by mentioning 'Accepts any text or URL (max 2953 chars)' which reinforces the 'text' parameter's purpose, but doesn't provide additional semantic context beyond what's already documented in the schema for 'size' and 'errorCorrectionLevel' parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('generate', 'create', 'make') and resource ('QR code'), and distinguishes it from siblings by focusing on QR code generation rather than text processing, data conversion, or other utilities. It explicitly names the function and provides concrete examples of user requests that should trigger its use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines by stating 'Use this tool whenever the user asks for a QR code or wants a URL/text to be scannable' and listing specific trigger phrases. It clearly defines when to use this tool versus alternatives by focusing on QR code generation, which is distinct from all sibling tools that handle text manipulation, data conversion, or other tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_arc_trading_signal
Fetch a live Solana DEX divergence trading signal from Soliris Arc — the agent-to-agent data market built on Arc (Circle's L1 blockchain). Each signal costs $0.001 USDC paid automatically on-chain via the x402 protocol. Signals identify real-time arbitrage spreads across Raydium, Orca, Jupiter, and Meteora. This is the agentic economy in action: your AI pays another AI for data, settled in under 1 second, no humans in the loop. Use demo=true to get a sample signal without payment. For live signals the API returns a 402 with payment details. Powered by Soliris (soliris.pro).
| Name | Required | Description | Default |
|---|---|---|---|
| demo | No | If true, returns a sample signal without requiring payment. Use this to explore the signal format before integrating payments. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: automatic on-chain payment via x402 protocol ($0.001 USDC per signal), settlement in under 1 second, and the 402 response for live signals. However, it doesn't mention error handling, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It front-loads the core purpose, then explains payment mechanics and usage guidance. While slightly verbose with promotional language ('agentic economy in action'), every sentence contributes operational understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (on-chain payments, financial data) and absence of both annotations and output schema, the description provides good context about what the tool does and how to use it. However, it lacks details about the signal format, error conditions, and what specific data fields are returned, which would be important for integration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context about the 'demo' parameter beyond the schema's description. It explains that demo=true provides a sample signal without payment for exploration purposes, while implying that omitting demo (or demo=false) triggers live signal retrieval with payment. With 100% schema coverage and only one parameter, this provides good supplemental guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fetching live Solana DEX divergence trading signals from Soliris Arc. It specifies the data source, payment mechanism, and target DEXs (Raydium, Orca, Jupiter, Meteora), making it highly specific and distinct from all sibling tools which are general-purpose utilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: use demo=true for sample signals without payment, and explains that live signals return a 402 with payment details. It clearly distinguishes between demo and live modes, offering practical when-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hash_text
Use this tool to generate a cryptographic hash of any text or data string. Triggers: 'hash this string', 'get the SHA256 of this', 'create a checksum', 'fingerprint this content', 'verify the integrity'. Supports MD5, SHA-1, SHA-256, SHA-512. Returns hex-encoded hash and the algorithm used. Use SHA-256 or SHA-512 for security-sensitive applications.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text or data to hash. | |
| encoding | No | Output encoding: 'hex' (default, lowercase hexadecimal) or 'base64'. | |
| algorithm | No | Hash algorithm (default: sha256). Use sha256 or sha512 for security. MD5/SHA-1 are fast but cryptographically weak. | |
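The documented behavior maps directly onto Python's standard library. The sketch below mirrors the stated defaults (sha256, lowercase hex) and return shape; it is an illustration of the semantics, not the server's actual code:

```python
import base64
import hashlib


def hash_text(text, algorithm="sha256", encoding="hex"):
    # Mirror hash_text's documented defaults: sha256, lowercase hex.
    h = hashlib.new(algorithm, text.encode("utf-8"))
    if encoding == "hex":
        digest = h.hexdigest()
    else:  # 'base64'
        digest = base64.b64encode(h.digest()).decode("ascii")
    return {"hash": digest, "algorithm": algorithm}
```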
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it generates cryptographic hashes, supports multiple algorithms (MD5, SHA-1, SHA-256, SHA-512), returns hex-encoded hash and algorithm used, and includes security guidance. It doesn't mention performance, rate limits, or error handling, but covers core functionality well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by triggers, supported algorithms, return values, and security guidance. Every sentence adds value with zero waste, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides good completeness: it explains what the tool does, when to use it, algorithm options, return format, and security advice. It could be more complete by detailing error cases or exact output structure, but it's largely sufficient given the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema, only implying algorithm choices and security recommendations. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'generate a cryptographic hash of any text or data string.' It specifies the action (generate cryptographic hash) and resource (text/data string), distinguishing it from sibling tools like chunk_text or count_words that process text differently.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context with triggers like 'hash this string' and 'get the SHA256 of this,' and recommends SHA-256 or SHA-512 for security-sensitive applications. However, it doesn't explicitly state when NOT to use this tool versus alternatives (e.g., when integrity verification requires specific hash comparisons).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
html_to_markdown
Use this tool to convert raw HTML into clean, readable Markdown. Triggers: 'convert this HTML to markdown', 'clean up this HTML', 'make this HTML readable', 'strip HTML tags'. Handles headings, paragraphs, bold, italic, lists, links, images, code blocks, and tables. Returns clean Markdown and character count. Useful after web scraping or when processing HTML content for an LLM.
| Name | Required | Description | Default |
|---|---|---|---|
| html | Yes | The HTML string to convert. | |
| includeLinks | No | Whether to preserve hyperlinks as [text](url) in the output (default: true). | |
| includeImages | No | Whether to include image alt text as ![alt](url) (default: false — images often clutter output). | |
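As an illustration of the expected transformation (not the server's implementation), here is a minimal regex-based sketch handling a few of the listed elements, including the includeLinks toggle:

```python
import re


def html_to_md(html, include_links=True):
    # Toy converter covering headings, bold/italic, links, and paragraph
    # tags; a real converter handles nesting, lists, tables, and more.
    md = re.sub(r"<h1>(.*?)</h1>", r"# \1", html)
    md = re.sub(r"<h2>(.*?)</h2>", r"## \1", md)
    md = re.sub(r"<(strong|b)>(.*?)</\1>", r"**\2**", md)
    md = re.sub(r"<(em|i)>(.*?)</\1>", r"*\2*", md)
    if include_links:
        md = re.sub(r'<a href="([^"]+)">(.*?)</a>', r"[\2](\1)", md)
    else:
        md = re.sub(r'<a href="[^"]+">(.*?)</a>', r"\1", md)
    md = re.sub(r"</?p>", "", md)  # drop paragraph tags, keep the text
    return md.strip()
```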
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (converts HTML to Markdown), what elements it handles (headings, paragraphs, lists, etc.), and what it returns (clean Markdown and character count). It also implies the tool is non-destructive (conversion rather than modification) and doesn't require authentication. However, it doesn't mention potential limitations like malformed HTML handling or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted sentences. It opens with the core purpose, provides usage triggers, lists handled elements, specifies the return value, and ends with practical applications. Every sentence adds value and the information is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a conversion tool with no annotations and no output schema, the description provides good coverage of what the tool does, when to use it, and what to expect. It mentions the return format (Markdown and character count) which partially compensates for the missing output schema. However, without annotations or output schema, it could benefit from more detail about error conditions or transformation limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema; it mentions that the tool handles links and images generally but doesn't connect this to the includeLinks/includeImages parameters. The baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('convert', 'clean up', 'strip') and resource ('raw HTML into clean, readable Markdown'). It distinguishes from siblings like markdown_to_html by specifying the conversion direction and from text extraction tools by focusing on HTML-to-Markdown transformation rather than content extraction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool ('after web scraping or when processing HTML content for an LLM') and includes trigger phrases that indicate appropriate scenarios. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for different conversion needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
json_to_csv
Use this tool when the user has JSON data (an array of objects) and wants it as a spreadsheet, CSV export, or downloadable table. Triggers: 'export this to CSV', 'convert this JSON to a spreadsheet', 'I need this as a table'. Infers column headers from object keys. Returns a properly escaped CSV string.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Array of objects to convert to CSV rows. | |
| delimiter | No | Column delimiter character (default: ','). | |
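The described behavior, header inference from object keys plus proper escaping, can be reproduced with Python's csv module. This is an illustrative sketch (assuming a non-empty array of flat objects), not the server's code:

```python
import csv
import io


def json_to_csv(data, delimiter=","):
    # Infer column headers from the first object's keys, as the tool
    # description states, and let csv handle quoting/escaping.
    headers = list(data[0].keys())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=headers, delimiter=delimiter)
    writer.writeheader()
    writer.writerows(data)
    return buf.getvalue()
```

Fields containing the delimiter are quoted automatically, which is what "properly escaped" implies.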
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: inferring column headers from object keys, returning a properly escaped CSV string, and handling array-of-objects input. However, it doesn't mention error handling for malformed JSON, empty arrays, or non-uniform object structures.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: use case, triggers, and behavioral details. Every sentence adds value with zero waste. It's appropriately sized and front-loaded with the primary purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a data transformation tool with no annotations and no output schema, the description does well by explaining the conversion process, input requirements, and output format. However, it could be more complete by mentioning potential limitations (e.g., nested objects, array uniformity) or the exact return format beyond 'CSV string'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (data array and delimiter). The description adds some context by mentioning 'array of objects' and implying CSV formatting, but doesn't provide additional parameter semantics beyond what the schema offers. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: converting JSON data (an array of objects) to CSV format. It specifies the exact transformation (JSON to CSV/spreadsheet/table) and distinguishes it from sibling tools like csv_to_json and excel_to_json by focusing on the opposite conversion direction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers with concrete examples ('export this to CSV', 'convert this JSON to a spreadsheet', 'I need this as a table'). It clearly indicates when to use this tool versus alternatives by specifying the input format (JSON array of objects) and desired output (CSV/spreadsheet/table).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
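The behaviors the reviews above credit to the tool (headers inferred from object keys, properly escaped CSV output) can be sketched in Python. This `json_to_csv` is a hypothetical local stand-in, not the server's implementation; it assumes uniform objects and picks one plausible answer (empty string) for the undocumented empty-array case.

```python
import csv
import io
import json

def json_to_csv(data_json: str, delimiter: str = ",") -> str:
    """Convert a JSON array of objects to a CSV string.

    Headers are inferred from the keys of the first object, mirroring the
    behavior the tool description discloses; quoting and escaping are
    delegated to the csv module.
    """
    rows = json.loads(data_json)
    if not rows:
        return ""  # empty-array behavior is undocumented; this is a guess
    headers = list(rows[0].keys())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=headers, delimiter=delimiter)
    writer.writeheader()
    writer.writerows(rows)  # raises ValueError on rows with extra keys
    return buf.getvalue()
```

Note that `csv.DictWriter` quotes values containing the delimiter, which is exactly the escaping guarantee the description makes.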
list_memories (A)
Use this tool to discover what has been saved in memory — e.g. at the start of a session, or when the user asks 'what have you saved?' or 'show me my memories'. Returns all saved memory keys with their preview, save date, and expiry. Optionally filter by a prefix (e.g. 'project-' to list only project memories). Pair with recall_memory to fetch the full content of any key.
| Name | Required | Description | Default |
|---|---|---|---|
| prefix | No | Optional prefix to filter memory keys (e.g. 'project-'). If omitted, lists all memories. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool returns ('all saved memory keys with their preview, save date, and expiry'), mentions the optional filtering capability, and explains the relationship with other memory tools. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise: every sentence earns its place. The first sentence establishes the primary use case with examples, the second describes the return format, the third explains the optional filtering, and the fourth provides integration guidance. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with one optional parameter and no output schema, the description provides excellent context. It explains what the tool returns, when to use it, how to filter results, and how it integrates with other memory tools. The only minor gap is the lack of output format details, but given the tool's simplicity, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage and only one optional parameter, the description adds meaningful context beyond the schema. It explains the purpose of the prefix parameter ('filter by a prefix'), provides a concrete example ('project- to list only project memories'), and clarifies the default behavior ('If omitted, lists all memories'). This enhances understanding beyond the basic schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('discover what has been saved', 'list all saved memory keys') and distinguishes it from sibling tools like 'recall_memory' and 'save_memory'. It explicitly identifies the resource as 'memories' and provides concrete examples of when to use it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('at the start of a session', 'when the user asks what have you saved', 'show me my memories') and explicitly names the alternative tool to pair with ('recall_memory to fetch the full content'). It also distinguishes this listing function from the content retrieval function.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markdown_to_html (A)
Use this tool when the user wants their content as an HTML file, a web page, or something they can publish/embed. Triggers: 'convert this to HTML', 'make this into a web page', 'export as HTML', 'I want an HTML version of this'. Converts markdown to a full, styled HTML document (headings, lists, code blocks, links). Returns the complete HTML string. Proactively offer this when you've written markdown content that the user may want to publish.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Optional page title used in the <title> tag. | |
| markdown | Yes | The markdown content to convert. | |
| includeStyles | No | Include basic CSS styling for readability (default: true). | true |
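A very small sketch of the conversion shape, handling only headings and paragraphs. The real tool also converts lists, code blocks, and links; this just illustrates the "full, styled HTML document" output and the roles of `title` and `includeStyles`:

```python
import html
import re

def markdown_to_html(markdown: str, title: str = "",
                     include_styles: bool = True) -> str:
    """Minimal converter: ATX headings and paragraphs only."""
    body_parts = []
    for block in markdown.strip().split("\n\n"):
        m = re.match(r"(#{1,6})\s+(.*)", block)
        if m:
            level = len(m.group(1))
            body_parts.append(f"<h{level}>{html.escape(m.group(2))}</h{level}>")
        else:
            body_parts.append(f"<p>{html.escape(block)}</p>")
    # The actual stylesheet the server injects is unknown; this is illustrative.
    style = "<style>body{font-family:sans-serif;max-width:42em}</style>" if include_styles else ""
    return (f"<!DOCTYPE html><html><head><title>{html.escape(title)}</title>"
            f"{style}</head><body>{''.join(body_parts)}</body></html>")
```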
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: conversion process, output format ('complete HTML string'), and styling options ('full, styled HTML document'). It could improve by mentioning error handling or performance characteristics, but it covers core functionality well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose, triggers, conversion details, and proactive guidance. It's slightly verbose but every sentence adds value. It could be more concise by combining some trigger examples, but overall it's efficient and front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a conversion tool with no annotations and no output schema, the description provides good context: purpose, usage scenarios, behavioral details, and output format. It doesn't explain return value structure in depth, but 'complete HTML string' gives adequate direction. Given the tool's moderate complexity, it's mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add specific parameter semantics beyond what's in the schema, but it implies the 'markdown' parameter's role through context. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Converts markdown to a full, styled HTML document' with specific elements like headings, lists, code blocks, and links. It distinguishes from siblings like 'html_to_markdown' by specifying the conversion direction and output format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers ('convert this to HTML', 'make this into a web page', etc.) and proactive guidance ('Proactively offer this when you've written markdown content that the user may want to publish'). It clearly indicates when to use this tool based on user intent and content type.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
merge_pdfs (A)
Use this tool when the user provides two or more PDF files and wants them combined into one. Triggers: 'merge these PDFs', 'combine these documents', 'join these files into one PDF'. Accepts 2–20 base64-encoded PDFs in order. Returns the merged PDF as a base64 string.
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | Array of PDF files to merge, in order. Each item must have a 'base64' field. | |
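Building the `files` argument from local paths, with the documented 2-20 window enforced client-side, might look like this (reading local files is an assumption about how the inputs are gathered):

```python
import base64
from pathlib import Path

def build_merge_pdfs_payload(paths: list[str]) -> dict:
    """Build the merge_pdfs arguments, preserving file order."""
    if not 2 <= len(paths) <= 20:
        raise ValueError("merge_pdfs accepts between 2 and 20 PDFs")
    return {
        "files": [
            {"base64": base64.b64encode(Path(p).read_bytes()).decode("ascii")}
            for p in paths
        ]
    }
```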
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by specifying input constraints (2-20 base64-encoded PDFs in order) and output format (merged PDF as base64 string). It doesn't mention error conditions, performance characteristics, or side effects, but covers core behavior adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences: use case, triggers, and technical details. Every sentence adds value without redundancy. It's front-loaded with the primary purpose and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a PDF merging tool with no annotations and no output schema, the description provides good coverage of what the tool does, when to use it, input requirements, and output format. It could mention potential limitations (file size, page count) or error scenarios, but is largely complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'files' parameter thoroughly. The description adds minimal value beyond the schema by mentioning 'base64-encoded PDFs' and 'in order', but doesn't provide additional semantic context. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('merge', 'combine', 'join') and resource ('PDF files') with explicit scope ('two or more'). It distinguishes from siblings by focusing on PDF merging, unlike text extraction or format conversion tools in the list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers ('merge these PDFs', 'combine these documents', 'join these files into one PDF') and specifies when to use ('when the user provides two or more PDF files and wants them combined into one'). It clearly defines the use case without ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ocr_image (A)
Use this tool when the user shares an image that contains text they need extracted, read, or processed. Triggers: 'read the text in this image', 'extract text from this screenshot', 'what does this scanned page say', 'transcribe this handwritten note'. Accepts base64-encoded PNG/JPEG/WEBP/BMP/TIFF. Returns extracted text, confidence score, and word count. Prefer this over vision model text extraction for accuracy on scanned docs.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | No | Optional filename with extension (e.g., 'scan.png') to help with format detection. | |
| imageBase64 | Yes | Image file contents as a base64 string. Supported: PNG, JPEG, WEBP, BMP, TIFF. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it accepts base64-encoded images in multiple formats (PNG/JPEG/WEBP/BMP/TIFF) and returns extracted text, confidence score, and word count. However, it lacks details on potential limitations like error handling or processing time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with three sentences that each add value: the first states the purpose and triggers, the second covers input formats, and the third details outputs and when to prefer it. There is no wasted text, and information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (image processing with text extraction), no annotations, and no output schema, the description does a good job covering inputs, outputs, and usage context. It could be more complete by detailing error cases or performance aspects, but it provides sufficient information for basic agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal parameter semantics beyond the schema, mentioning base64 encoding and supported formats, which are partially covered in the schema. This meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: extracting text from images containing text that needs to be read or processed. It specifies the exact function (extract text from images) and distinguishes it from vision model text extraction by noting its accuracy advantage for scanned documents, making it highly specific and differentiated from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: it lists specific trigger phrases (e.g., 'read the text in this image'), specifies when to use it (for accuracy on scanned docs), and when to prefer it over alternatives (vision model text extraction). This gives clear context and exclusions for effective tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
private_execute_tool (A)
Execute any Toolora privacy-sensitive tool with a MagicBlock Private Ephemeral Rollup payment proof. Use this when an agent or user needs to run a tool privately — no identity exposure, no input logging, payments untraceable on-chain. Each call costs 0.01 USDC paid via MagicBlock PER. PAYMENT FLOW: (1) POST https://payments.magicblock.app/v1/spl/transfer with {from, to: '3rXKwQ1kpjBd5tdcco32qsvqUh1BnZjcYnS5kYrP7AYE', amount: 10000, cluster, mint} → get unsigned tx → sign with wallet → submit → get txSignature. Then call this tool with that signature. AVAILABLE TOOLS: word-counter (word/char stats), text-case (UPPER/lower/camel/snake), json-formatter (format+validate JSON), base64 (encode/decode), jwt-decoder (decode JWT claims), html-to-markdown, text-chunker (RAG prep), csv-to-json, url-encoder, regex-tester, hash-generator.
| Name | Required | Description | Default |
|---|---|---|---|
| tool | Yes | The private tool to run. | |
| input | Yes | The text input to process privately. NOT logged server-side. | |
| payer | Yes | Solana public key (base58) of the wallet that signed the payment. | |
| cluster | No | Solana cluster (default: devnet). | |
| txSignature | Yes | MagicBlock PER transaction signature proving a 0.01 USDC private payment to the Toolora vault. Use 'demo_test' for testing without a real payment. | |
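The `amount: 10000` in the documented payment flow is 0.01 USDC expressed in base units (USDC uses 6 decimals, so 0.01 × 10^6 = 10000). A sketch of building the transfer body and the subsequent tool call; `USDC_MINT_ADDRESS` is a placeholder, since the real mint address depends on the cluster and is not given in the description:

```python
USDC_DECIMALS = 6
VAULT = "3rXKwQ1kpjBd5tdcco32qsvqUh1BnZjcYnS5kYrP7AYE"

def transfer_body(from_pubkey: str, usdc: float, cluster: str = "devnet",
                  mint: str = "USDC_MINT_ADDRESS") -> dict:
    """Body for POST https://payments.magicblock.app/v1/spl/transfer."""
    return {
        "from": from_pubkey,
        "to": VAULT,
        "amount": int(round(usdc * 10 ** USDC_DECIMALS)),  # 0.01 USDC -> 10000
        "cluster": cluster,
        "mint": mint,  # placeholder: cluster-specific USDC mint goes here
    }

def private_tool_call(tool: str, input_text: str, payer: str,
                      tx_signature: str) -> dict:
    """Arguments for private_execute_tool once the payment tx is signed
    and submitted ('demo_test' works as tx_signature for testing)."""
    return {"tool": tool, "input": input_text, "payer": payer,
            "txSignature": tx_signature}
```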
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well. It discloses critical behavioral traits: privacy guarantees ('no identity exposure, no input logging'), payment requirements ('Each call costs 0.01 USDC'), and testing options ('Use "demo_test" for testing'). It doesn't mention rate limits or error handling, but covers the essential privacy and payment aspects thoroughly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but becomes verbose with detailed payment instructions and tool listings. The payment flow details (steps 1-4) could be more concise, and the tool listing duplicates what's in the schema enum. Some sentences don't earn their place in a tool description meant for AI agents.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (privacy system, payment integration, multiple sub-tools) and no annotations/output schema, the description does well. It explains the privacy context, payment mechanism, available tools, and testing option. However, it doesn't describe return values or error cases, which would be helpful since there's no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some value by listing available tools and explaining the payment flow context, but doesn't provide additional parameter semantics beyond what's in the schema descriptions. The 'input' parameter gets extra context ('NOT logged server-side'), but other parameters don't get meaningful elaboration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Execute any Toolora privacy-sensitive tool with a MagicBlock Private Ephemeral Rollup payment proof.' It specifies the verb ('execute'), resource ('Toolora privacy-sensitive tool'), and key distinguishing feature (privacy/payment mechanism). It differentiates from siblings by emphasizing privacy features not present in other tools like 'count_words' or 'hash_text'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this when an agent or user needs to run a tool privately — no identity exposure, no input logging, payments untraceable on-chain.' It clearly states when to use this tool (for privacy-sensitive operations) versus when to use sibling tools (for non-private operations). It also details the payment prerequisite flow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_url (A)
Use this tool whenever a URL appears in the conversation and the user wants to read, summarise, quote from, or process the page content. Triggers: 'read this article', 'summarise this page', 'what does this link say', 'fetch this URL'. Uses Readability to return clean text, title, author, and excerpt. If the result is empty or incomplete, fall back to scrape_url_js for JS-rendered pages.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full URL to fetch (must be http:// or https://). | |
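The documented fallback ("if the result is empty or incomplete, fall back to scrape_url_js") is agent-side logic, which can be sketched with the two tool invocations stubbed as callables; the `{"text": ...}` result shape is an assumption:

```python
def fetch_page_text(url: str, read_url, scrape_url_js) -> str:
    """Try read_url first; fall back to scrape_url_js when the
    Readability result comes back empty. `read_url` and `scrape_url_js`
    are callables standing in for the actual tool calls."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("url must be http:// or https://")
    result = read_url(url)
    text = (result or {}).get("text", "")
    if not text.strip():  # empty extraction -> likely a JS-rendered page
        return scrape_url_js(url).get("text", "")
    return text
```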
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well: it discloses the tool uses Readability for content extraction, returns specific fields (clean text, title, author, excerpt), and has fallback behavior to scrape_url_js. It doesn't mention rate limits, authentication needs, or error handling, but covers core behavior adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: first states purpose and triggers, second explains processing method and output, third provides fallback guidance. Every sentence adds value with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description provides excellent context about behavior, usage scenarios, and fallback strategy. It could mention response format details or error cases, but covers the essential complexity well given the simple input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with only one parameter (url) fully documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it fetches and processes web page content using Readability to extract clean text, title, author, and excerpt. It specifies the exact resource (URL content) and distinguishes from sibling scrape_url_js by noting it's the primary tool with a fallback option.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use this tool: when a URL appears and the user wants to read, summarize, quote, or process page content. It lists specific trigger phrases and provides a clear alternative (scrape_url_js) for JS-rendered pages when results are empty/incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall_memory (A)
Use this tool at the start of a relevant conversation to check for saved context, or when the user asks you to retrieve something stored earlier. Triggers: 'recall my project notes', 'what did we save last time?', 'look up my preferences', 'fetch the notes you stored'. Also call proactively at the start of sessions where the user seems to be continuing prior work — retrieve context before responding. Pass the same key used with save_memory. Returns stored content, save date, and expiry date.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | The key you used when calling save_memory. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it retrieves stored content, save date, and expiry date, and mentions proactive usage. However, it lacks details on error handling, permissions, or rate limits, which would be helpful for a retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, starting with usage scenarios and ending with return details. It uses clear, concise sentences without unnecessary fluff. However, it could be slightly more streamlined by combining some of the trigger examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple retrieval with one parameter), no annotations, and no output schema, the description is fairly complete. It covers purpose, usage, parameters, and return values. However, it could benefit from mentioning error cases or limitations, such as what happens if the key doesn't exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds value by explaining that the key should be 'the same key used with save_memory,' providing context beyond the schema's generic description. This clarifies the parameter's relationship to another tool, enhancing understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'check for saved context' and 'retrieve something stored earlier.' It specifies the verb ('retrieve') and resource ('saved context' or 'stored content'), and distinguishes it from sibling tools like 'save_memory' by focusing on retrieval rather than storage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'at the start of a relevant conversation,' 'when the user asks you to retrieve something stored earlier,' and 'proactively at the start of sessions where the user seems to be continuing prior work.' It includes specific trigger phrases and references the sibling tool 'save_memory' for context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
run_regex (A)
Use this tool to extract, test, or transform text using a regular expression. Triggers: 'extract all emails from this', 'find all URLs in this text', 'does this match a pattern?', 'replace all instances of X with Y', 'parse this log with regex'. Modes: 'matches' (all full matches), 'groups' (capture groups from all matches), 'test' (true/false), 'replace' (substitute matches). Returns results with match positions.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Operation mode: 'matches' (default) returns all full matches, 'groups' returns named/numbered capture groups, 'test' returns true/false, 'replace' substitutes matches. | |
| text | Yes | The text to search. | |
| flags | No | Regex flags: 'g' (global), 'i' (case-insensitive), 'm' (multiline), 's' (dotAll). Combine freely, e.g. 'gi'. Default: 'g'. | |
| pattern | Yes | The regex pattern (without delimiters, e.g. '\\d{3}-\\d{4}'). | |
| replacement | No | Replacement string when mode is 'replace'. Supports $1, $2 etc. for capture groups. | |
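The four documented modes map cleanly onto Python's `re` module, sketched below. The server's actual engine is likely a JavaScript RegExp (the schema's `$1` replacement syntax and `dotAll` flag suggest so), so syntax details differ: Python's `re.sub` uses `\1` where the schema documents `$1`, and the `g` flag is implicit in `finditer`/`sub`.

```python
import re

def run_regex(pattern: str, text: str, mode: str = "matches",
              flags: str = "g", replacement: str = "") -> object:
    """Sketch of the four documented run_regex modes."""
    flag_bits = 0
    if "i" in flags:
        flag_bits |= re.IGNORECASE
    if "m" in flags:
        flag_bits |= re.MULTILINE
    if "s" in flags:
        flag_bits |= re.DOTALL
    rx = re.compile(pattern, flag_bits)  # 'g' is implicit below
    if mode == "matches":
        return [m.group(0) for m in rx.finditer(text)]
    if mode == "groups":
        return [m.groups() for m in rx.finditer(text)]
    if mode == "test":
        return rx.search(text) is not None
    if mode == "replace":
        return rx.sub(replacement, text)  # Python uses \1, not $1
    raise ValueError(f"unknown mode: {mode}")
```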
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the four operation modes and mentions that results include 'match positions,' which adds useful context beyond basic functionality. However, it doesn't address potential limitations like performance with large texts, error handling for invalid patterns, or whether operations are read-only/destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections (purpose, triggers, modes, return information) in just three sentences. Every sentence adds value, though the triggers list could be slightly more concise. It's appropriately sized for a tool with multiple operation modes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a regex tool with 5 parameters, 100% schema coverage, and no output schema, the description provides good context. It explains the tool's purpose, usage scenarios, operation modes, and return characteristics. The main gap is lack of output format details, but given the schema's thoroughness and the description's mode explanations, it's reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all five parameters thoroughly. The description doesn't add significant parameter semantics beyond what's in the schema: it mentions modes but doesn't provide additional context about parameter interactions or usage nuances. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('extract, test, or transform text using a regular expression') and distinguishes it from sibling tools by focusing on regex operations. It provides concrete examples of triggers that illustrate its unique functionality compared to text processing siblings like chunk_text or count_words.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance through 'Triggers' examples that indicate when to use this tool (e.g., 'extract all emails from this', 'find all URLs in this text'). It also lists four distinct operation modes, helping users understand different use cases and alternatives within the same tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_memory
Use this tool to persist important information across sessions so it's available in future conversations. Triggers: 'remember this', 'save this for later', 'keep track of this', 'store my preferences', 'note this down'. Also use proactively when the user shares project specs, personal preferences, ongoing tasks, or any context they're likely to reference again — even without being asked. Give it a short descriptive key (e.g. 'project-spec', 'user-prefs', 'todo-list'). Saving to the same key overwrites it. Expires in 30 days by default.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | A short, memorable name for this memory (e.g. 'project-spec', 'todo-list'). Saving to the same key again overwrites it. | |
| content | Yes | The text to remember. Any format — prose, JSON, code, lists. Max 500KB. | |
| expiresInDays | No | How many days until this memory expires (max: 90). | 30 |
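As a sketch of how the parameters above fit together, the following builds a `tools/call` payload for `save_memory`. The JSON-RPC envelope is the standard MCP shape; transport over Streamable HTTP and the surrounding client plumbing are omitted:

```python
import json

# Example content: any format works per the schema (prose, JSON,
# code, lists), capped at 500KB.
memory = {"stack": "Next.js", "db": "Postgres"}
content = json.dumps(memory)
assert len(content.encode()) <= 500_000, "content is capped at 500KB"

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "save_memory",
        "arguments": {
            "key": "project-spec",   # re-saving this key overwrites it
            "content": content,
            "expiresInDays": 90,     # omit to get the 30-day default
        },
    },
}
print(request["params"]["arguments"]["key"])  # project-spec
```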
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains that saving to the same key overwrites previous content, mentions the 30-day default expiration with a 90-day max, and implies persistence across sessions. It doesn't cover error cases or auth needs, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Every sentence adds value: the first states the main function, the second provides usage triggers, the third gives proactive examples, and the fourth covers key behavioral details. Some redundancy exists between description and schema for the key parameter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well to cover the tool's behavior, usage context, and key constraints. It explains the mutation nature (overwrites), persistence scope, and expiration policy. It could mention error cases or response format, but provides sufficient context for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema, only briefly mentioning the key parameter's purpose and the expiration default. It doesn't provide additional semantic context about parameter interactions or usage patterns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('persist important information across sessions', 'save', 'overwrites') and distinguishes it from siblings like 'list_memories' and 'recall_memory' by focusing on storage rather than retrieval or listing. It explicitly mentions what resource it operates on (memories with keys).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool, including trigger phrases ('remember this', 'save this for later') and proactive scenarios ('when the user shares project specs, personal preferences'). It distinguishes from alternatives by not overlapping with retrieval/list tools, and implicitly suggests when not to use it (for temporary data).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scrape_url_js
Use this tool when read_url returns empty, partial, or boilerplate content from a URL — it renders the page in a headless browser first, so JavaScript-heavy pages load correctly. Also use directly for SPAs (React, Next.js, Angular, Vue), product pages, news sites, or dashboards. Triggers: 'scrape this page', 'the page content isn't loading', 'get the content from this JS app'. Returns clean text or markdown.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full URL to scrape (must be http:// or https://). | |
| format | No | Output format: 'text' for plain text, 'markdown' to preserve headings and links. | 'text' |
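The read_url-then-escalate pattern the description recommends can be sketched as a small client-side helper. `call_tool` here is a hypothetical stand-in for whatever MCP client invocation your agent framework provides, and the emptiness heuristic is an assumption, not part of the tool:

```python
def looks_empty(text: str, min_chars: int = 200) -> bool:
    """Heuristic: very short responses usually mean extraction failed."""
    return len(text.strip()) < min_chars

def fetch_page(call_tool, url: str) -> str:
    # Try the cheaper read_url first.
    result = call_tool("read_url", {"url": url})
    if looks_empty(result):
        # Likely a JavaScript-heavy page: render in a headless browser.
        result = call_tool("scrape_url_js", {"url": url, "format": "markdown"})
    return result

# Stubbed client for demonstration: read_url comes back empty, so the
# helper escalates to scrape_url_js.
responses = {
    "read_url": "",
    "scrape_url_js": "# Example Title\n\nRendered article text.",
}
page = fetch_page(lambda name, args: responses[name], "https://example.com")
print(page.splitlines()[0])  # # Example Title
```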
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it renders pages in a headless browser for JavaScript execution, handles specific page types, and returns clean text or markdown. However, it lacks details on rate limits, authentication needs, or error handling, which would be useful for a scraping tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the primary use case, followed by specific applications and trigger phrases, all in three efficient sentences. Each sentence adds value without redundancy, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does a good job covering the tool's purpose, usage, and behavior. It explains what the tool does, when to use it, and the output format. However, it could improve by detailing potential limitations (e.g., performance, timeouts) or error scenarios, which are relevant for a scraping tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('url' and 'format'). The description adds minimal value beyond the schema by mentioning output formats ('clean text or markdown'), but doesn't provide additional syntax or usage context. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to scrape URLs by rendering pages in a headless browser for JavaScript-heavy content. It specifies the verb ('scrape'), resource ('URL'), and distinguishes it from sibling 'read_url' by addressing when that tool fails. This is specific and differentiates from alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: when 'read_url' returns empty/partial/boilerplate content, for SPAs (React, Next.js, Angular, Vue), product pages, news sites, or dashboards. It includes trigger phrases like 'scrape this page' and 'the page content isn't loading', and names the alternative ('read_url'). This covers when-to-use, exclusions, and alternatives clearly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transcribe_audio
Use this tool whenever the user shares an audio file and wants it transcribed to text. Triggers: 'transcribe this recording', 'convert this audio to text', 'what was said in this meeting', 'transcribe this voice note', 'turn this podcast into text'. Accepts base64-encoded audio (mp3, wav, m4a, ogg, flac, webm, mp4, etc.), max 25MB. Returns the full transcript, word count, and character count. Powered by OpenAI Whisper.
| Name | Required | Description | Default |
|---|---|---|---|
| filename | Yes | Filename with extension (e.g., 'recording.mp3') — used for format detection. | |
| audioBase64 | Yes | The audio file contents encoded as a base64 string. | |
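Preparing input for this tool is mostly base64 plumbing. A minimal sketch, assuming raw audio bytes are already in hand; the 25MB cap and filename-based format detection come from the description above, and the byte string here is placeholder data, not real MP3 content:

```python
import base64

MAX_BYTES = 25 * 1024 * 1024  # 25MB limit per the description

def build_arguments(filename: str, audio: bytes) -> dict:
    """Build the transcribe_audio arguments object from raw bytes."""
    if len(audio) > MAX_BYTES:
        raise ValueError("audio exceeds the 25MB limit")
    return {
        "filename": filename,  # extension drives format detection
        "audioBase64": base64.b64encode(audio).decode("ascii"),
    }

args = build_arguments("recording.mp3", b"\xff\xfb\x90\x00fake-mp3-bytes")
print(sorted(args))  # ['audioBase64', 'filename']
```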
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it accepts base64-encoded audio in multiple formats (mp3, wav, etc.), has a max file size (25MB), returns transcript with word and character counts, and is powered by OpenAI Whisper. It doesn't mention rate limits, authentication needs, or error handling, but covers most operational aspects well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the primary use case. It efficiently covers purpose, triggers, input requirements, and output in three sentences. The trigger examples are helpful but slightly verbose; however, each sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (audio processing with 2 parameters, no output schema, no annotations), the description is quite complete. It explains what the tool does, when to use it, input requirements (formats, encoding, size), and output details (transcript, counts, technology). It lacks error handling or performance details, but covers the essential context for an AI agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (audioBase64 and filename). The description adds context about audio formats and base64 encoding, but doesn't provide additional semantic meaning beyond what's in the schema descriptions. This meets the baseline of 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: transcribing audio files to text. It specifies the action ('transcribe', 'convert'), the resource ('audio file'), and distinguishes it from sibling tools like ocr_image (for images) or extract_pdf_text (for documents). The description explicitly mentions it's for audio transcription, making its purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Use this tool whenever the user shares an audio file and wants it transcribed to text.' It lists specific trigger phrases (e.g., 'transcribe this recording', 'convert this audio to text'), clearly indicating when to use it. No alternatives are mentioned among siblings, but the triggers cover common user intents comprehensively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.