Glama

Server Details

MCP server providing Pine Script v6 documentation.

Enables AI to:

- Look up Pine Script functions and validate syntax
- Access official documentation for indicators, strategies, and visuals
- Understand Pine Script concepts (execution model, repainting, etc.)
- Generate correct v6 code with proper function references
Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 10 of 10 tools scored. Lowest: 3.5/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_doc retrieves whole files, get_section retrieves specific sections, list_docs and list_sections enumerate available content, search_docs and resolve_topic provide different search methods, get_functions lists functions, validate_function checks a single function, and get_prompt/list_prompts handle MCP prompts. No two tools are ambiguous.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., list_docs, get_section, resolve_topic). The verbs (get, list, search, validate, resolve) are descriptive and uniformly applied. No mixed conventions or irregular names.

Tool Count: 5/5

With 10 tools, the set is well-scoped for a documentation server. Each tool covers a specific access pattern (list, get, search, validate) without redundancy or excessive granularity. The count is within the ideal range for efficient agent navigation.

Completeness: 5/5

The tool surface fully covers the domain of reading Pine Script documentation: listing all docs and sections, retrieving whole files or specific sections, searching by full-text or exact term, validating function names, and listing/generating MCP prompts. No obvious gaps exist for the stated purpose.

Available Tools

10 tools
get_doc: A
Read-only, Idempotent

Read a specific Pine Script v6 documentation file.

For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) prefer list_sections() + get_section() to avoid loading 1000-2800 line files into context.

Parameters (JSON Schema)

- path (required): Relative path to the documentation file (e.g., "reference/functions/ta.md")
- limit (optional): Maximum characters to return. Use 30000 for large files to avoid token limits.
- offset (optional): Character offset to start reading from (default: 0)
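The limit/offset pair supports windowed reads of large files. A minimal sketch, assuming the standard MCP tools/call payload shape; get_doc_request is a hypothetical helper and the transport wiring is omitted:

```python
def get_doc_request(path, limit=30000, offset=0):
    # Hypothetical helper: builds an MCP tools/call payload for get_doc.
    # Parameter names match the table above; the 30000 default follows
    # the description's suggestion for large files.
    return {
        "method": "tools/call",
        "params": {
            "name": "get_doc",
            "arguments": {"path": path, "limit": limit, "offset": offset},
        },
    }

# Page through a roughly 70,000-character file in 30,000-character windows:
pages = [get_doc_request("reference/functions/ta.md", offset=o)
         for o in range(0, 70000, 30000)]
```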
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint and idempotentHint. The description adds context about potential large file sizes and suggests limits, which is useful beyond annotations, though it doesn't detail error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states core purpose, second provides critical usage guidance. No wasted words, front-loaded, and efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description could explain return format, but the action 'Read' implies file content. All parameters are documented, and annotations cover safety. Minor gap on error handling, but sufficient for a simple read operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by suggesting a specific limit value (30000) for large files, enhancing parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reads a specific Pine Script v6 documentation file. It distinguishes from siblings like list_docs and get_section by specifying the action and resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises against using this tool for large files and recommends alternative tools (list_sections + get_section), providing clear when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_functions: A
Read-only, Idempotent

Get valid Pine Script v6 functions, optionally filtered by namespace.

Use before writing Pine Script to see which functions exist. For checking a single function name, use validate_function() instead.

Parameters (JSON Schema)

- namespace (optional): Filter by namespace (e.g., "ta", "strategy", "request"). Empty string returns all functions grouped by namespace.
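The empty-string case ("all functions grouped by namespace") can be pictured with a short sketch. The grouping rule is an assumption about the result shape, and the function names are illustrative:

```python
from collections import defaultdict

def group_by_namespace(names):
    # Group dotted names such as "ta.sma" by their prefix; names without
    # a dot (e.g., "plot") go under the "" (top-level) key.
    grouped = defaultdict(list)
    for name in names:
        ns = name.split(".", 1)[0] if "." in name else ""
        grouped[ns].append(name)
    return dict(grouped)

sample = ["ta.sma", "ta.rsi", "strategy.entry", "plot"]
grouped = group_by_namespace(sample)
```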
Behavior: 3/5

Annotations already provide readOnlyHint=true and idempotentHint=true, indicating safe, idempotent operation. The description adds no further behavioral details beyond 'valid functions', so it does not significantly enhance transparency beyond the annotations.

Conciseness: 5/5

Two concise sentences: first states purpose and optional filter, second gives usage context and sibling alternative. No unnecessary words, front-loaded with key information.

Completeness: 5/5

For a simple list tool with one optional parameter and strong annotations, the description covers purpose, usage timing, and alternative. No output schema needed; return values are self-evident. Complete for its complexity.

Parameters: 3/5

Schema description coverage is 100%, and the description merely restates the schema's info about namespace filtering and empty string behavior. No new parameter semantics are added beyond what the schema already provides.

Purpose: 5/5

The description clearly states the tool gets 'valid Pine Script v6 functions' with optional namespace filtering. It distinguishes from sibling 'validate_function' by specifying that for a single function name, one should use the alternative.

Usage Guidelines: 5/5

Explicitly says 'Use before writing Pine Script to see which functions exist' and gives an alternative for single function name validation, providing clear when-to-use and when-not-to-use guidance.

get_prompt: A

Get a prompt by name with optional arguments.

Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.

Parameters (JSON Schema)

- name (required): The name of the prompt to get
- arguments (optional): Optional arguments for the prompt

Output Schema

- result (required)
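The "rendered prompt as JSON with a messages array" return format can be sketched as follows. The payload shape is a plausible illustration, not the server's documented schema:

```python
import json

# Illustrative rendered-prompt payload: a JSON object with a messages array.
raw = json.dumps({
    "messages": [
        {"role": "user",
         "content": {"type": "text", "text": "Explain repainting in Pine Script v6."}}
    ]
})

rendered = json.loads(raw)
texts = [m["content"]["text"] for m in rendered["messages"]]
```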
Behavior: 4/5

Since no annotations are provided, the burden is on the description. It explains the return format ('rendered prompt as JSON with a messages array') and how to provide arguments ('dict mapping argument names to values'). No mention of errors or permissions, but adequate for a read operation.

Conciseness: 5/5

The description is three sentences, front-loaded with the main purpose. Every sentence adds value without repetition.

Completeness: 4/5

Given the presence of an output schema, the description does not need to detail return values. It covers purpose, parameters, and behavior sufficiently. Could briefly mention that it's for rendering prompts, but not required.

Parameters: 4/5

Input schema has 100% coverage, so baseline is 3. The description adds value by explaining arguments as a 'dict mapping argument names to values,' which is more specific than the schema's 'Optional arguments for the prompt'.

Purpose: 5/5

The description clearly states 'Get a prompt by name with optional arguments,' which is a specific verb+resource combination. It distinguishes from sibling tools like list_prompts (which lists all) and get_functions (different resource).

Usage Guidelines: 4/5

The description implies usage: use this tool to retrieve a specific prompt by name, while list_prompts lists all. However, it does not explicitly state when not to use it or mention alternatives.

get_section: A
Read-only, Idempotent

Get a specific section from a documentation file by its header.

Use after list_sections() shows available headers, or after resolve_topic() / search_docs() identifies the relevant file.

Parameters (JSON Schema)

- path (required): Documentation file path (e.g., "reference/functions/strategy.md")
- header (required): Header text to find (e.g., "strategy.exit()" or "## strategy.exit()")
- include_children (optional): Include nested subsections under the header (default: True)
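The header matching described above (a header given with or without its leading "#" marks, plus the include_children switch) can be approximated in a few lines. This is a hedged re-implementation for illustration, not the server's actual code:

```python
import re

def get_section(markdown, header, include_children=True):
    # Match the header with or without its leading "#" marks, then collect
    # lines until a header of the same or shallower depth. When
    # include_children is False, stop at any nested header as well.
    target = header.lstrip("#").strip()
    out, level = [], None
    for line in markdown.splitlines():
        m = re.match(r"(#+)\s+(.*)", line)
        if level is None:
            if m and m.group(2).strip() == target:
                level = len(m.group(1))
                out.append(line)
            continue
        if m and (len(m.group(1)) <= level or not include_children):
            break
        out.append(line)
    return "\n".join(out)

doc = "# Strategies\nintro\n## strategy.exit()\nbody\n### Details\nmore\n## next\nx"
```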
Behavior: 3/5

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description doesn't need to repeat safety. It adds usage pattern context but no additional behavioral traits beyond annotations.

Conciseness: 5/5

Two sentences with front-loaded purpose and immediate usage guidance. No redundant words; every sentence serves a clear function.

Completeness: 4/5

For a simple read-only tool with well-documented schema, the description covers purpose and usage. However, it does not mention what the function returns (e.g., section content), leaving a minor gap in completeness.

Parameters: 3/5

Input schema has 100% coverage with clear descriptions for each parameter. The description does not add semantic detail beyond what the schema provides, so baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the verb ('Get'), resource ('section'), and method ('by its header'). It also distinguishes from siblings like get_doc (whole document) and list_sections (list only).

Usage Guidelines: 4/5

Provides explicit context on when to use this tool: after list_sections() or after resolve_topic()/search_docs(). It suggests alternatives but does not explicitly state when not to use it.

list_docs: A
Read-only, Idempotent

List all available Pine Script v6 documentation files with descriptions.

Returns files organised by category with descriptions. For small files use get_doc(path). For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) use list_sections(path) then get_section(path, header).

Parameters (JSON Schema)

No parameters
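The small-file vs large-file routing recommended above amounts to a simple decision rule. A sketch under the description's own file list; plan_read is a hypothetical helper:

```python
# Large-file names taken from the description; the helper itself is hypothetical.
LARGE_FILES = {"ta.md", "strategy.md", "collections.md", "drawing.md", "general.md"}

def plan_read(path):
    # Return the tool-call sequence the description recommends for a file.
    name = path.rsplit("/", 1)[-1]
    if name in LARGE_FILES:
        return ["list_sections", "get_section"]
    return ["get_doc"]
```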

Behavior: 4/5

Annotations already declare readOnlyHint=true and idempotentHint=true, covering safety. The description adds that files are returned 'organised by category with descriptions,' which provides additional behavioral context beyond annotations. No contradictions.

Conciseness: 5/5

The description is three sentences, each serving a purpose: primary action, return format, and usage guidelines. It is front-loaded and concise with no wasted words.

Completeness: 5/5

Given no parameters, no output schema, and annotations indicating safe operation, the description is complete. It explains the return format (organised by category) and provides guidance on when to use sibling tools for content retrieval.

Parameters: 4/5

With 0 parameters and schema coverage 100%, the baseline is 4. The description does not need to add parameter info since there are none.

Purpose: 5/5

The description clearly states 'List all available Pine Script v6 documentation files with descriptions,' specifying a specific verb (list) and resource (documentation files). It distinguishes itself from siblings like get_doc and list_sections by providing guidance on when to use them.

Usage Guidelines: 5/5

The description explicitly provides when to use alternatives: 'For small files use get_doc(path). For large files use list_sections(path) then get_section(path, header).' This gives clear context for usage vs siblings.

list_prompts: A

List all available prompts.

Returns JSON with prompt metadata including name, description, and optional arguments.

Parameters (JSON Schema)

No parameters

Output Schema

- result (required)
Behavior: 2/5

No annotations provided; description only says it returns JSON with metadata. Doesn't disclose any behavioral traits like rate limits, pagination, or safety, so it falls short.

Conciseness: 4/5

Description is two sentences, efficient and front-loaded. Could be slightly more concise but overall good.

Completeness: 4/5

With an output schema present (though not shown), the description explains the return value. For a parameterless list tool, it is adequately complete.

Parameters: 4/5

No parameters in the schema, so baseline score is 4. Description does not need to add parameter info since there are none.

Purpose: 5/5

The description clearly states it lists all available prompts, which is specific and distinguishes it from sibling tools like get_prompt (single prompt) and list_docs (lists documents).

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like get_prompt or search_docs. Agent lacks context for decision-making.

list_sections: A
Read-only, Idempotent

List all section headers in a doc file. Use before get_section() to find the right header.

Especially useful for large files like ta.md, strategy.md, collections.md, drawing.md, general.md which have 50-115 sections each.

Parameters (JSON Schema)

- path (required): Documentation file path (e.g., "reference/functions/ta.md")
Behavior: 3/5

Annotations already provide readOnlyHint and idempotentHint, so the description doesn't need to repeat them. It adds context about large files but no further behavioral details. Adequate given annotations.

Conciseness: 5/5

Two sentences, front-loaded with the purpose, followed by usage guidance and examples. No unnecessary words.

Completeness: 4/5

For a simple list tool with one parameter and no output schema, the description covers purpose, usage, and relevant file examples. It does not describe the return format, but the tool name implies it returns section headers, so sufficiently complete.

Parameters: 3/5

Schema description coverage is 100% for the single 'path' parameter. The description does not add additional meaning beyond what the schema already provides, so baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool lists section headers in a doc file, with a specific verb and resource. It also distinguishes itself from the sibling tool get_section by indicating its use before retrieving sections.

Usage Guidelines: 4/5

Explicitly states to use before get_section() to find the right header, and provides context about large files where it is especially useful. No explicit when-not-to-use, but the guidance is clear enough.

resolve_topic: A
Read-only, Idempotent

Fast lookup for exact Pine Script API terms and known concepts.

Use for exact function names and Pine Script vocabulary (e.g., "ta.rsi", "strategy.entry", "repainting", "request.security").

For natural language questions, read the docs://manifest resource for routing guidance, then use get_doc() or list_sections() + get_section().

Parameters (JSON Schema)

- query (required): Exact Pine Script term or known concept keyword.

Output Schema

- query (required)
- matches (required)
- suggestion (required)
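Given the query/matches/suggestion output schema, a result plausibly looks like the following. Only the three field names come from the schema; the values and nested shapes are illustrative assumptions:

```python
# Illustrative resolve_topic result for the query "ta.rsi".
result = {
    "query": "ta.rsi",
    "matches": [
        {"path": "reference/functions/ta.md", "header": "ta.rsi()"},  # assumed match shape
    ],
    "suggestion": 'get_section("reference/functions/ta.md", "ta.rsi()")',  # assumed hint format
}
```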
Behavior: 4/5

Annotations already declare readOnlyHint=true and idempotentHint=true, signaling safe and repeatable operations. The description adds that it's a 'fast lookup', which is consistent with read-only behavior. No contradictions. It provides sufficient transparency beyond annotations.

Conciseness: 5/5

The description is compact: two sentences in the first paragraph stating purpose, followed by examples and routing guidance in the second. Every sentence adds value; no filler. Front-loads the core function.

Completeness: 5/5

Given the tool's simplicity (one required parameter) and the presence of an output schema, the description adequately covers context. It specifies the scope (exact terms), provides examples, and directs to complementary tools for broader queries. No gaps for effective use.

Parameters: 3/5

Input schema coverage is 100% with a single parameter described as 'Exact Pine Script term or known concept keyword.' The description reinforces this usage. No additional semantics added beyond the schema, meeting the baseline expectation for full coverage.

Purpose: 5/5

The description clearly states the tool is for 'Fast lookup for exact Pine Script API terms and known concepts' with specific examples (e.g., 'ta.rsi', 'repainting'). It distinguishes itself from other tools by specifying it's for exact terms, not natural language queries.

Usage Guidelines: 5/5

The description explicitly says 'Use for exact function names and Pine Script vocabulary' and provides detailed guidance on when not to use it: 'For natural language questions, read the docs... then use get_doc() or list_sections() + get_section().' This clearly outlines alternatives.

search_docs: A
Read-only, Idempotent

Search Pine Script v6 documentation and return matching sections.

Finds sections containing the query and returns previews with get_section() call hints so you can read the full content.

Parameters (JSON Schema)

- query (required): Exact string to search for (case-insensitive).
- max_results (optional): Maximum sections to return (default: 5)
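The "exact string, case-insensitive" semantics with a max_results cap can be modeled as a substring scan. A hedged sketch over an in-memory section map, not the server's implementation:

```python
def search_docs(sections, query, max_results=5):
    # Case-insensitive exact-substring match over section bodies,
    # returning at most max_results section headers.
    q = query.lower()
    hits = [header for header, body in sections.items() if q in body.lower()]
    return hits[:max_results]

# Illustrative section map (header -> body text):
sections = {
    "ta.rsi()": "Relative Strength Index oscillator.",
    "ta.sma()": "Simple Moving Average.",
}
```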
Behavior: 4/5

Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description adds that it returns previews with get_section call hints, which is useful behavioral context. No contradictions between description and annotations.

Conciseness: 5/5

The description is two sentences long, front-loaded with the main purpose, and contains no unnecessary words. Every sentence adds value.

Completeness: 4/5

For a simple search tool with two parameters, high schema coverage, and good annotations, the description is complete. It explains the tool's output and hints at the next step (get_section). Missing information about sorting or result details is minor.

Parameters: 3/5

Schema coverage is 100%, so the schema already defines the parameters well. The description does not explicitly describe parameters but mentions the query in context. Baseline 3 is appropriate as the description adds little beyond the schema for parameters.

Purpose: 5/5

The description clearly identifies the tool as searching Pine Script v6 documentation and returning matching sections. It specifies the verb 'search' and the resource 'documentation sections', and the mention of 'get_section() call hints' distinguishes it from sibling tools like get_section and list_sections.

Usage Guidelines: 3/5

The description implicitly guides the agent to use get_section for full content by mentioning 'call hints', but it does not explicitly state when to use this tool versus alternatives like list_docs or resolve_topic. There are no exclusion criteria or clear context for usage.

validate_function: A
Read-only, Idempotent

Check if a Pine Script v6 function name is valid.

Parameters (JSON Schema)

- fn_name (required): Function name to validate (e.g., "ta.sma", "strategy.entry", "plot")

Output Schema

- type (required)
- valid (required)
- function (required)
- suggestion (optional)
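The valid/function/suggestion output fields suggest an exact lookup with a fuzzy fallback. A sketch against a tiny illustrative sample of known names; the real server's function list and matching logic are unknown:

```python
import difflib

KNOWN = {"ta.sma", "ta.rsi", "strategy.entry", "plot"}  # illustrative sample only

def validate_function(fn_name):
    # Exact lookup first; on a miss, offer the closest known name (if any).
    if fn_name in KNOWN:
        return {"type": "function", "valid": True,
                "function": fn_name, "suggestion": None}
    close = difflib.get_close_matches(fn_name, KNOWN, n=1)
    return {"type": "function", "valid": False,
            "function": fn_name, "suggestion": close[0] if close else None}
```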
Behavior: 3/5

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating no side effects. The description adds that it validates a function name, but does not describe the response format or error handling. Adequate but not enriched beyond annotations.

Conciseness: 5/5

The description is a single efficient sentence that directly states the tool's function with no wasted words. It is front-loaded and concise.

Completeness: 4/5

For a simple validation tool with clear annotations and an output schema, the description is nearly complete. The only slight gap is the lack of detail on return values, but the output schema likely covers this.

Parameters: 3/5

The input schema provides a clear description of the 'fn_name' parameter with examples, achieving 100% coverage. The tool description does not add additional meaning beyond what is already in the schema.

Purpose: 5/5

The description clearly states the verb 'check' and the resource 'Pine Script v6 function name validity'. It is distinct from sibling tools which deal with documentation and prompts, so no confusion.

Usage Guidelines: 3/5

The description does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives. However, the purpose is straightforward and an agent can infer typical use cases.
