Pine Script
Server Details
MCP server providing Pine Script v6 documentation.
Enables AI to:
Look up Pine Script functions and validate syntax
Access official documentation for indicators, strategies, and visuals
Understand Pine Script concepts (execution model, repainting, etc.)
Generate correct v6 code with proper function references

- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 10 of 10 tools scored. Lowest: 3.5/5.
Each tool has a clearly distinct purpose: get_doc retrieves whole files, get_section retrieves specific sections, list_docs and list_sections enumerate available content, search_docs and resolve_topic provide different search methods, get_functions lists functions, validate_function checks a single function, and get_prompt/list_prompts handle MCP prompts. No two tools overlap ambiguously.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., list_docs, get_section, resolve_topic). The verbs (get, list, search, validate, resolve) are descriptive and uniformly applied. No mixed conventions or irregular names.
With 10 tools, the set is well-scoped for a documentation server. Each tool covers a specific access pattern (list, get, search, validate) without redundancy or excessive granularity. The count is within the ideal range for efficient agent navigation.
The tool surface fully covers the domain of reading Pine Script documentation: listing all docs and sections, retrieving whole files or specific sections, searching by full-text or exact term, validating function names, and listing/generating MCP prompts. No obvious gaps exist for the stated purpose.
Available Tools
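Every tool below is invoked through MCP's standard tools/call request. As a sketch of the envelope (the shape is the standard MCP one; the tool name and argument values are illustrative, drawn from this page's own examples):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_docs",
    "arguments": { "query": "ta.rsi", "max_results": 5 }
  }
}

The per-tool sketches that follow show only the arguments object, since the envelope is identical for every call.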
10 tools

get_doc (Grade A, Read-only, Idempotent)
Read a specific Pine Script v6 documentation file.
For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) prefer list_sections() + get_section() to avoid loading 1000-2800 line files into context.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Relative path to the documentation file (e.g., "reference/functions/ta.md") | |
| limit | No | Maximum characters to return. Use 30000 for large files to avoid token limits. | |
| offset | No | Character offset to start reading from | 0 |
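A sketch of the arguments for reading a large file, using the example path from the table and the 30000-character limit the description itself recommends:

{
  "path": "reference/functions/ta.md",
  "limit": 30000,
  "offset": 0
}

A follow-up call can advance offset to page through the remainder.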
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint and idempotentHint. The description adds context about potential large file sizes and suggests limits, which is useful beyond annotations, though it doesn't detail error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states core purpose, second provides critical usage guidance. No wasted words, front-loaded, and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description could explain return format, but the action 'Read' implies file content. All parameters are documented, and annotations cover safety. Minor gap on error handling, but sufficient for a simple read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by suggesting a specific limit value (30000) for large files, enhancing parameter understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reads a specific Pine Script v6 documentation file. It distinguishes from siblings like list_docs and get_section by specifying the action and resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises against using this tool for large files and recommends alternative tools (list_sections + get_section), providing clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_functions (Grade A, Read-only, Idempotent)
Get valid Pine Script v6 functions, optionally filtered by namespace.
Use before writing Pine Script to see which functions exist. For checking a single function name, use validate_function() instead.
| Name | Required | Description | Default |
|---|---|---|---|
| namespace | No | Filter by namespace (e.g., "ta", "strategy", "request"). Empty string returns all functions grouped by namespace. | |
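For example, to list only the ta namespace (the value comes from the schema's own examples; an empty string would return every namespace):

{ "namespace": "ta" }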
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true and idempotentHint=true, indicating safe, idempotent operation. The description adds no further behavioral details beyond 'valid functions', so it does not significantly enhance transparency beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose and optional filter, second gives usage context and sibling alternative. No unnecessary words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and strong annotations, the description covers purpose, usage timing, and alternative. No output schema needed; return values are self-evident. Complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description merely restates the schema's info about namespace filtering and empty string behavior. No new parameter semantics are added beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets 'valid Pine Script v6 functions' with optional namespace filtering. It distinguishes from sibling 'validate_function' by specifying that for a single function name, one should use the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use before writing Pine Script to see which functions exist' and gives an alternative for single function name validation, providing clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prompt (Grade A)
Get a prompt by name with optional arguments.
Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name of the prompt to get | |
| arguments | No | Optional arguments for the prompt | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
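A sketch of a call; the prompt name and argument key here are hypothetical, since this page does not list the server's actual prompts (discover them with list_prompts first):

{
  "name": "example-prompt",
  "arguments": { "topic": "repainting" }
}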
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the burden is on the description. It explains the return format ('rendered prompt as JSON with a messages array') and how to provide arguments ('dict mapping argument names to values'). No mention of errors or permissions, but adequate for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the main purpose. Every sentence adds value without repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description does not need to detail return values. It covers purpose, parameters, and behavior sufficiently. Could briefly mention that it's for rendering prompts, but not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage, so baseline is 3. The description adds value by explaining arguments as a 'dict mapping argument names to values,' which is more specific than the schema's 'Optional arguments for the prompt'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get a prompt by name with optional arguments,' which is a specific verb+resource combination. It distinguishes from sibling tools like list_prompts (which lists all) and get_functions (different resource).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage: use this tool to retrieve a specific prompt by name, while list_prompts lists all. However, it does not explicitly state when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_section (Grade A, Read-only, Idempotent)
Get a specific section from a documentation file by its header.
Use after list_sections() shows available headers, or after resolve_topic() / search_docs() identifies the relevant file.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Documentation file path (e.g., "reference/functions/strategy.md") | |
| header | Yes | Header text to find (e.g., "strategy.exit()" or "## strategy.exit()") | |
| include_children | No | Include nested subsections under the header | True |
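Arguments for a typical call, reusing the examples from the parameter table:

{
  "path": "reference/functions/strategy.md",
  "header": "strategy.exit()",
  "include_children": true
}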
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the description doesn't need to repeat safety. It adds usage pattern context but no additional behavioral traits beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with front-loaded purpose and immediate usage guidance. No redundant words; every sentence serves a clear function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with well-documented schema, the description covers purpose and usage. However, it does not mention what the function returns (e.g., section content), leaving a minor gap in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions for each parameter. The description does not add semantic detail beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get'), resource ('section'), and method ('by its header'). It also distinguishes from siblings like get_doc (whole document) and list_sections (list only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context on when to use this tool: after list_sections() or after resolve_topic()/search_docs(). It suggests alternatives but does not explicitly state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_docs (Grade A, Read-only, Idempotent)
List all available Pine Script v6 documentation files with descriptions.
Returns files organised by category with descriptions. For small files use get_doc(path). For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) use list_sections(path) then get_section(path, header).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
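With no parameters, the full tools/call params reduce to an empty arguments object (list_prompts below works the same way):

{ "name": "list_docs", "arguments": {} }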
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering safety. The description adds that files are returned 'organised by category with descriptions,' which provides additional behavioral context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each serving a purpose: primary action, return format, and usage guidelines. It is front-loaded and concise with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and annotations indicating safe operation, the description is complete. It explains the return format (organised by category) and provides guidance on when to use sibling tools for content retrieval.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and schema coverage 100%, the baseline is 4. The description does not need to add parameter info since there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List all available Pine Script v6 documentation files with descriptions,' specifying a specific verb (list) and resource (documentation files). It distinguishes itself from siblings like get_doc and list_sections by providing guidance on when to use them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when to use alternatives: 'For small files use get_doc(path). For large files use list_sections(path) then get_section(path, header).' This gives clear context for usage vs siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_prompts (Grade A)
List all available prompts.
Returns JSON with prompt metadata including name, description, and optional arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description only says it returns JSON with metadata. Doesn't disclose any behavioral traits like rate limits, pagination, or safety, so it falls short.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two sentences, efficient and front-loaded. Could be slightly more concise but overall good.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present (though not shown), the description explains return value. For a parameterless list tool, it is adequately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in the schema, so baseline score is 4. Description does not need to add parameter info since there are none.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all available prompts, which is specific and distinguishes it from sibling tools like get_prompt (single prompt) and list_docs (lists documents).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like get_prompt or search_docs. Agent lacks context for decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sections (Grade A, Read-only, Idempotent)
List all section headers in a doc file. Use before get_section() to find the right header.
Especially useful for large files like ta.md, strategy.md, collections.md, drawing.md, and general.md, which have 50-115 sections each.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Documentation file path (e.g., "reference/functions/ta.md") | |
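A sketch of a call against one of the large files named in the description:

{ "path": "reference/functions/ta.md" }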
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and idempotentHint, so the description doesn't need to repeat them. It adds context about large files but no further behavioral details. Adequate given annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the purpose, followed by usage guidance and examples. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one parameter and no output schema, the description covers purpose, usage, and relevant file examples. It does not describe the return format, but the tool name implies it returns section headers, so sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single 'path' parameter. The description does not add additional meaning beyond what the schema already provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists section headers in a doc file, with a specific verb and resource. It also distinguishes itself from the sibling tool get_section by indicating its use before retrieving sections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states to use before get_section() to find the right header, and provides context about large files where it is especially useful. No explicit when-not-to-use, but the guidance is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_topic (Grade A, Read-only, Idempotent)
Fast lookup for exact Pine Script API terms and known concepts.
Use for exact function names and Pine Script vocabulary (e.g., "ta.rsi", "strategy.entry", "repainting", "request.security").
For natural language questions, read the docs://manifest resource for routing guidance, then use get_doc() or list_sections() + get_section().
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Exact Pine Script term or known concept keyword. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| matches | Yes | |
| suggestion | Yes | |
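A sketch of a lookup and the response shape the output schema implies; the values inside matches and suggestion are placeholders, since their structure is not documented on this page.

Request arguments:

{ "query": "ta.rsi" }

Response shape:

{ "query": "ta.rsi", "matches": ["..."], "suggestion": "..." }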
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, signaling safe and repeatable operations. The description adds that it's a 'fast lookup', which is consistent with read-only behavior. No contradictions. It provides sufficient transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact: two sentences in the first paragraph stating purpose, followed by examples and routing guidance in the second. Every sentence adds value; no filler. Front-loads the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter) and the presence of an output schema, the description adequately covers context. It specifies the scope (exact terms), provides examples, and directs to complementary tools for broader queries. No gaps for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with a single parameter described as 'Exact Pine Script term or known concept keyword.' The description reinforces this usage. No additional semantics added beyond the schema, meeting the baseline expectation for full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for 'Fast lookup for exact Pine Script API terms and known concepts' with specific examples (e.g., 'ta.rsi', 'repainting'). It distinguishes itself from other tools by specifying it's for exact terms, not natural language queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use for exact function names and Pine Script vocabulary' and provides detailed guidance on when not to use it: 'For natural language questions, read the docs... then use get_doc() or list_sections() + get_section().' This clearly outlines alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_docs (Grade A, Read-only, Idempotent)
Search Pine Script v6 documentation and return matching sections.
Finds sections containing the query and returns previews with get_section() call hints so you can read the full content.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Exact string to search for (case-insensitive). | |
| max_results | No | Maximum sections to return | 5 |
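A sketch of the arguments, using a term that appears elsewhere on this page:

{ "query": "strategy.exit", "max_results": 5 }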
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description adds that it returns previews with get_section call hints, which is useful behavioral context. No contradictions between description and annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the main purpose, and contains no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with two parameters, high schema coverage, and good annotations, the description is complete. It explains the tool's output and hints at the next step (get_section). Missing information about sorting or result details is minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already defines the parameters well. The description does not explicitly describe parameters but mentions the query in context. Baseline 3 is appropriate as the description adds little beyond the schema for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as searching Pine Script v6 documentation and returning matching sections. It specifies the verb 'search' and the resource 'documentation sections', and the mention of 'get_section() call hints' distinguishes it from sibling tools like get_section and list_sections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly guides the agent to use get_section for full content by mentioning 'call hints', but it does not explicitly state when to use this tool versus alternatives like list_docs or resolve_topic. There is no exclusion criteria or clear context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_function (Grade A, Read-only, Idempotent)
Check if a Pine Script v6 function name is valid.
| Name | Required | Description | Default |
|---|---|---|---|
| fn_name | Yes | Function name to validate (e.g., "ta.sma", "strategy.entry", "plot") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| type | Yes | |
| valid | Yes | |
| function | Yes | |
| suggestion | No | |
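A sketch of a call and a plausible response; the field names come from the output schema, but the values (including the type string) are assumptions.

Arguments:

{ "fn_name": "ta.sma" }

Possible response:

{ "type": "function", "valid": true, "function": "ta.sma" }

Per the schema, suggestion is optional and would presumably appear only when the name is invalid but close to a known function.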
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, indicating no side effects. The description adds that it validates a function name, but does not describe the response format or error handling. Adequate but not enriched beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence that directly states the tool's function with no wasted words. It is front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple validation tool with clear annotations and an output schema, the description is nearly complete. The only slight gap is the lack of detail on return values, but the output schema likely covers this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides a clear description of the 'fn_name' parameter with examples, achieving 100% coverage. The tool description does not add additional meaning beyond what is already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'check' and the resource 'Pine Script v6 function name validity'. It is distinct from sibling tools which deal with documentation and prompts, so no confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives. However, the purpose is straightforward and an agent can infer typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once verified, you can:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.