Server Quality Checklist
- Disambiguation 4/5
While each tool has a distinct purpose, analyze_pattern functionally overlaps with the combination of search_charts, get_follow_through, and get_pattern_summary. This creates a potential ambiguity where agents must choose between the monolithic 'recommended' approach and the granular steps, though the descriptions clarify the relationship (both paths are sketched just after this list).
- Naming Consistency 5/5
All tools follow a consistent verb_noun snake_case pattern (analyze_pattern, search_charts, get_status, etc.). The naming convention is predictable throughout, with no mixing of styles or unexpected verb choices.
- Tool Count 5/5
Seven tools is well-scoped for a chart pattern analysis domain. The set includes single/batch search variants, composite and atomic analysis steps, discovery features, and status utilities without feeling bloated or incomplete.
- Completeness 4/5
Covers the core pattern analysis lifecycle: search, forward-return analysis, summarization, and discovery. There is a minor gap in that get_follow_through and get_pattern_summary explicitly expect search_charts results, while search_batch appears to include its own analysis. There are no chart data export or pattern management tools, but the core workflows are solid.
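To make the overlap concrete, here is a minimal agent-side sketch of the two paths in Python. The generic client.call_tool(name, args) helper and the exact argument names (query, timeframe, top_k, results, horizon_returns, query_label) are assumptions pieced together from the per-tool notes below, not the server's documented API.

# Hypothetical agent-side sketch; client.call_tool(name, args) stands in for
# whatever MCP client is in use. Argument names are inferred from the tool
# score notes below and may not match the server's real schemas.

def analyze_granular(client, query="AAPL 2024-06-15"):
    # Step 1: find historically similar charts.
    matches = client.call_tool("search_charts", {
        "query": query,      # "SYMBOL YYYY-MM-DD" per the description notes
        "timeframe": "rth",  # enum per the notes: rth, premarket, ...
        "top_k": 10,         # constrained to 1-50
    })
    # Step 2: forward-return analysis; get_follow_through expects
    # search_charts results (a list of {symbol, date, timeframe, metadata}).
    returns = client.call_tool("get_follow_through", {"results": matches})
    # Step 3: AI-written plain-English summary of the findings.
    return client.call_tool("get_pattern_summary", {
        "horizon_returns": returns,
        "query_label": query,
    })

def analyze_monolithic(client, query="AAPL 2024-06-15"):
    # The composite 'recommended' path: one call covering all three steps.
    return client.call_tool("analyze_pattern", {
        "query": query, "timeframe": "rth", "top_k": 10,
    })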
Average 4.1/5 across 7 of 7 tools scored. Lowest: 3.4/5.
See the tool scores section below for per-tool breakdowns.
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v1.0.0
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- Add a glama.json file to provide metadata about your server.
- This server provides 7 tools.
- No known security issues or vulnerabilities reported.
- This server has been verified by its author.
- Add related servers to improve discoverability.
Tool Scores
get_pattern_summary
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes output characteristics (AI-written, retail-focused, 2-3 sentences) but lacks operational details like idempotency, side effects, or rate limits, given that no annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with a purpose statement, return description, and Args section; appropriately concise with no redundant content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete given the parameter complexity and lack of schema descriptions; covers all inputs and appropriately defers to the output schema for return details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates effectively for 0% schema description coverage by providing semantic meanings and examples for all three parameters (e.g., the horizon_returns structure, the query_label format).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states that it generates AI-written plain-English summaries of pattern search results and distinguishes itself from its analysis/search siblings, though it could explicitly mention that it consumes output from the search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for summarizing pattern results but provides no explicit guidance on when to use it versus analyze_pattern or other siblings, nor on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse; a hypothetical example follows.
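To illustrate the kind of routing sentence this dimension rewards, here is a hypothetical rewrite of this tool's description; the wording below is ours, not the server's actual docstring.

# Hypothetical description illustrating explicit usage guidance;
# not the server's actual text.
GET_PATTERN_SUMMARY_DOC = """\
Generate a short, AI-written plain-English summary of pattern search results.

Use this after search_charts and get_follow_through when you only need prose.
Use analyze_pattern instead when you want search, forward-return analysis,
and the summary in a single call. Do not call this without prior search results.
"""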
get_status
- Behavior 4/5
No annotations are provided; the description compensates by detailing specific return values (embeddings, coverage, symbols) beyond just 'statistics'.
- Conciseness 5/5
Two tight sentences with no redundancy; purpose front-loaded, return values second.
- Completeness 4/5
Adequate for a low-complexity tool with an existing output schema; mentions return values (redundant but helpful), though it could note the tool's idempotent/safe nature.
- Parameters 4/5
Zero-parameter tool; baseline 4 per the scoring rules.
- Purpose 4/5
Clear verb ('Get') and resource ('Chart Library database statistics'); implicitly distinct from the search/analyze siblings, though it lacks an explicit contrast.
- Usage Guidelines 2/5
No guidance on when to invoke it versus alternatives (e.g., before searches) or when not to use it.
search_charts
- Behavior 3/5
Discloses the dataset scale (800M+ minute bars) and result format (match scores, distances), but omits operational details like rate limits, caching, or idempotency, since no annotations are provided.
- Conciseness 4/5
Well-structured with the purpose upfront, followed by a usage explanation and parameter details; the 'Args:' section is slightly informal, but every sentence earns its place.
- Completeness 4/5
Sufficiently complete given that an output schema exists; covers input requirements and high-level result types without needing to detail return values.
- Parameters 5/5
Excellent compensation for 0% schema coverage, providing detailed semantics for all three parameters, including format examples, valid enum values (rth, premarket), and constraints (1-50).
- Purpose 4/5
Clearly states that the tool searches for historically similar chart patterns, using specific verbs and resources, though it doesn't explicitly differentiate itself from sibling tools like 'analyze_pattern' or 'search_batch'.
- Usage Guidelines 3/5
Provides input format examples ('AAPL 2024-06-15') and explains parameters, but lacks explicit guidance on when to use this tool versus 'analyze_pattern' or other siblings.
Daily picks (discovery tool)
- Behavior 4/5
Discloses the data source ('automated nightly scan') and return content ('AI summaries, predicted returns, confidence scores') beyond what annotations provide (none are present).
- Conciseness 4/5
Front-loaded purpose statement followed by behavioral details and a structured Args section; every sentence earns its place without redundancy.
- Completeness 4/5
Sufficient for the tool's complexity: covers the temporal context (daily/nightly), a return-value preview, and parameter details, despite the presence of an output schema.
- Parameters 5/5
Fully compensates for 0% schema description coverage by providing the format (YYYY-MM-DD), constraints (1-50), and default values for both parameters in the Args block.
- Purpose 5/5
The specific verb 'Get', the resource 'daily chart pattern picks', and the ranking method 'by interest score' clearly distinguish this discovery tool from the analysis/search siblings.
- Usage Guidelines 3/5
Implies daily usage via the 'nightly scan' but lacks explicit when-to-use guidance or a comparison to alternatives like search_charts or analyze_pattern.
get_follow_through
- Behavior 4/5
Discloses the specific look-ahead periods (1, 3, 5, 10 days) and return formats (% returns, cumulative paths) despite no annotations being provided.
- Conciseness 5/5
Extremely concise, with three information-dense sentences; front-loaded with the purpose, followed by usage instructions and parameter details.
- Completeness 4/5
Sufficiently complete given the presence of an output schema; mentions key behavioral details (time horizons) without redundantly detailing return values.
- Parameters 5/5
With 0% schema description coverage, the description crucially compensates by detailing the expected 'results' structure (a list of {symbol, date, timeframe, metadata} objects; a concrete example follows this list) and its source.
- Purpose 4/5
Clearly states that it performs forward-return analysis on search results and specifically names 'search_charts' as the prerequisite, distinguishing it from the sibling search and analysis tools.
- Usage Guidelines 4/5
Explicitly instructs the agent to pass in results from 'search_charts', establishing a clear workflow sequence, though it lacks explicit 'when not to use' guidance.
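For concreteness, the 'results' payload the description reportedly documents would look roughly like this. The field values below are invented for illustration, and any keys beyond {symbol, date, timeframe, metadata} are not specified in the report.

# Hypothetical shape of the `results` argument to get_follow_through,
# reconstructed from the report's "{symbol, date, timeframe, metadata}" note.
# The values are illustrative, not real search_charts output.
results = [
    {
        "symbol": "AAPL",        # ticker of the matched chart
        "date": "2024-06-15",    # YYYY-MM-DD, matching the query format
        "timeframe": "rth",      # e.g. rth or premarket, per the enum notes
        "metadata": {},          # match details (score, distance, ...)
    },
]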
search_batch
- Behavior 4/5
Discloses parallel execution, the 'forward return statistics' return type, and the 20-symbol cap, though it omits read-only/idempotency status.
- Conciseness 4/5
Front-loaded purpose statement followed by a dense but readable Args section; slightly informal 'Args:' formatting, but every sentence provides essential information.
- Completeness 4/5
Appropriately mentions the return type (forward return statistics) without duplicating output schema details; covers all parameters and key constraints for the tool's complexity level.
- Parameters 5/5
Comprehensively compensates for 0% schema coverage by documenting all four parameters with examples, formats (YYYY-MM-DD), enum values (rth, premarket, etc.), and constraints (1-50, max 20 symbols).
- Purpose 5/5
Explicitly states that it searches for similar patterns across multiple symbols and clearly distinguishes itself as the 'Batch version of search_charts'.
- Usage Guidelines 4/5
Implies the usage context by contrasting itself with search_charts and mentions the 20-symbol limit, but doesn't explicitly state when to prefer the single-symbol alternative.
analyze_pattern
- Behavior 4/5
Discloses that it aggregates three underlying tools and involves AI-generated content, compensating for the lack of annotations; misses potential limitations or performance implications.
- Conciseness 5/5
Efficiently front-loaded with the value proposition ('Complete pattern analysis in one call'), followed by composition details, return values, and structured parameter documentation, without redundancy.
- Completeness 5/5
Comprehensive for a composite tool: covers purpose, sibling relationships, parameter semantics, and return structure, despite the presence of an output schema.
- Parameters 5/5
With 0% schema description coverage, the Args section fully compensates by providing detailed formats (e.g., 'AAPL 2024-06-15'), valid enum values for timeframe, and constraint ranges (1-50).
- Purpose 5/5
Clearly states that it performs complete pattern analysis by combining search_charts, get_follow_through, and get_pattern_summary, explicitly distinguishing it from its sibling tools.
- Usage Guidelines 4/5
Explicitly states that it is the 'recommended tool for most use cases' and implies that it composes its siblings, though it could state more explicitly when to use the individual components instead.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
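Put as code, the published formula works out as follows. This is a sketch assuming only the weights and tier cutoffs quoted above; the function and constant names are ours, not Glama's.

# Sketch of the published scoring formula; names are illustrative, not Glama's.
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 definition-quality score for a single tool."""
    return sum(TDQS_WEIGHTS[dim] * scores[dim] for dim in TDQS_WEIGHTS)

def server_definition_quality(per_tool: list) -> float:
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the score down."""
    tdqs = [tool_tdqs(t) for t in per_tool]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    for cutoff, letter in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return letter
    return "F"

# Worked check against the report: the weakest tool above (get_pattern_summary,
# with Behavior 3, Conciseness 4, Completeness 4, Parameters 4, Purpose 4,
# Usage 2) comes out at 3.4, matching the "Lowest: 3.4/5" checklist line.
print(tool_tdqs({"purpose": 4, "usage": 2, "behavior": 3,
                 "parameters": 4, "conciseness": 4, "completeness": 4}))
# -> 3.4 (up to floating-point rounding)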