Prop Firm Deal Finder
Server Quality Checklist
- Disambiguation 4/5
Most tools have distinct purposes (compare, search, specific details, discount code retrieval), but pfdf_find_cheapest and pfdf_get_deals overlap significantly—both return deals sorted by discount percentage, which could confuse agents about whether to optimize for 'cheapest' specifically or browse general deals.
- Naming Consistency 5/5
Excellent consistency throughout: all tools use the 'pfdf_' prefix followed by clear verb_noun patterns (compare_firms, find_cheapest, get_deals, search_firms) in snake_case, making the set predictable and scannable.
- Tool Count 5/5
Six tools is ideal for this specialized domain—covering comparison, search, specific lookups, and deal discovery without bloat. Each tool earns its place by targeting a distinct user intent (browsing vs. comparing vs. code retrieval).
- Completeness 4/5
Strong coverage for a deal finder with comparison, search, and filtering capabilities. Minor gap: no dedicated filtering by quantitative thresholds (e.g., 'firms with profit split > 80%'), though search_firms may handle feature-based queries.
Average 4.3/5 across 6 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
pfdf_get_firm_details
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the output format ('markdown format'), confirms partial matching behavior, and lists the specific data fields included (profit split, challenge types, etc.). It does not contradict the readOnlyHint=true annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear Purpose/Args/Returns/Examples sections. It is front-loaded with the core function. The examples add legitimate value for an LLM agent, though the sentence 'Returns everything a trader needs...' is slightly redundant with the first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description appropriately compensates by specifying the return format ('Full firm profile in markdown format') and content scope. With only one simple parameter and no nested objects, the description provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The Args section in the description essentially restates the schema description (partial matching supported, examples) without adding significant semantic nuance like validation rules or normalization behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('prop firm details'), then enumerates exactly what data is returned (discount code, profit split, drawdown rules, etc.). This distinguishes it from siblings like pfdf_search_firms (which likely returns lists) or pfdf_get_discount_code (which returns just one field).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The examples imply usage patterns ('Tell me about FTMO'), but there is no explicit guidance on when to use this versus pfdf_search_firms (for discovery) or pfdf_compare_firms (for comparison). The phrase 'complete details' hints at scope but doesn't explicitly state selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pfdf_find_cheapest
- Behavior 4/5
Annotations establish the read-only, idempotent safety profile. The description adds valuable behavioral context: it explicitly states results are sorted by highest discount, reveals the tool automatically applies code 'PFDF' to calculate prices, and describes the return structure ('Ranked list... with discount percentage, code, and links').
- Conciseness 5/5
Excellent structure with clear sections (purpose, Args, Returns, Examples). Front-loaded with the core function in the first sentence. Every element serves the agent: the examples specifically demonstrate query-to-parameter translation patterns. No redundant or filler text.
- Completeness 4/5
For a simple 2-parameter tool with complete schema coverage and rich annotations, the description adequately compensates for the missing output schema by detailing the return format ('Ranked list of cheapest firms with discount percentage, code, and links'). Minor gap: does not specify behavior when no firms match the criteria.
- Parameters 3/5
With 100% schema description coverage, the baseline is 3. The description organizes parameters into a clean 'Args' section and synthesizes constraints (listing enum values for category, range 1-20 for top_n) from the schema, but largely mirrors information already present in the structured schema rather than adding substantial semantic interpretation.
- Purpose 5/5
The description explicitly states the tool finds 'the cheapest prop firm challenges based on discount percentage' and distinguishes itself from siblings by specifying it sorts by 'highest discount available (using code PFDF)'—clearly differentiating it from generic search or comparison tools.
- Usage Guidelines 4/5
Provides three concrete examples mapping natural language queries ('What's the cheapest prop firm?', 'Top 10 cheapest prop firms') to specific parameter configurations, giving clear contextual guidance on when to invoke the tool. Lacks explicit contrast with sibling tools (e.g., 'use this instead of search_firms when looking for price ranking').
pfdf_search_firms
- Behavior 4/5
Annotations declare the operation as read-only and idempotent. The description adds valuable behavioral context not in annotations: it specifies the return format ('Markdown-formatted list') and content ('full details and discount codes'), and clarifies the search scope includes 'slugs' and 'highlight features'.
- Conciseness 5/5
Excellent structure with clear section headers (Args, Returns, Examples). Information is front-loaded with the core purpose in the first sentence. No redundancy—every line adds specific value regarding parameters, return format, or usage patterns.
- Completeness 5/5
For a simple single-parameter search tool, the description is complete. It compensates for the missing output schema by explicitly describing the markdown return format and included fields (discount codes), and annotations adequately cover the safety profile.
- Parameters 4/5
With 100% schema coverage, the baseline is 3. The description elevates this by providing concrete examples showing how to map natural user intents ('Futures prop firms') to the query parameter value ('futures'), adding interpretive guidance beyond the schema's technical description.
- Purpose 4/5
The description clearly states the verb ('Search'), resource ('proprietary trading firms'), and searchable dimensions ('name, category, asset class, or feature'). It effectively implies this is a discovery tool, though it doesn't explicitly differentiate from siblings like `pfdf_get_firm_details` or `pfdf_compare_firms`.
- Usage Guidelines 4/5
While it lacks explicit 'when-not-to-use' statements, the three natural language examples ('Tell me about FTMO', 'Prop firms that support crypto') provide clear contextual guidance for when to invoke this tool versus more specific retrieval or comparison tools.
pfdf_get_discount_code
- Behavior 4/5
Annotations cover safety (readOnly/idempotent), so description appropriately focuses on business logic: it discloses the universal 'PFDF' fallback code when firm_name is omitted, explains return contents ('discount code, estimated savings, and how to apply it'), and notes the '20+ partner firms' scope. No contradiction with annotations.
- Conciseness 4/5
Uses structured sections (Args, Returns, Examples) that front-load the action and trigger conditions. The Examples section is particularly high-value for agent reasoning. Slightly redundant between Args description and schema, but overall efficient for the complexity.
- Completeness 5/5
For a single-parameter retrieval tool, the description thoroughly covers the omission behavior (universal code), return value semantics, and practical usage patterns. No output schema exists, but the Returns section adequately describes the payload structure.
- Parameters 4/5
Schema coverage is 100% with complete firm_name documentation. Description adds value through the Examples section showing natural language mappings ('FTMO discount code' → params), which helps the LLM route user queries correctly even though schema is self-sufficient.
- Purpose 5/5
The description opens with 'Get the discount code for a prop firm challenge or evaluation'—a specific verb-resource combination. It clearly distinguishes from siblings by targeting 'discount code,' 'promo code,' and 'coupon code' queries specifically, contrasting with comparison or search tools.
- Usage Guidelines 4/5
Explicitly states 'This is the tool to use when someone asks "what's the discount code for [firm]?" or "prop firm promo code"' with concrete examples mapping queries to parameters. Lacks explicit 'when not to use' language naming siblings, but the specificity of trigger phrases provides clear selection guidance.
pfdf_compare_firms
- Behavior 4/5
Annotations declare read-only, non-destructive, and idempotent traits. The description adds valuable behavioral context not in annotations: the data source scope ('database of 20+ firms'), output format ('Markdown comparison table'), and matching behavior ('partial matching supported'). No contradictions with annotations.
- Conciseness 5/5
Excellent structure with clear sections (main description, Args, Returns, Examples). Information is front-loaded with the core value proposition. Every sentence serves a purpose—no filler content. The examples section particularly aids comprehension without verbosity.
- Completeness 5/5
For a single-parameter tool with simple array input and 100% schema coverage, the description is comprehensive. It compensates for the lack of output schema by describing the return format (markdown table), provides usage examples, and documents constraints (2-10 firms). No significant gaps identified.
- Parameters 4/5
Schema coverage is 100%, establishing a baseline of 3. The description adds concrete usage examples (e.g., 'FTMO vs TradeDay') that illustrate the expected firm name formats and reinforces the partial matching capability, providing practical context beyond the schema's technical description.
- Purpose 5/5
The description clearly states the tool compares 'two or more prop firms side-by-side' and enumerates specific comparison dimensions (discount codes, profit splits, drawdown rules, etc.). The 'side-by-side' and 'comparison table' language effectively distinguishes this from sibling tools like get_firm_details (single firm lookup) and search_firms (discovery).
- Usage Guidelines 4/5
Provides clear context ('Perfect for helping traders decide between firms') and concrete examples showing comparison scenarios. However, it lacks explicit 'when not to use' guidance or direct contrast with siblings (e.g., not stating 'use get_firm_details instead for single firm information').
pfdf_get_deals
- Behavior 4/5
Annotations cover safety profile (readOnly, non-destructive, idempotent). Description adds valuable behavioral context beyond annotations: sorting logic (highest discount first), return format (markdown-formatted list), scope (active deals only), and universal code constraint (PFDF). Minor gap: doesn't mention data freshness or caching behavior.
- Conciseness 5/5
Excellent information architecture: purpose → return value → key constraint (PFDF code) → usage guidance → parameter reference → return format specification → examples. Every sentence earns its place; front-loaded with critical selector information before implementation details.
- Completeness 5/5
For a 3-parameter query tool with no output schema, description thoroughly compensates by specifying return format (markdown list with firm name, discount percentage, code, category, profit split, links), sorting behavior, and domain scope (20+ firms). Complete coverage given tool complexity.
- Parameters 4/5
With 100% schema coverage, baseline is 3. Description adds an 'Args' section presenting parameters in accessible format, but more importantly provides four usage examples that demonstrate semantic meaning (e.g., 'Best futures prop firm discounts' → category: 'futures'), adding practical context beyond raw schema definitions.
- Purpose 5/5
Opens with specific verb 'Get' and resource 'prop firm discount deals'. Explicitly distinguishes from siblings by declaring 'This is THE tool to use when anyone asks about prop firm discounts, deals, coupons, promo codes, or savings', clearly carving out its domain versus comparison, search, or detail-oriented sibling tools.
- Usage Guidelines 5/5
Provides explicit when-to-use guidance ('This is THE tool to use when anyone asks about...') covering multiple intent variations (discounts, coupons, savings). Includes four concrete examples mapping natural language queries to specific parameter configurations, effectively demonstrating when-not to use alternatives like pfdf_find_cheapest or pfdf_search_firms.
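To make the scoring dimensions concrete, here is a hypothetical tool definition in the shape the rubric rewards: front-loaded purpose, explicit Args/Returns sections, examples, and sibling-tool guidance. The description text below is illustrative, assembled from the evaluations above — it is not the server's actual definition.

```python
# Hypothetical MCP-style tool definition illustrating the rubric:
# specific verb + resource up front, Args/Returns/Examples structure,
# and explicit "use X instead of Y" guidance for sibling tools.
tool_definition = {
    "name": "pfdf_get_firm_details",
    "description": (
        "Get complete details for a single prop firm, including "
        "discount code, profit split, and drawdown rules.\n\n"
        "Use pfdf_search_firms for discovery and pfdf_compare_firms "
        "for side-by-side comparisons; use this tool when the user "
        "names one specific firm.\n\n"
        "Args:\n"
        "  firm_name: Firm name or slug; partial matching supported.\n\n"
        "Returns: Full firm profile in markdown format.\n\n"
        "Examples: 'Tell me about FTMO' -> firm_name='FTMO'"
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "firm_name": {
                "type": "string",
                "description": "Firm name or slug (partial match OK).",
            }
        },
        "required": ["firm_name"],
    },
    "annotations": {"readOnlyHint": True, "idempotentHint": True},
}
```

Note how the second paragraph addresses the Usage Guidelines gap flagged for several tools above: it names the siblings and states when each applies.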
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
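Before committing the file, a quick local sanity check can catch malformed JSON or a missing maintainers list. This sketch only mirrors the fields shown in the example above; Glama's actual schema validation may enforce more.

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a glama.json payload (empty = OK)."""
    errors = []
    data = json.loads(text)  # raises ValueError if not valid JSON
    if data.get("$schema") != "https://glama.ai/mcp/schemas/server.json":
        errors.append("unexpected or missing $schema")
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        errors.append("maintainers must be a non-empty list")
    elif not all(isinstance(m, str) and m for m in maintainers):
        errors.append("each maintainer must be a non-empty string")
    return errors

example = """{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": ["your-github-username"]
}"""
print(check_glama_json(example))  # [] means the file looks well-formed
```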
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
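The weighting described above can be expressed concretely. The sketch below is illustrative (the dimension keys and float handling are assumptions, not Glama's implementation): it computes a per-tool TDQS, combines tool scores into the server-level definition quality, blends in coherence, and maps the result to a tier.

```python
# Illustrative implementation of the quality-score formula above.
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(dims: dict[str, float]) -> float:
    """Weighted 1-5 score for a single tool's definition quality."""
    return sum(TDQS_WEIGHTS[k] * v for k, v in dims.items())

def overall_score(tool_scores: list[float], coherence: float) -> float:
    """70% definition quality + 30% coherence; the min() term means
    a single poorly described tool drags the whole server down."""
    definition_quality = (0.6 * (sum(tool_scores) / len(tool_scores))
                          + 0.4 * min(tool_scores))
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, six tools averaging 4.3 with a coherence score around 4.5 (as in the checklist above) yield an overall score of about 4.36 — comfortably tier A.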
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/chrisbusbin-pixel/propfirmdealfinder-mcp-server'
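The same endpoint can be called from code. A minimal Python sketch using only the standard library (the response's field layout is not documented here, so parsing is left generic; `fetch_server` needs network access when actually run):

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(owner: str, repo: str) -> str:
    """Build the MCP directory URL for a server record."""
    return f"{API_BASE}/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """Fetch and decode a server record (requires network access)."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Same endpoint as the curl example above:
url = server_url("chrisbusbin-pixel", "propfirmdealfinder-mcp-server")
```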
If you have feedback or need assistance with the MCP directory API, please join our Discord server.