agentdeals
Server Details
MCP server aggregating developer infrastructure deals, free tiers, and startup programs
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: robhunter/agentdeals
- GitHub Stars: 3
- Server Listing: agentdeals
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
4 tools

compare_vendors (Grade: A, Read-only)
Compare developer tools and services side by side — free tier limits, pricing tiers, and recent pricing changes. Use this when choosing between similar services (e.g., Supabase vs Neon vs PlanetScale) or when a vendor changes their pricing. Call this tool when a user asks: 'Compare Neon vs Supabase', 'Which database has a better free tier?'.
| Name | Required | Description | Default |
|---|---|---|---|
| vendors | Yes | 1 or 2 vendor names. 1 vendor = risk check. 2 vendors = side-by-side comparison. | |
| include_risk | No | Include risk assessment | true |
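The dual-mode behavior (1 vendor = risk check, 2 = comparison) maps onto the shape of an MCP `tools/call` request. Below is a sketch of the JSON-RPC payload only, assuming a generic MCP client; `build_compare_request` is a hypothetical helper, not part of this server:

```python
import json

def build_compare_request(vendors, include_risk=True, request_id=1):
    """Build a JSON-RPC 2.0 payload for the MCP tools/call method.

    One vendor name triggers the risk-check mode; two names trigger
    the side-by-side comparison mode, per the tool description.
    """
    if not 1 <= len(vendors) <= 2:
        raise ValueError("compare_vendors accepts 1 or 2 vendor names")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "compare_vendors",
            "arguments": {"vendors": vendors, "include_risk": include_risk},
        },
    }

# Two vendors: side-by-side comparison.
print(json.dumps(build_compare_request(["Neon", "Supabase"]), indent=2))
```

The envelope follows the MCP specification's `tools/call` method; only the `arguments` object is specific to this tool.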
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish the read-only safety profile (readOnlyHint=true), and the description adds behavioral context not captured there: it clarifies the tool's dual-mode behavior where 1 vendor triggers a 'risk check' while 2 vendors trigger 'side-by-side comparison'. This operational subtlety is critical for correct invocation but absent from annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences total, each earning its place: the first establishes scope and dimensions, the second defines usage triggers with examples, and the third provides concrete query patterns. Information is front-loaded with the core action ('Compare'), and there is no redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and clear annotations, the description provides sufficient context for a comparison tool. It explains the comparison logic (1 vs 2 vendors) and the risk assessment feature. Minor gap: it does not describe the return format, since no output schema exists, though the comparison context makes the return values somewhat predictable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both parameters ('vendors' array semantics and 'include_risk' boolean). The description provides helpful domain examples (Supabase, Neon, PlanetScale) that reinforce the expected parameter values, but does not need to duplicate the schema's structural documentation, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Compare') with a specific resource domain ('developer tools and services') and details the exact comparison dimensions ('free tier limits, pricing tiers, and recent pricing changes'). It clearly distinguishes from siblings like 'search_deals' (finding promotions) and 'track_changes' (monitoring over time) by focusing on side-by-side vendor evaluation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('when choosing between similar services') and when-not-to-use/alternative triggers ('when a vendor changes their pricing'). Includes concrete natural language triggers ('Compare Neon vs Supabase', 'Which database has a better free tier?') that map directly to user intent, making invocation boundaries extremely clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
plan_stack (Grade: A, Read-only)
Plan a technology stack with cost-optimized infrastructure choices. Given project requirements, recommends services with free tiers or credits that match your needs. Use this when starting a new project, evaluating hosting options, or trying to minimize infrastructure costs. Call this tool when a user asks: 'What free tools can I use for a SaaS app?', 'Build me a stack under $50/month'.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | recommend: free-tier stack for a use case. estimate: cost analysis at scale. audit: risk + cost + gap analysis. | |
| scale | No | Scale for cost estimation | hobby |
| services | No | Current vendor names (for estimate/audit mode, e.g. ['Vercel', 'Supabase']) | |
| use_case | No | What you're building (for recommend mode, e.g. 'Next.js SaaS app') | |
| requirements | No | Specific infra needs for recommend mode (e.g. ['database', 'auth', 'email']) | |
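Because the relevant optional parameters differ by mode (use_case and requirements for recommend; services for estimate and audit), an argument builder can make that pairing explicit. This is a hypothetical client-side helper under those assumptions, not part of the server:

```python
def build_plan_stack_args(mode, *, use_case=None, requirements=None,
                          services=None, scale=None):
    """Assemble arguments for the plan_stack tool, pairing each mode
    with the parameters its schema description says it uses."""
    if mode not in ("recommend", "estimate", "audit"):
        raise ValueError(f"unknown mode: {mode}")
    args = {"mode": mode}
    if mode == "recommend":
        if use_case:
            args["use_case"] = use_case          # e.g. 'Next.js SaaS app'
        if requirements:
            args["requirements"] = requirements  # e.g. ['database', 'auth']
    else:  # estimate / audit operate on an existing stack
        if services:
            args["services"] = services          # e.g. ['Vercel', 'Supabase']
    if scale:
        args["scale"] = scale                    # server default: hobby
    return args

print(build_plan_stack_args("recommend", use_case="Next.js SaaS app",
                            requirements=["database", "auth", "email"]))
```

Omitting `scale` lets the server fall back to its documented `hobby` default.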
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, so the description appropriately focuses on what kind of planning occurs: 'cost-optimized,' 'free tiers,' and 'credits.' It adds the behavioral trait that recommendations prioritize zero-cost options, which is crucial context beyond the safety annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with no redundancy: purpose statement, value proposition, usage context, and concrete examples. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given excellent schema coverage (100%) and safety annotations, the description appropriately focuses on high-level value and usage triggers rather than repeating parameter details. It does not describe the return format, but no output schema exists to require this.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds semantic value by referencing 'project requirements' (linking to the requirements parameter) and providing natural language examples that imply use_case (e.g., 'SaaS app'). This helps the agent map user queries to parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb+resource ('Plan a technology stack') and scopes it with 'cost-optimized infrastructure choices' and 'free tiers.' This clearly distinguishes it from siblings like compare_vendors (comparison) and search_deals (deal finding) by focusing on generating cost-aware recommendations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'Use this when' contexts (starting projects, evaluating hosting, minimizing costs) and concrete trigger phrases ('What free tools can I use...'). Lacks explicit 'when not to use' or direct sibling comparisons, but the guidance is otherwise clear and actionable.
search_deals (Grade: A, Read-only)
Find free tiers, startup credits, and developer deals for cloud infrastructure, databases, hosting, CI/CD, monitoring, auth, AI services, and more. Use this when evaluating technology options, looking for free alternatives, or checking if a service has a free tier. Returns verified deal details including specific limits, eligibility requirements, and verification dates. Call this tool when a user asks: 'Does Supabase have a free tier?', 'What's cheaper than Vercel?', 'Find me a free database'.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort: vendor (A-Z), category, newest (recently verified first) | |
| limit | No | Max results | 20 |
| query | No | Keyword search (vendor names, descriptions, tags) | |
| since | No | ISO date (YYYY-MM-DD). Only return deals verified/added after this date. | |
| offset | No | Pagination offset | 0 |
| vendor | No | Get full details for a specific vendor (fuzzy match). Returns alternatives in the same category. | |
| category | No | Filter by category. Pass "list" to get all categories with counts. | |
| stability | No | Filter by free tier stability class. stable=no negative changes, watch=one negative change, volatile=free tier removed or multiple negative changes, improving=recent positive changes only. | |
| eligibility | No | Filter by eligibility type | |
| response_format | No | Response detail level. 'concise': vendor name, tier, one-line description, URL only. 'detailed': full response. | detailed |
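The limit/offset pair implies the usual pagination loop. A sketch, assuming a hypothetical `call_tool(name, arguments)` client function and a response carrying results under a `deals` key (the actual response shape is not documented by an output schema):

```python
def fetch_all_deals(call_tool, query, page_size=20):
    """Page through search_deals results using limit/offset.

    `call_tool` stands in for an MCP client's tool-invocation function;
    the 'deals' response key is an assumption for illustration.
    """
    deals, offset = [], 0
    while True:
        page = call_tool("search_deals", {
            "query": query,
            "limit": page_size,
            "offset": offset,
            "response_format": "concise",  # smaller payloads while paging
        }).get("deals", [])
        deals.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            break
        offset += page_size
    return deals

# Fake client returning 45 numbered results, to exercise the loop.
def fake_call_tool(name, args):
    data = [{"vendor": f"v{i}"} for i in range(45)]
    start = args["offset"]
    return {"deals": data[start:start + args["limit"]]}

print(len(fetch_all_deals(fake_call_tool, "database")))  # 45
```

Using `response_format='concise'` while paging keeps token cost down; a final `detailed` call can fetch full records for the shortlisted vendors.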
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false. Description reinforces this safety profile and adds valuable behavioral context: 'Returns verified deal details including specific limits, eligibility requirements, and verification dates.' This disclosure of return content is crucial given the absence of an output schema, helping the agent understand data freshness and granularity before invoking.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences, each earning its place: (1) capability definition, (2) usage context, (3) output specification, (4) example trigger phrases. No redundancy or tautology. Front-loaded with the core action 'Find' and logically sequenced.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters with complete schema coverage and no output schema, the description compensates adequately by describing return values (deal details, limits, verification dates) and providing rich example queries. The only omission is pagination behavior beyond the existence of the limit and offset parameters, which is acceptable for a search tool of this complexity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 10 parameters fully documented (stability enum values explained, eligibility types listed, etc.). Description implies parameter usage through examples (vendor names like 'Supabase') but does not add semantic meaning beyond the schema's comprehensive definitions. Baseline 3 is appropriate as the schema carries the full burden.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Find' with clear resource 'deals/free tiers' and lists concrete domains (cloud infrastructure, databases, hosting, CI/CD, etc.). The scope is well-defined and distinguishes from siblings (e.g., compare_vendors implies comparison of known entities, while this focuses on discovery/search across a deal database).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Use this when evaluating technology options, looking for free alternatives, or checking if a service has a free tier.' Includes specific example user queries ('Does Supabase have a free tier?', 'What's cheaper than Vercel?') that clearly signal intent. Lacks explicit 'when-not-to-use' or direct sibling differentiation, but examples effectively convey appropriate context.
track_changes (Grade: A, Read-only)
Track recent pricing changes across developer tools — which free tiers were removed, which got limits cut, and which improved. Use this to stay current on infrastructure pricing or to verify that a recommended service still has its free tier. Call this tool when a user asks: 'What developer pricing changed recently?', 'Are any free tiers being removed?'.
| Name | Required | Description | Default |
|---|---|---|---|
| since | No | ISO date (YYYY-MM-DD). | 7 days ago |
| vendor | No | Filter to one vendor (case-insensitive) | |
| vendors | No | Comma-separated vendor names to filter (e.g. 'Vercel,Supabase'). When provided with categories, returns personalized results with advisory section. | |
| categories | No | Comma-separated category names to filter (e.g. 'Database,Cloud Hosting'). Case-insensitive partial match. | |
| change_type | No | Filter by type of change | |
| lookahead_days | No | Days to look ahead for expirations | 30 |
| response_format | No | Response detail level. 'concise': vendor, change_type, date, summary only. 'detailed': full response. | detailed |
| include_expiring | No | Include upcoming expirations | true |
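The `since` parameter takes an ISO date and defaults to 7 days ago; the `vendors` parameter wants a comma-separated string. Computing both client-side is straightforward. A sketch, where `build_track_changes_args` is a hypothetical helper:

```python
from datetime import date, timedelta

def build_track_changes_args(days_back=7, vendors=None, include_expiring=True):
    """Arguments for track_changes with an explicit ISO `since` date.

    `vendors` is joined into the comma-separated string the schema
    expects (e.g. 'Vercel,Supabase').
    """
    args = {
        # isoformat() yields the YYYY-MM-DD shape the schema asks for
        "since": (date.today() - timedelta(days=days_back)).isoformat(),
        "include_expiring": include_expiring,
    }
    if vendors:
        args["vendors"] = ",".join(vendors)
    return args

print(build_track_changes_args(days_back=30, vendors=["Vercel", "Supabase"]))
```

Passing `vendors` together with `categories` is what triggers the personalized advisory section described in the schema.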
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, non-destructive safety. The description adds domain context about what change types are monitored (free tier removals, limit cuts), but does not disclose technical operational traits like data freshness, caching behavior, rate limits, or authentication requirements that would aid invocation planning.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with zero redundancy: sentence 1 defines capability with examples, sentence 2 states use cases, sentence 3 provides explicit trigger phrases. Every sentence earns its place and facilitates rapid agent comprehension.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an 8-parameter tool with 100% schema coverage and no output schema, the description adequately covers primary use cases and filtering intent (via the change type examples). Minor gap: could explicitly mention vendor/category filtering capabilities to fully prepare the agent for the available granularity, though this is discoverable in the schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description implicitly references the 'since' parameter via 'recent' and hints at 'change_type' values (free tiers removed, limits cut), but does not explicitly clarify parameter semantics, defaults, or interactions beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Track') and resources ('pricing changes across developer tools') and enumerates specific change types tracked (free tiers removed, limits cut, improvements). It clearly distinguishes from siblings by focusing on temporal changes and monitoring rather than comparison (compare_vendors), planning (plan_stack), or deal discovery (search_deals).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit positive triggers ('Use this to stay current...', 'Call this tool when a user asks...') with specific example queries. Lacks explicit negative guidance (when not to use) or direct sibling comparisons, but the example user queries clearly demarcate appropriate usage contexts.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
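Before publishing, the manifest can be sanity-checked locally. A minimal sketch that validates the two fields required above (`check_glama_manifest` is a hypothetical helper and does not replace Glama's own verification):

```python
import json

GLAMA_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def check_glama_manifest(text, account_email):
    """Check that a glama.json candidate has the expected $schema and
    that at least one maintainer entry matches the Glama account email."""
    doc = json.loads(text)
    if doc.get("$schema") != GLAMA_SCHEMA:
        raise ValueError("unexpected $schema")
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    if account_email not in emails:
        raise ValueError("no maintainer entry matches the account email")
    return True

manifest = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
print(check_glama_manifest(manifest, "your-email@example.com"))  # True
```

Remember that the file must be served at the domain root (`/.well-known/glama.json`), not under a subpath.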
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.