Glama

Server Details

MCP server aggregating developer infrastructure deals, free tiers, and startup programs

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: robhunter/agentdeals
GitHub Stars: 3
Server Listing: agentdeals
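The transport above is Streamable HTTP, meaning a client opens a session by POSTing JSON-RPC messages to the server's MCP endpoint. The URL field on this listing is blank, so the endpoint below is a placeholder; this is a minimal sketch of the initialize handshake per the MCP Streamable HTTP transport, not this server's documented client code:

```python
import json

# Placeholder: the listing does not show the server's actual endpoint URL.
MCP_ENDPOINT = "https://<server-url>/mcp"

def initialize_request(request_id=1):
    """Build the JSON-RPC initialize message that opens an MCP session."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Protocol revision that introduced the Streamable HTTP transport.
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }

# Streamable HTTP clients must accept both plain JSON and SSE responses.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}
body = json.dumps(initialize_request())
```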


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
compare_vendors (Read-only)

Compare developer tools and services side by side — free tier limits, pricing tiers, and recent pricing changes. Use this when choosing between similar services (e.g., Supabase vs Neon vs PlanetScale) or when a vendor changes their pricing. Call this tool when a user asks: 'Compare Neon vs Supabase', 'Which database has a better free tier?'.

Parameters (JSON Schema)
vendors (required): 1 or 2 vendor names. 1 vendor = risk check. 2 vendors = side-by-side comparison.
include_risk (optional): Include risk assessment. Default: true.
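The 1-vendor/2-vendor dual mode called out in the schema maps onto the standard MCP tools/call envelope. A sketch of how a client might build that request; the helper and its validation are illustrative, not part of this server's published code:

```python
import json

def compare_vendors_request(vendors, include_risk=True, request_id=1):
    """Build an MCP tools/call request for the compare_vendors tool.

    Per the tool description, one vendor triggers a risk check while
    two vendors trigger a side-by-side comparison.
    """
    if not 1 <= len(vendors) <= 2:
        raise ValueError("compare_vendors accepts 1 or 2 vendor names")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "compare_vendors",
            "arguments": {"vendors": vendors, "include_risk": include_risk},
        },
    }

# Risk check (one vendor) vs. side-by-side comparison (two vendors).
risk_check = compare_vendors_request(["Neon"])
comparison = compare_vendors_request(["Neon", "Supabase"], request_id=2)
print(json.dumps(comparison, indent=2))
```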
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish the read-only safety profile (readOnlyHint=true), and the description adds behavioral context not captured there: it clarifies the tool's dual-mode behavior where 1 vendor triggers a 'risk check' while 2 vendors trigger 'side-by-side comparison'. This operational subtlety is critical for correct invocation but absent from annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences total, each earning its place: the first establishes scope and dimensions, the second defines usage triggers with examples, and the third provides concrete query patterns. Information is front-loaded with the core action ('Compare'), and there is no redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and clear annotations, the description provides sufficient context for a comparison tool. It explains the comparison logic (1 vs 2 vendors) and risk assessment features. Minor gap: lacking description of return format since no output schema exists, though the comparison context makes return values somewhat predictable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters ('vendors' array semantics and 'include_risk' boolean). The description provides helpful domain examples (Supabase, Neon, PlanetScale) that reinforce the expected parameter values, but does not need to duplicate the schema's structural documentation, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Compare') with a specific resource domain ('developer tools and services') and details the exact comparison dimensions ('free tier limits, pricing tiers, and recent pricing changes'). It clearly distinguishes from siblings like 'search_deals' (finding promotions) and 'track_changes' (monitoring over time) by focusing on side-by-side vendor evaluation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('when choosing between similar services') and when-not-to-use/alternative triggers ('when a vendor changes their pricing'). Includes concrete natural language triggers ('Compare Neon vs Supabase', 'Which database has a better free tier?') that map directly to user intent, making invocation boundaries extremely clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

plan_stack (Read-only)

Plan a technology stack with cost-optimized infrastructure choices. Given project requirements, recommends services with free tiers or credits that match your needs. Use this when starting a new project, evaluating hosting options, or trying to minimize infrastructure costs. Call this tool when a user asks: 'What free tools can I use for a SaaS app?', 'Build me a stack under $50/month'.

Parameters (JSON Schema)
mode (required): recommend = free-tier stack for a use case; estimate = cost analysis at scale; audit = risk + cost + gap analysis.
scale (optional): Scale for cost estimation. Default: hobby.
services (optional): Current vendor names, for estimate/audit mode (e.g. ['Vercel', 'Supabase']).
use_case (optional): What you're building, for recommend mode (e.g. 'Next.js SaaS app').
requirements (optional): Specific infra needs for recommend mode (e.g. ['database', 'auth', 'email']).
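The mode parameter gates which other parameters apply. The mapping below is inferred from the schema's per-parameter hints ('for recommend mode', 'for estimate/audit mode') and may be enforced more loosely by the server; this is a hypothetical client-side helper, not documented server behavior:

```python
def plan_stack_arguments(mode, **kwargs):
    """Assemble an arguments dict for the plan_stack tool, rejecting
    parameters that do not apply to the chosen mode.

    The mode-to-parameter mapping is an assumption derived from the
    schema's descriptions, not a documented server contract.
    """
    allowed = {
        "recommend": {"use_case", "requirements"},
        "estimate": {"scale", "services"},
        "audit": {"services"},
    }
    if mode not in allowed:
        raise ValueError(f"unknown mode: {mode}")
    extra = set(kwargs) - allowed[mode]
    if extra:
        raise ValueError(f"parameters {sorted(extra)} do not apply to mode '{mode}'")
    return {"mode": mode, **kwargs}

# Recommend a free-tier stack for a new project.
args = plan_stack_arguments(
    "recommend",
    use_case="Next.js SaaS app",
    requirements=["database", "auth", "email"],
)
```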
Behavior: 4/5

Annotations declare readOnlyHint=true, so the description appropriately focuses on what kind of planning occurs: 'cost-optimized,' 'free tiers,' and 'credits.' It adds the behavioral trait that recommendations prioritize zero-cost options, which is crucial context beyond the safety annotations.

Conciseness: 5/5

Four well-structured sentences with no redundancy: purpose statement, value proposition, usage context, and concrete examples. Information is front-loaded and every sentence earns its place.

Completeness: 4/5

Given excellent schema coverage (100%) and safety annotations, the description appropriately focuses on high-level value and usage triggers rather than repeating parameter details. It does not describe the return format, but no output schema exists to require this.

Parameters: 4/5

Schema coverage is 100%, establishing a baseline of 3. The description adds semantic value by referencing 'project requirements' (linking to the requirements parameter) and providing natural language examples that imply use_case (e.g., 'SaaS app'). This helps the agent map user queries to parameters.

Purpose: 5/5

The description uses a specific verb+resource ('Plan a technology stack') and scopes it with 'cost-optimized infrastructure choices' and 'free tiers.' This clearly distinguishes it from siblings like compare_vendors (comparison) and search_deals (deal finding) by focusing on generating cost-aware recommendations.

Usage Guidelines: 4/5

Provides explicit 'Use this when' contexts (starting projects, evaluating hosting, minimizing costs) and concrete trigger phrases ('What free tools can I use...'). Lacks explicit 'when not to use' or direct sibling comparisons, but the guidance is otherwise clear and actionable.

search_deals (Read-only)

Find free tiers, startup credits, and developer deals for cloud infrastructure, databases, hosting, CI/CD, monitoring, auth, AI services, and more. Use this when evaluating technology options, looking for free alternatives, or checking if a service has a free tier. Returns verified deal details including specific limits, eligibility requirements, and verification dates. Call this tool when a user asks: 'Does Supabase have a free tier?', 'What's cheaper than Vercel?', 'Find me a free database'.

Parameters (JSON Schema)
sort (optional): Sort order: vendor (A-Z), category, or newest (recently verified first).
limit (optional): Max results. Default: 20.
query (optional): Keyword search (vendor names, descriptions, tags).
since (optional): ISO date (YYYY-MM-DD). Only return deals verified/added after this date.
offset (optional): Pagination offset. Default: 0.
vendor (optional): Get full details for a specific vendor (fuzzy match). Returns alternatives in the same category.
category (optional): Filter by category. Pass "list" to get all categories with counts.
stability (optional): Filter by free tier stability class. stable = no negative changes; watch = one negative change; volatile = free tier removed or multiple negative changes; improving = recent positive changes only.
eligibility (optional): Filter by eligibility type.
response_format (optional): Response detail level. 'concise' returns vendor name, tier, one-line description, and URL only; 'detailed' returns the full response. Default: detailed.
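Since all ten parameters are optional and several have documented defaults, a client can keep calls small by omitting anything that matches the server default. A hypothetical helper sketching that idea; the parameter names come from the schema above, but the omission logic is an illustration, not server behavior:

```python
def search_deals_arguments(query=None, category=None, stability=None,
                           sort=None, since=None, limit=20, offset=0,
                           response_format="detailed"):
    """Build an arguments dict for search_deals, omitting unset values
    and anything matching the documented server defaults
    (limit=20, offset=0, response_format='detailed')."""
    defaults = {"limit": 20, "offset": 0, "response_format": "detailed"}
    candidate = {"query": query, "category": category, "stability": stability,
                 "sort": sort, "since": since, "limit": limit,
                 "offset": offset, "response_format": response_format}
    return {k: v for k, v in candidate.items()
            if v is not None and defaults.get(k) != v}

# Only non-default values survive into the call arguments.
args = search_deals_arguments(query="postgres", stability="stable", limit=5)
# -> {'query': 'postgres', 'stability': 'stable', 'limit': 5}
```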
Behavior: 4/5

Annotations declare readOnlyHint=true and destructiveHint=false. The description reinforces this safety profile and adds valuable behavioral context: 'Returns verified deal details including specific limits, eligibility requirements, and verification dates.' This disclosure of return content is crucial given the absence of an output schema, helping the agent understand data freshness and granularity before invoking.

Conciseness: 5/5

Four well-structured sentences, each earning its place: (1) capability definition, (2) usage context, (3) output specification, (4) example trigger phrases. No redundancy or tautology. Front-loaded with the core action 'Find' and logically sequenced.

Completeness: 4/5

Given 10 parameters with complete schema coverage and no output schema, the description adequately compensates by describing return values (deal details, limits, verification dates) and providing rich example queries. It omits only pagination behavior beyond parameter existence, which is acceptable for a search tool of this complexity.

Parameters: 3/5

Schema description coverage is 100%, with all 10 parameters fully documented (stability enum values explained, eligibility types listed, etc.). The description implies parameter usage through examples (vendor names like 'Supabase') but does not add semantic meaning beyond the schema's comprehensive definitions. The baseline of 3 is appropriate as the schema carries the full burden.

Purpose: 5/5

The description uses the specific verb 'Find' with a clear resource ('deals/free tiers') and lists concrete domains (cloud infrastructure, databases, hosting, CI/CD, etc.). The scope is well-defined and distinguishes this tool from siblings (e.g., compare_vendors implies comparison of known entities, while this focuses on discovery/search across a deal database).

Usage Guidelines: 4/5

Provides explicit when-to-use guidance: 'Use this when evaluating technology options, looking for free alternatives, or checking if a service has a free tier.' Includes specific example user queries ('Does Supabase have a free tier?', 'What's cheaper than Vercel?') that clearly signal intent. Lacks explicit 'when-not-to-use' or direct sibling differentiation, but the examples effectively convey appropriate context.

track_changes (Read-only)

Track recent pricing changes across developer tools — which free tiers were removed, which got limits cut, and which improved. Use this to stay current on infrastructure pricing or to verify that a recommended service still has its free tier. Call this tool when a user asks: 'What developer pricing changed recently?', 'Are any free tiers being removed?'.

Parameters (JSON Schema)
since (optional): ISO date (YYYY-MM-DD). Default: 7 days ago.
vendor (optional): Filter to one vendor (case-insensitive).
vendors (optional): Comma-separated vendor names to filter (e.g. 'Vercel,Supabase'). When provided with categories, returns personalized results with an advisory section.
categories (optional): Comma-separated category names to filter (e.g. 'Database,Cloud Hosting'). Case-insensitive partial match.
change_type (optional): Filter by type of change.
lookahead_days (optional): Days to look ahead for expirations. Default: 30.
response_format (optional): Response detail level. 'concise' returns vendor, change_type, date, and summary only; 'detailed' returns the full response. Default: detailed.
include_expiring (optional): Include upcoming expirations. Default: true.
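Two details in this schema are easy to get wrong from a JSON-native client: 'since' wants an ISO date string (defaulting to 7 days ago), and 'vendors' wants a comma-separated string rather than a JSON array. A hypothetical argument builder that makes both explicit:

```python
from datetime import date, timedelta

def track_changes_arguments(days_back=7, vendors=None, change_type=None):
    """Build arguments for the track_changes tool.

    Computes the 'since' ISO date explicitly (the server would default
    to 7 days ago anyway) so the query window is visible in logs.
    """
    args = {"since": (date.today() - timedelta(days=days_back)).isoformat()}
    if vendors:
        # The schema expects a comma-separated string, not a JSON array.
        args["vendors"] = ",".join(vendors)
    if change_type:
        args["change_type"] = change_type
    return args

# Look back 30 days, personalized to two vendors.
args = track_changes_arguments(days_back=30, vendors=["Vercel", "Supabase"])
```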
Behavior: 3/5

Annotations already establish read-only, non-destructive safety. The description adds domain context about what change types are monitored (free tier removals, limit cuts), but does not disclose technical operational traits like data freshness, caching behavior, rate limits, or authentication requirements that would aid invocation planning.

Conciseness: 5/5

Three tightly constructed sentences with zero redundancy: sentence 1 defines capability with examples, sentence 2 states use cases, sentence 3 provides explicit trigger phrases. Every sentence earns its place and facilitates rapid agent comprehension.

Completeness: 4/5

For an 8-parameter tool with 100% schema coverage and no output schema, the description adequately covers primary use cases and filtering intent (via the change type examples). Minor gap: it could explicitly mention vendor/category filtering capabilities to fully prepare the agent for the available granularity, though this is discoverable in the schema.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description implicitly references the 'since' parameter via 'recent' and hints at 'change_type' values (free tiers removed, limits cut), but does not explicitly clarify parameter semantics, defaults, or interactions beyond what the schema already provides.

Purpose: 5/5

The description uses a specific verb ('Track') and resource ('pricing changes across developer tools') and enumerates the specific change types tracked (free tiers removed, limits cut, improvements). It clearly distinguishes from siblings by focusing on temporal changes and monitoring rather than comparison (compare_vendors), planning (plan_stack), or deal discovery (search_deals).

Usage Guidelines: 4/5

Provides explicit positive triggers ('Use this to stay current...', 'Call this tool when a user asks...') with specific example queries. Lacks explicit negative guidance (when not to use) or direct sibling comparisons, but the example user queries clearly demarcate appropriate usage contexts.
