Glama

Server Details

Search 400k+ SaaS and software companies by category, technology, country, pricing, and more.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client -> Glama -> MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 4 of 4 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: SearchAlternativesTool finds competing products for a host, SearchCategoriesTool searches categories for filtering, SearchSaasTool searches the main SaaS database, and SearchTechnologiesTool searches technologies for filtering. The descriptions make it easy to differentiate their functions.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern starting with 'Search' (e.g., SearchAlternativesTool, SearchCategoriesTool). This predictable naming scheme makes the set easy to navigate and understand.

Tool Count: 4/5

With 4 tools, the count is reasonable for a SaaS search domain, though slightly thin: there are no operations for deeper interactions such as retrieving detailed profiles or managing saved searches. The core search functionality is covered adequately.

Completeness: 3/5

The toolset provides good search capabilities but has notable gaps: there are no tools for CRUD operations (e.g., creating or updating entries), retrieving detailed information beyond basic search results, or handling user-specific data like bookmarks. This limits agents to surface-level queries without full lifecycle coverage.

Available Tools

4 tools
SearchAlternativesTool: C

Find alternative/competing SaaS or software products for a given website host. Returns up to 25 published alternatives with profile URLs, descriptions, and names.

Parameters (JSON Schema)
  host (required): The website host to find alternatives for (e.g. "slack.com", "trello.com")
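The example hosts in the schema ("slack.com", "trello.com") are bare domains, which suggests normalizing full URLs before calling. A minimal Python sketch under that assumption; the `alternatives_args` helper is hypothetical, and only the `host` key comes from the schema:

```python
# Hypothetical helper: build the argument payload for SearchAlternativesTool.
# Only the "host" key is defined by the tool's schema; the normalization is
# an assumption based on the bare-domain examples in the parameter description.
def alternatives_args(host: str) -> dict:
    host = host.removeprefix("https://").removeprefix("http://")
    host = host.split("/", 1)[0]  # drop any path, keep the bare host
    return {"host": host}

print(alternatives_args("https://slack.com/pricing"))  # {'host': 'slack.com'}
```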
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns 'up to 25 published alternatives,' hinting at a limit, but does not cover other critical aspects like rate limits, authentication needs, error handling, or whether the operation is read-only or has side effects. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key details (e.g., output limit and content). There is no wasted text, making it appropriately concise, though it could be slightly more structured with bullet points or separation of concepts.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (one parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and output format but lacks behavioral context, usage guidelines, and error handling information. Without annotations or an output schema, more detail would improve completeness for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'host' parameter clearly documented. The description adds minimal value beyond the schema by implying the host is used to find alternatives, but it does not provide additional syntax, format details, or examples. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Find') and resource ('alternative/competing SaaS or software products'), specifying the scope ('for a given website host') and output details ('up to 25 published alternatives with profile URLs, descriptions, and names'). However, it does not explicitly differentiate from sibling tools like SearchCategoriesTool or SearchTechnologiesTool, which might have overlapping domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as SearchSaasTool or SearchCategoriesTool. The description implies usage for finding software alternatives based on a host, but lacks explicit context, exclusions, or comparisons to sibling tools, leaving the agent to infer appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchCategoriesTool: A

Search SaaS Browser categories by name or keyword. Returns matching category IDs for use with the SearchSaasTool category_ids filter.

Parameters (JSON Schema)
  q (required): Search query
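The description names a concrete handoff: category IDs returned here feed SearchSaasTool's category_ids filter, which per that tool's schema takes pipe-separated IDs. A hedged sketch of that glue step; the helper name and sample IDs are illustrative, not part of the server:

```python
# Illustrative glue between SearchCategoriesTool output and SearchSaasTool
# input: the category_ids filter expects pipe-separated IDs per its schema.
def category_ids_filter(category_ids: list[str]) -> dict:
    return {"category_ids": "|".join(category_ids)}

print(category_ids_filter(["12", "34"]))  # {'category_ids': '12|34'}
```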
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and output purpose (category IDs for filtering), but doesn't mention important behavioral aspects like whether this is a read-only operation, performance characteristics, rate limits, or authentication requirements. The description adds some context but leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve a clear purpose: the first states what the tool does, and the second explains how the output is used. There's zero wasted language and the information is front-loaded with the core functionality stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with one parameter and no output schema, the description provides good context about what it searches and how the results are used. However, without annotations or output schema, it doesn't describe the return format (e.g., list structure, error cases) or behavioral constraints. The description is reasonably complete but could benefit from more operational details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage with the 'q' parameter clearly documented as 'Search query'. The description adds minimal value beyond the schema by specifying what the search targets ('SaaS Browser categories by name or keyword'), but doesn't provide additional parameter context like search syntax, case sensitivity, or matching algorithms. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search SaaS Browser categories by name or keyword') and resource ('categories'), and distinguishes it from siblings by mentioning its output is used with SearchSaasTool's category_ids filter. This provides clear differentiation from other search tools like SearchAlternativesTool or SearchTechnologiesTool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Search SaaS Browser categories by name or keyword') and provides a clear alternative usage context ('for use with the SearchSaasTool category_ids filter'). This gives the agent specific guidance on both primary usage and integration with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchSaasTool: C

Search the SaaS Browser database of 400k+ SaaS companies. Filter by category, technology, country, pricing, traffic, employees, age, and more. Returns up to 25 results with profile URLs.

Parameters (JSON Schema), all optional:
  q: Search query
  ads_max: Max ads count
  ads_min: Min ads count
  sort_by: Sort: domain_rank, employees, age, traffic, traffic_growth, ad_keywords, ad_keyword_growth, ads, ads_growth, referring_domains, referring_domains_growth, commission_percentage, published_at, sitemap_page_count
  uses_ai: "true" or "false"
  countries: Pipe-separated 2-letter ISO codes. Use saas://countries for valid codes.
  price_low: Minimum monthly price
  traff_max: Max monthly traffic
  traff_min: Min monthly traffic
  price_high: Maximum monthly price
  category_ids: Pipe-separated category IDs. Use saas://categories for valid IDs.
  age_years_max: Max company age in years
  age_years_min: Min company age in years
  employees_gte: Min employee count
  employees_lte: Max employee count
  growth_models: Pipe-separated: product_led, sales_led, both
  consumer_types: Pipe-separated: personal, business, both
  has_api_access: "true" or "false"
  has_bug_bounty: "true" or "false"
  sort_direction: "asc" or "desc"
  technology_ids: Pipe-separated technology UUIDs. Use saas://technologies for valid IDs.
  ad_keywords_max: Max ad keywords
  ad_keywords_min: Min ad keywords
  domain_rank_gte: Min Serpstat domain rank
  domain_rank_lte: Max Serpstat domain rank
  published_at_to: Published before (YYYY-MM-DD)
  technology_logic: "all" (AND) or "any" (OR)
  published_at_from: Published after (YYYY-MM-DD)
  bug_bounty_platform: Pipe-separated: hackerone, bugcrowd, intigriti, yeswehack, immunefi, synack, cobalt, self_hosted
  cookie_duration_max: Max affiliate cookie days
  cookie_duration_min: Min affiliate cookie days
  price_currency_code: Pipe-separated 3-letter codes
  has_chrome_extension: "true" or "false"
  bug_bounty_payout_max: Max bug bounty payout (USD)
  bug_bounty_payout_min: Min bug bounty payout (USD)
  has_affiliate_program: "true" or "false"
  has_firefox_extension: "true" or "false"
  referring_domains_max: Max referring domains
  referring_domains_min: Min referring domains
  monthly_change_ads_max: Max ads change %
  monthly_change_ads_min: Min ads change %
  sitemap_page_count_max: Max sitemap pages
  sitemap_page_count_min: Min sitemap pages
  monthly_change_traff_max: Max traffic change %
  monthly_change_traff_min: Min traffic change %
  affiliate_commission_type: Pipe-separated: one_time, recurring
  commission_percentage_max: Max affiliate commission %
  commission_percentage_min: Min affiliate commission %
  referring_domains_growth_max: Max referring domains change %
  referring_domains_growth_min: Min referring domains change %
  monthly_change_ad_keywords_max: Max ad keywords change %
  monthly_change_ad_keywords_min: Min ad keywords change %
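Two conventions stand out in the schema above: list-valued filters are pipe-separated strings, and booleans are passed as the strings "true"/"false". A hypothetical helper sketching how an agent might assemble arguments under those conventions (the function itself is not part of the server; parameter names come from the table):

```python
# Hypothetical assembly of SearchSaasTool arguments. Parameter names and
# encodings (pipe-separated lists, "true"/"false" strings) come from the
# documented schema; this helper is illustrative only.
def saas_search_args(q=None, countries=None, uses_ai=None, **filters):
    args = dict(filters)          # pass-through numeric/string filters
    if q:
        args["q"] = q
    if countries:                 # pipe-separated 2-letter ISO codes
        args["countries"] = "|".join(countries)
    if uses_ai is not None:       # booleans travel as "true"/"false" strings
        args["uses_ai"] = "true" if uses_ai else "false"
    return args

print(saas_search_args(q="crm", countries=["US", "DE"], uses_ai=True,
                       traff_min=10000, sort_by="traffic", sort_direction="desc"))
```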
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the database size (400k+ companies) and result limit (25 results with profile URLs), which is helpful. However, it doesn't address critical behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, error conditions, or pagination behavior beyond the 25-result limit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences that cover the core functionality and key constraints. The first sentence establishes purpose and scope, while the second provides important behavioral context (result limit and output format). There's minimal wasted verbiage, though it could be slightly more front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex search tool with 52 parameters and no output schema, the description is inadequate. It doesn't explain what the 25 results contain beyond 'profile URLs', doesn't describe the search ranking or relevance algorithm, and provides no guidance on how to effectively use the numerous filtering parameters. The absence of annotations means the description should compensate more for behavioral transparency gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 52 parameters thoroughly. The description adds minimal value beyond what's in the schema: it lists some filter categories but doesn't provide additional context about parameter interactions, default behaviors, or practical usage examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Search') and resource ('SaaS Browser database of 400k+ SaaS companies'), and mentions the scope of filtering capabilities. However, it doesn't explicitly differentiate from sibling tools like SearchAlternativesTool, SearchCategoriesTool, or SearchTechnologiesTool, which likely search different aspects of the same database.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings. It mentions filtering capabilities but doesn't indicate whether this is the primary search tool or how it relates to alternatives like SearchCategoriesTool. There's no mention of prerequisites, limitations, or typical use cases beyond the basic functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchTechnologiesTool: A

Search SaaS Browser technologies by name or category. Returns matching technology IDs for use with the SearchSaasTool technology_ids filter.

Parameters (JSON Schema)
  q (required): Search query
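As with categories, the IDs returned here plug into SearchSaasTool, where technology_logic selects AND ("all") versus OR ("any") matching. A sketch under those schema conventions; the helper and the sample UUID placeholders are hypothetical:

```python
# Illustrative glue: technology IDs from SearchTechnologiesTool feed
# SearchSaasTool's technology_ids filter; technology_logic picks the
# match mode ("all" = AND, "any" = OR) per the documented schema.
def technology_filter(technology_ids: list[str], match_all: bool = False) -> dict:
    return {
        "technology_ids": "|".join(technology_ids),
        "technology_logic": "all" if match_all else "any",
    }

print(technology_filter(["uuid-a", "uuid-b"], match_all=True))
# {'technology_ids': 'uuid-a|uuid-b', 'technology_logic': 'all'}
```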
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the search functionality and output ('Returns matching technology IDs'), but lacks details on behavioral traits such as rate limits, error handling, or authentication requirements. The description adds some context about the output's use but does not fully compensate for the absence of annotations, making it adequate but with clear gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise, consisting of two sentences that efficiently convey the tool's purpose and usage without any wasted words. Every sentence earns its place by providing essential information, making it well-structured and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete, covering purpose, usage, and output context. However, it lacks details on behavioral aspects like error handling or rate limits, which are relevant even for simple tools. The absence of an output schema means the description should ideally explain return values more thoroughly, but it does adequately for the given context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the parameter 'q' documented as 'Search query'. The description adds no additional meaning beyond this, as it does not specify query syntax, examples, or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting without extra value from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search SaaS Browser technologies'), resource ('technologies'), and scope ('by name or category'), distinguishing it from sibling tools like SearchAlternativesTool or SearchCategoriesTool. It explicitly mentions what it returns ('matching technology IDs') and their intended use ('for use with the SearchSaasTool technology_ids filter'), providing a complete and differentiated purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Search SaaS Browser technologies by name or category') and provides a clear alternative context by naming a sibling tool ('for use with the SearchSaasTool technology_ids filter'), indicating this tool is a precursor to SearchSaasTool. It effectively guides usage by specifying its role in the workflow without misleading or omitting key alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

