SaaS Browser
Server Details
Search 400k+ SaaS and software companies by category, technology, country, pricing, and more.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 4 of 4 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose with no overlap: SearchAlternativesTool finds competing products for a host, SearchCategoriesTool searches categories for filtering, SearchSaasTool searches the main SaaS database, and SearchTechnologiesTool searches technologies for filtering. The descriptions make it easy to differentiate their functions.
All tool names follow a consistent verb_noun pattern starting with 'Search' (e.g., SearchAlternativesTool, SearchCategoriesTool). This predictable naming scheme makes the set easy to navigate and understand.
With 4 tools, the count is reasonable for a SaaS search domain, though it feels slightly thin: there are no operations for deeper interactions, such as retrieving detailed profiles or managing saved searches. Still, it covers the core search functionality adequately.
The toolset provides good search capabilities but has notable gaps: there are no tools for CRUD operations (e.g., creating or updating entries), retrieving detailed information beyond basic search results, or handling user-specific data like bookmarks. This limits agents to surface-level queries without full lifecycle coverage.
Available Tools
4 tools

SearchAlternativesTool (grade C)
Find alternative/competing SaaS or software products for a given website host. Returns up to 25 published alternatives with profile URLs, descriptions, and names.
| Name | Required | Description | Default |
|---|---|---|---|
| host | Yes | The website host to find alternatives for (e.g. "slack.com", "trello.com") | |
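As a sketch of how an agent-side client might invoke this tool, assuming the standard MCP `tools/call` JSON-RPC envelope (the helper name and request id below are hypothetical, not part of this server's API):

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 tools/call request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Find competitors for Trello; "host" is the tool's only (required) parameter.
payload = build_tool_call("SearchAlternativesTool", {"host": "trello.com"})
print(payload)
```

The same envelope works for every tool on this server; only `name` and `arguments` change.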
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns 'up to 25 published alternatives,' hinting at a limit, but does not cover other critical aspects like rate limits, authentication needs, error handling, or whether the operation is read-only or has side effects. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and key details (e.g., output limit and content). There is no wasted text, making it appropriately concise, though it could be slightly more structured with bullet points or separation of concepts.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (one parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and output format but lacks behavioral context, usage guidelines, and error handling information. Without annotations or an output schema, more detail would improve completeness for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'host' parameter clearly documented. The description adds minimal value beyond the schema by implying the host is used to find alternatives, but it does not provide additional syntax, format details, or examples. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Find') and resource ('alternative/competing SaaS or software products'), specifying the scope ('for a given website host') and output details ('up to 25 published alternatives with profile URLs, descriptions, and names'). However, it does not explicitly differentiate from sibling tools like SearchCategoriesTool or SearchTechnologiesTool, which might have overlapping domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as SearchSaasTool or SearchCategoriesTool. The description implies usage for finding software alternatives based on a host, but lacks explicit context, exclusions, or comparisons to sibling tools, leaving the agent to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchCategoriesTool (grade A)
Search SaaS Browser categories by name or keyword. Returns matching category IDs for use with the SearchSaasTool category_ids filter.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and output purpose (category IDs for filtering), but doesn't mention important behavioral aspects like whether this is a read-only operation, performance characteristics, rate limits, or authentication requirements. The description adds some context but leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve a clear purpose: the first states what the tool does, and the second explains how the output is used. There's zero wasted language and the information is front-loaded with the core functionality stated immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with one parameter and no output schema, the description provides good context about what it searches and how the results are used. However, without annotations or output schema, it doesn't describe the return format (e.g., list structure, error cases) or behavioral constraints. The description is reasonably complete but could benefit from more operational details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage with the 'q' parameter clearly documented as 'Search query'. The description adds minimal value beyond the schema by specifying what the search targets ('SaaS Browser categories by name or keyword'), but doesn't provide additional parameter context like search syntax, case sensitivity, or matching algorithms. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search SaaS Browser categories by name or keyword') and resource ('categories'), and distinguishes it from siblings by mentioning its output is used with SearchSaasTool's category_ids filter. This provides clear differentiation from other search tools like SearchAlternativesTool or SearchTechnologiesTool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Search SaaS Browser categories by name or keyword') and provides a clear alternative usage context ('for use with the SearchSaasTool category_ids filter'). This gives the agent specific guidance on both primary usage and integration with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchSaasTool (grade C)
Search the SaaS Browser database of 400k+ SaaS companies. Filter by category, technology, country, pricing, traffic, employees, age, and more. Returns up to 25 results with profile URLs.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Search query | |
| ads_max | No | Max ads count | |
| ads_min | No | Min ads count | |
| sort_by | No | Sort: domain_rank, employees, age, traffic, traffic_growth, ad_keywords, ad_keyword_growth, ads, ads_growth, referring_domains, referring_domains_growth, commission_percentage, published_at, sitemap_page_count | |
| uses_ai | No | "true" or "false" | |
| countries | No | Pipe-separated 2-letter ISO codes. Use saas://countries for valid codes. | |
| price_low | No | Minimum monthly price | |
| traff_max | No | Max monthly traffic | |
| traff_min | No | Min monthly traffic | |
| price_high | No | Maximum monthly price | |
| category_ids | No | Pipe-separated category IDs. Use saas://categories for valid IDs. | |
| age_years_max | No | Max company age in years | |
| age_years_min | No | Min company age in years | |
| employees_gte | No | Min employee count | |
| employees_lte | No | Max employee count | |
| growth_models | No | Pipe-separated: product_led, sales_led, both | |
| consumer_types | No | Pipe-separated: personal, business, both | |
| has_api_access | No | "true" or "false" | |
| has_bug_bounty | No | "true" or "false" | |
| sort_direction | No | "asc" or "desc" | |
| technology_ids | No | Pipe-separated technology UUIDs. Use saas://technologies for valid IDs. | |
| ad_keywords_max | No | Max ad keywords | |
| ad_keywords_min | No | Min ad keywords | |
| domain_rank_gte | No | Min Serpstat domain rank | |
| domain_rank_lte | No | Max Serpstat domain rank | |
| published_at_to | No | Published before (YYYY-MM-DD) | |
| technology_logic | No | "all" (AND) or "any" (OR) | |
| published_at_from | No | Published after (YYYY-MM-DD) | |
| bug_bounty_platform | No | Pipe-separated: hackerone, bugcrowd, intigriti, yeswehack, immunefi, synack, cobalt, self_hosted | |
| cookie_duration_max | No | Max affiliate cookie days | |
| cookie_duration_min | No | Min affiliate cookie days | |
| price_currency_code | No | Pipe-separated 3-letter codes | |
| has_chrome_extension | No | "true" or "false" | |
| bug_bounty_payout_max | No | Max bug bounty payout (USD) | |
| bug_bounty_payout_min | No | Min bug bounty payout (USD) | |
| has_affiliate_program | No | "true" or "false" | |
| has_firefox_extension | No | "true" or "false" | |
| referring_domains_max | No | Max referring domains | |
| referring_domains_min | No | Min referring domains | |
| monthly_change_ads_max | No | Max ads change % | |
| monthly_change_ads_min | No | Min ads change % | |
| sitemap_page_count_max | No | Max sitemap pages | |
| sitemap_page_count_min | No | Min sitemap pages | |
| monthly_change_traff_max | No | Max traffic change % | |
| monthly_change_traff_min | No | Min traffic change % | |
| affiliate_commission_type | No | Pipe-separated: one_time, recurring | |
| commission_percentage_max | No | Max affiliate commission % | |
| commission_percentage_min | No | Min affiliate commission % | |
| referring_domains_growth_max | No | Max referring domains change % | |
| referring_domains_growth_min | No | Min referring domains change % | |
| monthly_change_ad_keywords_max | No | Max ad keywords change % | |
| monthly_change_ad_keywords_min | No | Min ad keywords change % | |
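The parameter table above follows a few conventions worth calling out: list-valued filters are pipe-separated strings, boolean flags are passed as the strings "true"/"false", and numeric ranges use paired min/max parameters. A hypothetical query sketch (parameter names are from the schema; the example values are invented):

```python
def pipe_join(values):
    """List filters (countries, category_ids, growth_models, ...) are pipe-separated strings."""
    return "|".join(values)

# Hypothetical query: product-led, AI-using SaaS in the US or UK,
# priced at most $50/month, sorted by traffic descending.
args = {
    "q": "project management",
    "countries": pipe_join(["US", "GB"]),  # 2-letter ISO codes
    "growth_models": "product_led",
    "uses_ai": "true",                     # booleans are the strings "true"/"false"
    "price_high": 50,
    "sort_by": "traffic",
    "sort_direction": "desc",
}
print(args["countries"])
```

Every filter is optional, so an agent can start with `q` alone and tighten the query incrementally.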
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the database size (400k+ companies) and result limit (25 results with profile URLs), which is helpful. However, it doesn't address critical behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, error conditions, or pagination behavior beyond the 25-result limit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that cover the core functionality and key constraints. The first sentence establishes purpose and scope, while the second provides important behavioral context (result limit and output format). There's minimal wasted verbiage, though it could be slightly more front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex search tool with 52 parameters and no output schema, the description is inadequate. It doesn't explain what the 25 results contain beyond 'profile URLs', doesn't describe the search ranking or relevance algorithm, and provides no guidance on how to effectively use the numerous filtering parameters. The absence of annotations means the description should compensate more for behavioral transparency gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 52 parameters thoroughly. The description adds minimal value beyond what's in the schema: it lists some filter categories but doesn't provide additional context about parameter interactions, default behaviors, or practical usage examples. The baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Search') and resource ('SaaS Browser database of 400k+ SaaS companies'), and mentions the scope of filtering capabilities. However, it doesn't explicitly differentiate from sibling tools like SearchAlternativesTool, SearchCategoriesTool, or SearchTechnologiesTool, which likely search different aspects of the same database.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings. It mentions filtering capabilities but doesn't indicate whether this is the primary search tool or how it relates to alternatives like SearchCategoriesTool. There's no mention of prerequisites, limitations, or typical use cases beyond the basic functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
SearchTechnologiesTool (grade A)
Search SaaS Browser technologies by name or category. Returns matching technology IDs for use with the SearchSaasTool technology_ids filter.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query | |
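Per the descriptions, SearchCategoriesTool and SearchTechnologiesTool are lookup steps that feed SearchSaasTool. A sketch of that two-step workflow, with a hypothetical helper that assembles the resulting filter arguments (the IDs below are placeholders for values a real lookup would return):

```python
def build_saas_filters(category_ids=None, technology_ids=None, technology_logic="any"):
    """Combine IDs returned by SearchCategoriesTool / SearchTechnologiesTool
    into SearchSaasTool filter arguments (pipe-separated, per the schema)."""
    args = {}
    if category_ids:
        args["category_ids"] = "|".join(category_ids)
    if technology_ids:
        args["technology_ids"] = "|".join(technology_ids)
        args["technology_logic"] = technology_logic  # "all" (AND) or "any" (OR)
    return args

filters = build_saas_filters(category_ids=["123", "456"], technology_ids=["uuid-1"])
print(filters)
```

This keeps the lookup tools in their documented role: they only resolve human-readable names to the IDs that SearchSaasTool's filters expect.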
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the search functionality and output ('Returns matching technology IDs'), but lacks details on behavioral traits such as rate limits, error handling, or authentication requirements. The description adds some context about the output's use but does not fully compensate for the absence of annotations, making it adequate but with clear gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and concise, consisting of two sentences that efficiently convey the tool's purpose and usage without any wasted words. Every sentence earns its place by providing essential information, making it well-structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete, covering purpose, usage, and output context. However, it lacks details on behavioral aspects like error handling or rate limits, which are relevant even for simple tools. The absence of an output schema means the description should ideally explain return values more thoroughly, but it does adequately for the given context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'q' documented as 'Search query'. The description adds no additional meaning beyond this, as it does not specify query syntax, examples, or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting without extra value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search SaaS Browser technologies'), resource ('technologies'), and scope ('by name or category'), distinguishing it from sibling tools like SearchAlternativesTool or SearchCategoriesTool. It explicitly mentions what it returns ('matching technology IDs') and their intended use ('for use with the SearchSaasTool technology_ids filter'), providing a complete and differentiated purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Search SaaS Browser technologies by name or category') and provides a clear alternative context by naming a sibling tool ('for use with the SearchSaasTool technology_ids filter'), indicating this tool is a precursor to SearchSaasTool. It effectively guides usage by specifying its role in the workflow without misleading or omitting key alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
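A small local sanity check before publishing, verifying that a parsed glama.json matches the structure above (this mirrors only the two documented requirements; Glama's actual verifier may check more):

```python
def check_glama_json(doc, account_email):
    """Return a list of problems with a parsed /.well-known/glama.json."""
    problems = []
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = doc.get("maintainers") or []
    if not any(m.get("email") == account_email for m in maintainers):
        problems.append("no maintainer email matches your Glama account email")
    return problems

doc = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
print(check_glama_json(doc, "your-email@example.com"))  # []
```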
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.