brave
Server Details
Visit https://brave.com/search/api/ for a free API key. Search the web, local businesses, images,…
- Status: Healthy
- Transport: Streamable HTTP
- Repository: brave/brave-search-mcp-server
- GitHub Stars: 880
- Server Listing: Brave Search MCP Server
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
6 tools

brave_image_search
Performs an image search using the Brave Search API. Helpful for when you need pictures of people, places, things, graphic design ideas, art inspiration, and more. When relaying results in a markdown environment, it may be helpful to include images in the results (e.g., ).

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results (1-200, default 50). Combine this parameter with `offset` to paginate search results. | |
| query | Yes | The user's search query. Query cannot be empty. Limited to 400 characters and 50 words. | |
| country | No | Search query country, where the results come from. The country string is limited to 2 character country codes of supported countries. | US |
| safesearch | No | Filters search results for adult content. The following values are supported: 'off' - No filtering. 'strict' - Drops all adult content from search results. | strict |
| spellcheck | No | Whether to spellcheck provided query. | |
| search_lang | No | Search language preference. The 2 or more character language code for which the search results are provided. | en |
Output Schema
| Name | Required | Description |
|---|---|---|
| type | Yes | |
| count | Yes | |
| items | Yes | |
| might_be_offensive | Yes | Whether the image might be offensive. |
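Assuming a client that speaks MCP's JSON-RPC `tools/call` method, a request to this tool can be assembled and pre-validated against the limits above. The helper below is an illustrative sketch, not part of the server.

```python
import json

MAX_CHARS, MAX_WORDS = 400, 50  # query limits from the input schema


def build_image_search_call(query, count=50, safesearch="strict"):
    """Build a JSON-RPC tools/call payload for brave_image_search."""
    if not query:
        raise ValueError("Query cannot be empty.")
    if len(query) > MAX_CHARS or len(query.split()) > MAX_WORDS:
        raise ValueError("Query limited to 400 characters and 50 words.")
    if not 1 <= count <= 200:
        raise ValueError("count must be between 1 and 200.")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "brave_image_search",
            "arguments": {"query": query, "count": count, "safesearch": safesearch},
        },
    }


payload = build_image_search_call("art deco poster design", count=10)
print(json.dumps(payload["params"]["arguments"]))
```

Validating the query before sending saves a round trip when the agent builds queries dynamically.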
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only provide openWorldHint. The description adds valuable behavioral context: it identifies the external API (Brave Search), and crucially provides specific guidance on output formatting (''), revealing the structure of returned results not evident in the input schema. Does not cover rate limits or error handling, but provides actionable usage guidance.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with no waste. Front-loaded with the core action, followed by use cases, then a specific implementation tip for markdown rendering. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (per context signals), the description appropriately omits detailed return value explanations. It covers the tool's purpose, usage contexts, and provides a concrete code example for handling results. With 100% schema coverage and existing annotations, the description is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 6 parameters (query, count, country, safesearch, spellcheck, search_lang) fully documented in the schema. The description does not add parameter-specific guidance, so baseline 3 is appropriate per scoring rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool 'Performs an image search using the Brave Search API' with specific verb and resource. It distinguishes from siblings (brave_web_search, brave_video_search, etc.) by specifying 'image search' and providing concrete use cases like 'pictures of people, places, things, graphic design ideas, art inspiration'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance on when to use ('when you need pictures of people, places, things...'), helping the agent identify appropriate contexts. Lacks explicit 'when not to use' or named alternatives (e.g., 'use brave_web_search for text results'), but the use case examples effectively communicate the scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
brave_local_search
Brave Local Search API provides enrichments for location search results. Access to this API is available only through the Brave Search API Pro plans; confirm the user's plan before using this tool (if the user does not have a Pro plan, use the brave_web_search tool). Searches for local businesses and places using Brave's Local Search API. Best for queries related to physical locations, businesses, restaurants, services, etc.
Returns detailed information including:
- Business names and addresses
- Ratings and review counts
- Phone numbers and opening hours
Use this when the query implies 'near me', 'in my area', or mentions specific locations (e.g., 'in San Francisco'). This tool automatically falls back to brave_web_search if no local results are found.

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results (1-20, default 10). Applies only to web search results (i.e., has no effect on locations, news, videos, etc.) | |
| query | Yes | Search query (max 400 chars, 50 words) | |
| units | No | The measurement units. If not provided, units are derived from search country. | |
| offset | No | Pagination offset (max 9, default 0) | |
| country | No | Search query country, where the results come from. The country string is limited to 2 character country codes of supported countries. | US |
| goggles | No | Goggles act as a custom re-ranking on top of Brave's search index. The parameter supports both a url where the Goggle is hosted or the definition of the Goggle. For more details, refer to the Goggles repository (i.e., https://github.com/brave/goggles-quickstart). | |
| summary | No | This parameter enables summary key generation in web search results. This is required for summarizer to be enabled. | |
| ui_lang | No | The language of the UI. The 2 or more character language code for which the search results are provided. | en-US |
| freshness | No | Filters search results by when they were discovered. The following values are supported: 'pd' - Discovered within the last 24 hours. 'pw' - Discovered within the last 7 days. 'pm' - Discovered within the last 31 days. 'py' - Discovered within the last 365 days. 'YYYY-MM-DDtoYYYY-MM-DD' - Timeframe is also supported by specifying the date range e.g. 2022-04-01to2022-07-30. | |
| safesearch | No | Filters search results for adult content. The following values are supported: 'off' - No filtering. 'moderate' - Filters explicit content (e.g., images and videos), but allows adult domains in search results. 'strict' - Drops all adult content from search results. The default value is 'moderate'. | moderate |
| spellcheck | No | Whether to spellcheck the provided query. | |
| search_lang | No | Search language preference. The 2 or more character language code for which the search results are provided. | en |
| result_filter | No | Result filter (default ['web', 'query']) | |
| extra_snippets | No | A snippet is an excerpt from a page you get as a result of the query, and extra_snippets allow you to get up to 5 additional, alternative excerpts. Only available under Free AI, Base AI, Pro AI, Base Data, Pro Data and Custom plans. | |
| text_decorations | No | Whether display strings (e.g. result snippets) should include decoration markers (e.g. highlighting characters). |
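The Pro-plan check and brave_web_search fallback described above can be sketched as client-side routing logic. `choose_local_tool` and `build_local_arguments` are hypothetical helper names, not part of the server's API.

```python
def choose_local_tool(has_pro_plan: bool) -> str:
    """Route per the description: Local Search requires a Pro plan,
    otherwise fall back to brave_web_search."""
    return "brave_local_search" if has_pro_plan else "brave_web_search"


def build_local_arguments(query, count=10, units=None):
    """Assemble arguments for a local query; per the schema, count
    applies only to web search results and must be 1-20."""
    if not 1 <= count <= 20:
        raise ValueError("count must be between 1 and 20.")
    args = {"query": query, "count": count}
    if units is not None:
        args["units"] = units
    return args


tool = choose_local_tool(has_pro_plan=False)
args = build_local_arguments("pizza near me")
```

Doing the plan check before invocation mirrors the description's instruction to confirm the user's plan rather than relying on a failed call.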
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only provide openWorldHint. The description adds critical behavioral context: the Pro plan access restriction (auth requirement), automatic fallback to brave_web_search if no results found, and detailed return structure (business names, ratings, hours). Deducted one point as it doesn't mention rate limits or caching behavior, though these may not be critical for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: access requirements, core purpose, return values, and usage triggers. Front-loaded with critical Pro plan warning. Minor verbosity in opening sentence repeating 'Brave Local Search API' and 'enrichments,' but overall efficient for the complexity (15 parameters, fallback logic, auth requirements).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong completeness given no output schema exists. The description compensates by listing specific return fields (addresses, ratings, hours). Covers authentication, sibling differentiation, and fallback behavior. Deducted one point as it could briefly acknowledge the primary 'query' parameter or pagination behavior explicitly, though this is minor given schema completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description implies the query parameter through usage examples but does not add semantic context, syntax guidance, or parameter relationships beyond what the schema already provides. No penalty since schema is comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Searches for local businesses and places using Brave's Local Search API' with specific scope (physical locations, businesses, restaurants) and distinguishes from brave_web_search by noting the Pro plan requirement and local focus. It also details specific return values (ratings, hours, phone numbers) clarifying the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: states 'confirm the user's plan before using this tool (if the user does not have a Pro plan, use the brave_web_search tool)' providing a clear alternative, and specifies exact trigger phrases ('near me', 'in my area', 'in San Francisco') for when to select this over siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
brave_news_search
This tool searches for news articles using Brave's News Search API based on the user's query. Use it when you need current news information, breaking news updates, or articles about specific topics, events, or entities.
When to use:
- Finding recent news articles on specific topics
- Getting breaking news updates
- Researching current events or trending stories
- Gathering news sources and headlines for analysis
Returns a JSON list of news-related results with title, url, and description. Some results may contain snippets of text from the article.
When relaying results in markdown-supporting environments, always cite sources with hyperlinks.
Examples:
- "According to [Reuters](https://www.reuters.com/technology/china-bans/), China bans uncertified and recalled power banks on planes".
- "The [New York Times](https://www.nytimes.com/2025/06/27/us/technology/ev-sales.html) reports that Tesla's EV sales have increased by 20%".
- "According to [BBC News](https://www.bbc.com/news/world-europe-65910000), the UK government has announced a new policy to support renewable energy".

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results (1-50, default 20) | |
| query | Yes | Search query (max 400 chars, 50 words) | |
| offset | No | Pagination offset (max 9, default 0) | |
| country | No | Search query country, where the results come from. The country string is limited to 2 character country codes of supported countries. | US |
| goggles | No | Goggles act as a custom re-ranking on top of Brave's search index. The parameter supports both a url where the Goggle is hosted or the definition of the Goggle. For more details, refer to the Goggles repository (i.e., https://github.com/brave/goggles-quickstart). | |
| ui_lang | No | User interface language preferred in response. Usually of the format <language_code>-<country_code>. For more, see RFC 9110. | en-US |
| freshness | No | Filters search results by when they were discovered. The following values are supported: 'pd' - Discovered within the last 24 hours. 'pw' - Discovered within the last 7 days. 'pm' - Discovered within the last 31 days. 'py' - Discovered within the last 365 days. 'YYYY-MM-DDtoYYYY-MM-DD' - Timeframe is also supported by specifying the date range e.g. 2022-04-01to2022-07-30. | pd |
| safesearch | No | Filters search results for adult content. The following values are supported: 'off' - No filtering. 'moderate' - Filter out explicit content. 'strict' - Filter out explicit and suggestive content. The default value is 'moderate'. | moderate |
| spellcheck | No | Whether to spellcheck provided query. | |
| search_lang | No | Search language preference. The 2 or more character language code for which the search results are provided. | en |
| extra_snippets | No | A snippet is an excerpt from a page you get as a result of the query, and extra_snippets allow you to get up to 5 additional, alternative excerpts. Only available under Free AI, Base AI, Pro AI, Base Data, Pro Data and Custom plans. |
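The `freshness` grammar above (shortcut codes or an explicit date range) is easy to get wrong when an agent constructs it from a natural-language request, so a client may want to validate it locally. This is a sketch under the schema's stated format; the helper name is hypothetical.

```python
import re

# 'pd'/'pw'/'pm'/'py' shortcuts, or an explicit YYYY-MM-DDtoYYYY-MM-DD range
FRESHNESS_RE = re.compile(r"^(pd|pw|pm|py|\d{4}-\d{2}-\d{2}to\d{4}-\d{2}-\d{2})$")


def validate_freshness(value: str) -> str:
    """Reject freshness values the schema does not support."""
    if not FRESHNESS_RE.match(value):
        raise ValueError(f"Unsupported freshness value: {value!r}")
    return value


ok = validate_freshness("2022-04-01to2022-07-30")
```

Note the regex checks only the format, not whether the date range is chronologically ordered.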
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only provide openWorldHint=true. The description adds valuable behavioral context: it discloses the return format (JSON list with title/url/description), mentions that snippets may be included, and provides specific citation formatting requirements for markdown output. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (Purpose → When to use → Returns → Citation guidance → Examples). Front-loaded with the core function. However, three full citation examples create verbosity; while helpful for format clarity, they could be condensed or the third example omitted without losing value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description appropriately explains return values (JSON structure with title/url/description) and adds crucial citation requirements for news attribution. Given the complexity (11 params) and lack of output schema, it adequately covers behavioral context, though it could briefly mention pagination concepts (offset parameter exists in schema but conceptual explanation is absent).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage across all 11 parameters, the schema carries the full semantic load. The description references 'based on the user's query' implying the required query parameter, but adds no additional parameter guidance beyond what the comprehensive schema already provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it 'searches for news articles using Brave's News Search API'—specific verb, resource, and provider. It clearly differentiates from siblings (brave_web_search, brave_image_search, etc.) by focusing specifically on 'news articles' and 'current news information' rather than general web or media content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'When to use' section with four specific scenarios (recent articles, breaking news, current events, news analysis). However, it lacks explicit guidance on when NOT to use this tool versus alternatives like brave_web_search, and doesn't mention prerequisites or rate limits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
brave_summarizer
Retrieves AI-generated summaries of web search results using Brave's Summarizer API. This tool processes search results to create concise, coherent summaries of information gathered from multiple sources.
When to use:
- When you need a concise overview of complex topics from multiple sources
- For quick fact-checking or getting key points without reading full articles
- When providing users with summarized information that synthesizes various perspectives
- For research tasks requiring distilled information from web searches
Returns a text summary that consolidates information from the search results. Optional features include inline references to source URLs and additional entity information.
Requirements: Must first perform a web search using brave_web_search with summary=true parameter. Requires a Pro AI subscription to access the summarizer functionality.

| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | The key is equal to the value of the `key` field in the Summarizer response model. | |
| entity_info | No | Returns extra entities info with the summary response. | |
| inline_references | No | Adds inline references to the summary response. |
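The prerequisite workflow (run brave_web_search with summary=true, then pass the returned key here) can be sketched as follows. The `summarizer.key` path into the search response is an assumed stand-in for wherever a given client surfaces the key, not a documented field layout.

```python
def summarizer_arguments(web_search_response: dict,
                         inline_references: bool = True) -> dict:
    """Pull the summarizer key out of a prior brave_web_search response
    (the response layout here is a hypothetical stand-in) and build
    the arguments for brave_summarizer."""
    key = web_search_response.get("summarizer", {}).get("key")
    if key is None:
        raise LookupError("No summarizer key; was the search run with summary=true?")
    return {"key": key, "inline_references": inline_references}


# Hypothetical response shape from a prior brave_web_search(summary=true) call.
fake_response = {"summarizer": {"key": "abc123"}}
args = summarizer_arguments(fake_response)
```

Raising when the key is absent catches the common mistake of calling the summarizer without having set summary=true on the preceding search.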
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only provide openWorldHint=true. The description adds significant behavioral context: the Pro AI subscription requirement (auth), the dependency chain on brave_web_search, and the output format ('text summary that consolidates information'). It mentions optional features (inline references, entity info) that explain behavioral variations. Lacks rate limits or error conditions, preventing a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, when to use, returns, requirements). Front-loaded with the core function. No redundant text—every sentence provides specific guidance on usage constraints, prerequisites, or output characteristics. Appropriate length for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description appropriately explains return values ('text summary'). With 100% schema coverage and only 3 simple parameters, the description compensates by explaining the critical prerequisite workflow and subscription requirements. Complete for a tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). The description adds value by explaining what the boolean parameters actually do in context: 'inline references to source URLs' maps to inline_references, and 'additional entity information' maps to entity_info. It implies the key's origin through the workflow description, though explicit parameter semantics for 'key' would strengthen this further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Retrieves') and clear resource ('AI-generated summaries of web search results using Brave's Summarizer API'). It effectively distinguishes from sibling tools by specifying this processes existing search results rather than performing raw searches like brave_web_search or media-specific searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit 'When to use:' section with four specific scenarios. Crucially, it states the prerequisite workflow: 'Must first perform a web search using brave_web_search with summary=true parameter,' explicitly naming the sibling tool required for operation. Also notes the subscription requirement, preventing inappropriate invocations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
brave_video_search
Searches for videos using Brave's Video Search API and returns structured video results with metadata.
When to use:
- When you need to find videos related to a specific topic, keyword, or query.
- Useful for discovering video content, getting video metadata, or finding videos from specific creators/publishers.
Returns a JSON list of video-related results with title, url, description, duration, and thumbnail_url.

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results (1-50, default 20). Combine this parameter with `offset` to paginate search results. | |
| query | Yes | The user's search query. Query cannot be empty. Limited to 400 characters and 50 words. | |
| offset | No | Pagination offset (max 9, default 0). Combine this parameter with `count` to paginate search results. | |
| country | No | Search query country, where the results come from. The country string is limited to 2 character country codes of supported countries. | US |
| ui_lang | No | User interface language preferred in response. Usually of the format <language_code>-<country_code>. For more, see RFC 9110. | en-US |
| freshness | No | Filters search results by when they were discovered. The following values are supported: 'pd' - Discovered within the last 24 hours. 'pw' - Discovered within the last 7 days. 'pm' - Discovered within the last 31 days. 'py' - Discovered within the last 365 days. 'YYYY-MM-DDtoYYYY-MM-DD' - timeframe is also supported by specifying the date range (e.g. '2022-04-01to2022-07-30'). | |
| safesearch | No | Filters search results for adult content. The following values are supported: 'off' - No filtering. 'moderate' - Filter out explicit content. 'strict' - Filter out explicit and suggestive content. The default value is 'moderate'. | moderate |
| spellcheck | No | Whether to spellcheck provided query. | |
| search_lang | No | Search language preference. The 2 or more character language code for which the search results are provided. | en |
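The `count`/`offset` pagination contract above (offset capped at 9) means only a bounded window of results is reachable; a client can plan its pages up front. `video_search_pages` is a hypothetical helper illustrating that constraint.

```python
def video_search_pages(total_wanted: int, page_size: int = 20):
    """Yield (count, offset) pairs for paging brave_video_search results.
    offset is capped at 9 per the schema, so at most 10 pages are reachable."""
    if not 1 <= page_size <= 50:
        raise ValueError("count must be between 1 and 50.")
    pages = -(-total_wanted // page_size)  # ceiling division
    for offset in range(min(pages, 10)):   # offset runs 0..9
        yield page_size, offset


pairs = list(video_search_pages(45, page_size=20))  # -> [(20, 0), (20, 1), (20, 2)]
```

Requests beyond the tenth page are silently dropped rather than sent with an out-of-range offset.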
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description effectively supplements the openWorldHint annotation by confirming the use of 'Brave's Video Search API' and detailing the return format ('JSON list of video-related results with title, url, description, duration, and thumbnail_url') since no output schema is provided. It does not mention rate limits or authentication requirements, but aligns well with the external resource hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with three distinct sections: purpose statement, usage guidelines, and return value specification. It is appropriately sized with no redundant text—every sentence provides necessary context not duplicated in the schema or annotations. Information is front-loaded with the core purpose stated immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema (9 parameters, 100% coverage) and lack of output schema, the description adequately compensates by describing the JSON return structure. It covers tool purpose, usage scenarios, and output format. Could be improved by mentioning pagination behavior or country/language filtering strategies, but is sufficiently complete for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema comprehensively documents all 9 parameters including constraints and defaults. The description provides no additional parameter semantics, which is acceptable given the baseline of 3 for high schema coverage, though it could have highlighted important parameters like 'freshness' or pagination patterns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Searches for videos using Brave's Video Search API and returns structured video results with metadata,' providing specific verb and resource. However, it does not explicitly distinguish from sibling tools like brave_web_search or brave_image_search, though the 'When to use' section implies video-specific use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes a dedicated 'When to use:' section with specific scenarios ('find videos related to a specific topic,' 'discovering video content'). This provides clear positive guidance for the agent. Lacks explicit 'when not to use' guidance or mention of alternatives (e.g., brave_web_search for general web pages), preventing a score of 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
brave_web_search
Performs web searches using the Brave Search API and returns comprehensive search results with rich metadata.
When to use:
- General web searches for information, facts, or current topics
- Location-based queries (restaurants, businesses, points of interest)
- News searches for recent events or breaking stories
- Finding videos, discussions, or FAQ content
- Research requiring diverse result types (web pages, images, reviews, etc.)
Returns a JSON list of web results with title, description, and URL.
When the "result_filter" parameter is empty, JSON results may also contain FAQ, Discussions, News, and Video results.

| Name | Required | Description | Default |
|---|---|---|---|
| count | No | Number of results (1-20, default 10). Applies only to web search results (i.e., has no effect on locations, news, videos, etc.) | |
| query | Yes | Search query (max 400 chars, 50 words) | |
| units | No | The measurement units. If not provided, units are derived from search country. | |
| offset | No | Pagination offset (max 9, default 0) | |
| country | No | Search query country, where the results come from. The country string is limited to 2 character country codes of supported countries. | US |
| goggles | No | Goggles act as a custom re-ranking on top of Brave's search index. The parameter supports both a url where the Goggle is hosted or the definition of the Goggle. For more details, refer to the Goggles repository (i.e., https://github.com/brave/goggles-quickstart). | |
| summary | No | This parameter enables summary key generation in web search results. This is required for summarizer to be enabled. | |
| ui_lang | No | The language of the UI. The 2 or more character language code for which the search results are provided. | en-US |
| freshness | No | Filters search results by when they were discovered. The following values are supported: 'pd' - Discovered within the last 24 hours. 'pw' - Discovered within the last 7 days. 'pm' - Discovered within the last 31 days. 'py' - Discovered within the last 365 days. 'YYYY-MM-DDtoYYYY-MM-DD' - Timeframe is also supported by specifying the date range e.g. 2022-04-01to2022-07-30. | |
| safesearch | No | Filters search results for adult content. The following values are supported: 'off' - No filtering. 'moderate' - Filters explicit content (e.g., images and videos), but allows adult domains in search results. 'strict' - Drops all adult content from search results. The default value is 'moderate'. | moderate |
| spellcheck | No | Whether to spellcheck the provided query. | |
| search_lang | No | Search language preference. The 2 or more character language code for which the search results are provided. | en |
| result_filter | No | Result filter (default ['web', 'query']). When left empty, results may include additional types such as FAQ, Discussions, News, and Videos. | |
| extra_snippets | No | A snippet is an excerpt from a page you get as a result of the query, and extra_snippets allow you to get up to 5 additional, alternative excerpts. Only available under Free AI, Base AI, Pro AI, Base Data, Pro Data and Custom plans. | |
| text_decorations | No | Whether display strings (e.g., result snippets) should include decoration markers (e.g., highlighting characters). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With only openWorldHint in annotations, the description carries significant weight. It adds valuable behavioral context: the return format (JSON list with title/description/URL) and the conditional behavior where an empty result_filter may include FAQ/News/Video results. It does not contradict the openWorldHint annotation and appropriately discloses the comprehensive nature of the search.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is excellently structured with a one-sentence purpose summary, a bulleted 'When to use' list for scannability, and precise notes on return values and parameter behavior. Every sentence serves a distinct purpose without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (15 parameters) and absence of an output schema, the description adequately covers the tool's purpose, usage contexts, and basic return structure. The 100% schema parameter coverage compensates for the lack of detailed output schema documentation, though additional detail on the structure of specialized result types (news, videos) would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds specific semantic value by explaining the behavior of the 'result_filter' parameter when empty versus its default state, clarifying that an empty filter yields additional result types (FAQ, Discussions, etc.) beyond the default web+query results.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Performs web searches using the Brave Search API,' identifying the specific verb, resource, and underlying service. It distinguishes itself from siblings (brave_news_search, brave_video_search, etc.) by positioning itself as the comprehensive, general-purpose search that can return diverse result types including web pages, FAQs, discussions, and videos.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'When to use' section listing specific scenarios (general information, location-based queries, news, videos/FAQs). However, it does not explicitly name sibling alternatives (e.g., 'use brave_news_search for dedicated news filtering') or provide negative constraints (when not to use this tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
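Before publishing, the payload can be sanity-checked locally. This is a minimal sketch that checks only the two fields shown above; any further rules live in the referenced connector.json schema:

```python
import json

def validate_glama_json(text: str) -> bool:
    """Check that a /.well-known/glama.json payload has the expected shape."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError:
        return False
    maintainers = doc.get("maintainers")
    return (
        doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
        and isinstance(maintainers, list)
        and len(maintainers) > 0
        and all(isinstance(m, dict) and "email" in m for m in maintainers)
    )

payload = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
ok = validate_glama_json(payload)
```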
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.