Glama

brave_search_summary

Search the web using Brave Search to find and summarize information for queries, providing concise results through the MCP2Brave server.

Instructions

Search the web for information using the Brave search engine.

Input Schema

  • query (required; no description or default provided)

Implementation Reference

  • The primary handler function for the 'brave_search_summary' tool. It is decorated with @mcp.tool() which serves as both the handler definition and registration in FastMCP. The function takes a single string parameter 'query' and returns a string result by delegating to the internal _do_search_with_summary helper.
    @mcp.tool()
    def brave_search_summary(query: str) -> str:
        """Search the web for information using the Brave search engine"""
        return _do_search_with_summary(query)
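As a rough illustration of the pattern described above (not FastMCP's actual internals), a decorator can serve as both handler definition and registration. The `ToolRegistry` class below is a hypothetical stand-in invented for this sketch:

```python
from typing import Callable, Dict


class ToolRegistry:
    """Minimal, illustrative stand-in for FastMCP's tool registration."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable[..., str]] = {}

    def tool(self) -> Callable:
        # Return a decorator that records the function under its own name
        def decorator(func: Callable[..., str]) -> Callable[..., str]:
            self.tools[func.__name__] = func
            return func
        return decorator


registry = ToolRegistry()


@registry.tool()
def brave_search_summary(query: str) -> str:
    # Toy body; the real tool delegates to _do_search_with_summary
    return f"searched: {query}"


print("brave_search_summary" in registry.tools)  # → True
```

Decorating with `@registry.tool()` leaves the function callable as-is while also making it discoverable by name, which is why a single decorator can cover both definition and registration.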
  • The core helper function implementing the Brave Search API call, result processing, summary generation (using official summarizer or fallback to content extraction from top results), and formatting of search results.
    def _do_search_with_summary(query: str) -> str:
        """Internal function to handle the search logic with summary support"""
        try:
            query = query.encode('utf-8').decode('utf-8')
            url = "https://api.search.brave.com/res/v1/web/search"
            
            headers = {
                "Accept": "application/json",
                "X-Subscription-Token": API_KEY
            }
            
            params = {
                "q": query,
                "count": 5,
                "result_filter": "web",
                "enable_summarizer": True,
                "format": "json"
            }
            
            response = requests.get(url, headers=headers, params=params)
            response.raise_for_status()
            data = response.json()
            
            logger.debug("API Response Structure:")
            logger.debug(f"Response Keys: {list(data.keys())}")
            
            # Process search results
            summary_text = ""
            search_results = []
            
            # Get web search results
            if 'web' in data and 'results' in data['web']:
                results = data['web']['results']
                
                # Get the summary
                if 'summarizer' in data:
                    logger.debug("Found official summarizer data")
                    summary = data.get('summarizer', {})
                    summary_text = summary.get('text', '')
                else:
                    logger.debug("No summarizer found, generating summary from top results")
                    # Use the content of the top two results as the summary
                    try:
                        summaries = []
                        for result in results[:2]:  # only process the top two results
                            url = result.get('url')
                            if url:
                                logger.debug(f"Fetching content from: {url}")
                                content = _get_url_content_direct(url)
                                # Extract the text content from the HTML
                                raw_content = content.split('---\n\n')[-1]
                                text_content = _extract_text_from_html(raw_content)
                                if text_content:
                                    # Add title and source information
                                    title = result.get('title', 'No title')
                                    date = result.get('age', '') or result.get('published_time', '')
                                    summaries.append(f"### {title}")
                                    if date:
                                        summaries.append(f"Published: {date}")
                                    summaries.append(text_content)
                        
                        if summaries:
                            summary_text = "\n\n".join([
                                "Generated summary from top results:",
                                *summaries
                            ])
                            logger.debug("Successfully generated summary from content")
                        else:
                            summary_text = results[0].get('description', '')
                    except Exception as e:
                        logger.error(f"Error generating summary from content: {str(e)}")
                        summary_text = results[0].get('description', '')
                
                # Format search results for display
                for result in results:
                    title = result.get('title', 'No title').encode('utf-8').decode('utf-8')
                    url = result.get('url', 'No URL')
                    description = result.get('description', 'No description').encode('utf-8').decode('utf-8')
                    search_results.append(f"- {title}\n  URL: {url}\n  Description: {description}\n")
            
            # Assemble the output
            output = []
            if summary_text:
                output.append(f"Summary:\n{summary_text}\n")
            if search_results:
                output.append("Search Results:\n" + "\n".join(search_results))
            
            logger.debug(f"Has summary: {bool(summary_text)}")
            logger.debug(f"Number of results: {len(search_results)}")
            
            return "\n".join(output) if output else "No results found for your query."
            
        except Exception as e:
            logger.error(f"Search error: {str(e)}")
            logger.exception("Detailed error trace:")
            return f"Error performing search: {str(e)}"
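The output-assembly portion of the handler can be exercised without a live API call. The sketch below mirrors that formatting logic against a fabricated response dict (the sample data is invented for illustration, not real Brave API output):

```python
def format_results(data: dict) -> str:
    """Mirror of the output-assembly logic above, using sample data."""
    summary_text = ""
    search_results = []
    results = data.get("web", {}).get("results", [])
    if results:
        # Prefer the official summarizer text, falling back to the
        # first result's description, as the handler does
        summary_text = (data.get("summarizer", {}).get("text", "")
                        or results[0].get("description", ""))
        for result in results:
            search_results.append(
                f"- {result.get('title', 'No title')}\n"
                f"  URL: {result.get('url', 'No URL')}\n"
                f"  Description: {result.get('description', 'No description')}\n"
            )
    output = []
    if summary_text:
        output.append(f"Summary:\n{summary_text}\n")
    if search_results:
        output.append("Search Results:\n" + "\n".join(search_results))
    return "\n".join(output) if output else "No results found for your query."


sample = {"web": {"results": [{
    "title": "Brave Search",
    "url": "https://search.brave.com",
    "description": "Private search.",
}]}}
print(format_results(sample))
```

An empty response produces the fallback string `"No results found for your query."`, matching the handler's final return path.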
  • Helper function to extract meaningful text from HTML content, used in summary generation fallback. Removes scripts/styles, finds main content, cleans and limits text length.
    def _extract_text_from_html(html_content: str) -> str:
        """Extract meaningful text from HTML content"""
        try:
            from bs4 import BeautifulSoup
            soup = BeautifulSoup(html_content, 'html.parser')
            
            # Remove unwanted elements
            for element in soup(['script', 'style', 'header', 'footer', 'nav', 'aside', 'iframe', 'ad', '.advertisement']):
                element.decompose()
            
            # Prefer extracting the main article content
            article = soup.find('article')
            if article:
                content = article
            else:
                # Try to find the main content area
                content = soup.find(['main', '.content', '#content', '.post-content', '.article-content'])
                if not content:
                    content = soup
            
            # Get the text
            text = content.get_text(separator='\n')
            
            # Clean the text
            lines = []
            for line in text.split('\n'):
                line = line.strip()
                # Skip empty lines and lines that are too short
                if line and len(line) > 30:
                    lines.append(line)
            
            # Join the text, limited to 1000 characters
            cleaned_text = ' '.join(lines)
            if len(cleaned_text) > 1000:
                # Try to truncate at a sentence boundary
                end_pos = cleaned_text.rfind('. ', 0, 1000)
                if end_pos > 0:
                    cleaned_text = cleaned_text[:end_pos + 1]
                else:
                    cleaned_text = cleaned_text[:1000]
            
            return cleaned_text
            
        except Exception as e:
            logger.error(f"Error extracting text from HTML: {str(e)}")
            # If the HTML cannot be processed, return part of the raw content
            text = html_content.replace('<', ' <').replace('>', '> ').split()
            return ' '.join(text)[:500]
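The same skip-the-chrome, keep-the-text idea can be sketched without BeautifulSoup using only the standard library's `html.parser`. This is a simplified, dependency-free alternative written for this page, not the server's actual code:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping content inside non-content elements."""

    SKIP = {"script", "style", "header", "footer", "nav", "aside", "iframe"}

    def __init__(self) -> None:
        super().__init__()
        self.depth = 0   # nesting depth inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when we are not inside a skipped element
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())


def extract_text(html: str, limit: int = 1000) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)[:limit]


html = "<html><script>var x=1;</script><article><p>Hello world.</p></article></html>"
print(extract_text(html))  # → Hello world.
```

Unlike the BeautifulSoup version, this sketch cannot match CSS selectors such as `.content` or `#content`, so it trades some precision for having no third-party dependency.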
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. The description only states it searches web information using Brave - it doesn't mention what kind of results are returned (summaries, links, full content), whether there are rate limits, authentication requirements, privacy implications, or any other behavioral characteristics. For a search tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise - a single Chinese sentence that directly states the tool's function. There's no wasted language or unnecessary elaboration. It's front-loaded with the essential information: using Brave to search web information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a search tool with no annotations, no output schema, and 0% schema description coverage, the description is incomplete. It doesn't explain what the tool returns (summaries, links, structured data), how results are formatted, or any limitations. With sibling tools that seem related (like 'search_brave_with_summary'), the lack of differentiation makes the context incomplete for proper tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides no information about parameters beyond what's in the schema. With 0% schema description coverage and 1 parameter ('query'), the description doesn't add any semantic meaning - it doesn't explain what constitutes a good query, query formatting, language support, or search scope. The description must compensate for the schema's lack of parameter descriptions but fails to do so.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('搜索网络信息' - search web information) and the resource/engine ('使用Brave搜索引擎' - using Brave search engine). It doesn't specifically differentiate from sibling tools like 'search_brave_with_summary' or 'search_news', but the purpose is unambiguous. The description goes beyond just restating the name by specifying the search engine being used.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'search_brave_with_summary', 'search_news', 'search_news_info', and 'get_url_content_direct', there's no indication of when this specific Brave search tool is preferred over those alternatives. The description only states what it does, not when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mcp2everything/mcp2brave'

If you have feedback or need assistance with the MCP directory API, please join our Discord server