
Finance MCP

by FlowLLM-AI

tavily_search

Search the internet for financial information to support research and analysis. Retrieve relevant data for stock, fund, and market investigations using targeted queries.

Instructions

Use search keywords to retrieve relevant information from the internet.

Input Schema

| Name  | Required | Description    | Default |
| ----- | -------- | -------------- | ------- |
| query | Yes      | search keyword |         |
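Rendered as standard JSON Schema, the input contract above would look roughly like the following. This is a sketch: the exact wrapper keys depend on how the MCP server serializes its tool definitions.

```json
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "description": "search keyword"
    }
  },
  "required": ["query"]
}
```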

Implementation Reference

  • The async_execute method performs the Tavily web search, handles caching, applies optional content extraction with character limits, and outputs JSON results.
    async def async_execute(self):
        """Execute the Tavily web search for the given query.
    
        The query is read from ``input_dict['query']`` and the result is
        either the raw Tavily search output or a post-processed mapping
        with optional extracted content, depending on ``enable_extract``.
        """
    
        query: str = self.input_dict["query"]
        logger.info(f"tavily.query: {query}")
    
        if self.enable_cache:
            cached_result = self.cache.load(query)
            if cached_result:
                self.set_output(json.dumps(cached_result, ensure_ascii=False, indent=2))
                return
    
        response = await self.client.search(query=query)
        logger.info(f"tavily.response: {response}")
    
        if not self.enable_extract:
            # If extraction is not needed, return the search results directly
            if not response.get("results"):
                raise RuntimeError("tavily return empty result")
    
            final_result = {item["url"]: item for item in response["results"]}
    
            if self.enable_cache and final_result:
                self.cache.save(query, final_result, expire_hours=self.cache_expire_hours)
    
            self.set_output(json.dumps(final_result, ensure_ascii=False, indent=2))
            return
    
        # Original logic for when enable_extract=True
        url_info_dict = {item["url"]: item for item in response["results"]}
        response_extract = await self.client.extract(urls=[item["url"] for item in response["results"]])
        logger.info(f"tavily.response_extract: {response_extract}")
    
        final_result = {}
        all_char_count = 0
        for item in response_extract["results"]:
            url = item["url"]
            raw_content: str = item["raw_content"]
            if len(raw_content) > self.item_max_char_count:
                raw_content = raw_content[: self.item_max_char_count]
            if all_char_count + len(raw_content) > self.all_max_char_count:
                raw_content = raw_content[: self.all_max_char_count - all_char_count]
    
            if raw_content:
                final_result[url] = url_info_dict[url]
                final_result[url]["raw_content"] = raw_content
                all_char_count += len(raw_content)
    
        if not final_result:
            raise RuntimeError("tavily return empty result")
    
        if self.enable_cache and final_result:
            self.cache.save(query, final_result, expire_hours=self.cache_expire_hours)
    
        self.set_output(json.dumps(final_result, ensure_ascii=False, indent=2))
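The two-level truncation above (a per-item cap plus a shared overall budget) can be isolated as a small pure function. The sketch below is not the project's code; `item_max` and `all_max` stand in for the op's `item_max_char_count` and `all_max_char_count` settings:

```python
def truncate_contents(contents: dict[str, str], item_max: int, all_max: int) -> dict[str, str]:
    """Apply a per-item character cap, then a shared overall budget.

    `contents` maps URL -> raw text; items are processed in order and
    dropped once the overall budget is exhausted.
    """
    result: dict[str, str] = {}
    used = 0
    for url, text in contents.items():
        text = text[:item_max]                 # per-item cap
        text = text[: max(all_max - used, 0)]  # remaining overall budget
        if text:                               # skip items with nothing left
            result[url] = text
            used += len(text)
    return result
```

As in the original method, items that end up empty after truncation are simply omitted; the op additionally raises a RuntimeError when nothing survives at all.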
  • Defines the input schema and description for the tavily_search tool: requires a 'query' string.
    def build_tool_call(self) -> ToolCall:
        """Build the tool call schema for the Tavily web search tool."""
        return ToolCall(
            **{
                "description": "Use search keywords to retrieve relevant information from the internet.",
                "input_schema": {
                    "query": {
                        "type": "string",
                        "description": "search keyword",
                        "required": True,
                    },
                },
            },
        )
  • Registers the TavilySearchOp class (tool named 'tavily_search') as an MCP tool operation via @C.register_op() decorator.
    @C.register_op()
    class TavilySearchOp(BaseAsyncToolOp):
        """Asynchronous web search operation backed by the Tavily API."""
    
        file_path: str = __file__
  • Property that lazily creates the Tavily AsyncTavilyClient using the TAVILY_API_KEY environment variable.
    @property
    def client(self):
        """Get or create the Tavily async client instance.
    
        Returns:
            AsyncTavilyClient: The Tavily async client instance.
        """
        if self._client is None:
            from tavily import AsyncTavilyClient
    
            self._client = AsyncTavilyClient(api_key=os.environ.get("TAVILY_API_KEY", ""))
        return self._client
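The `self.cache` object above is used through a `load(query)` / `save(query, value, expire_hours=...)` interface. A minimal in-memory stand-in with that shape (an assumed interface; the real project may persist entries to disk) could look like:

```python
import time


class SimpleTTLCache:
    """In-memory cache keyed by query string, with per-entry expiry.

    A hypothetical stand-in for the op's cache; only the load/save
    interface is taken from the code above.
    """

    def __init__(self):
        self._store = {}

    def save(self, key, value, expire_hours=24):
        # Record the value together with its absolute expiry timestamp.
        self._store[key] = (value, time.time() + expire_hours * 3600)

    def load(self, key):
        # Return the cached value, or None if absent or expired.
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self._store[key]
            return None
        return value
```

Note that `async_execute` treats any falsy `load()` result as a cache miss and falls through to a live search, so returning None here matches its expectations.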
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions retrieving information but lacks details on rate limits, authentication needs, result format, pagination, or error handling. This is inadequate for a search tool that likely has operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality ('Use search keywords to retrieve relevant information') with no wasted words. It's appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description is insufficient. It doesn't explain what kind of information is returned, how results are structured, or any limitations (e.g., number of results, sources). Given the complexity of internet search and lack of structured data, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'query' clearly documented as 'search keyword'. The description adds no additional parameter semantics beyond what the schema provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('retrieve') and resource ('relevant information from the internet'), and distinguishes it from siblings like 'crawl_url' or 'mock_search' by focusing on search functionality. However, it doesn't explicitly differentiate from 'dashscope_search' which appears to be another search tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'dashscope_search' or 'mock_search', nor does it mention any prerequisites or exclusions. It only states the basic function without contextual usage information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
