tavily_search

Search the internet for financial information to support research and analysis. Retrieve relevant data for stock, fund, and market investigations using targeted queries.

Instructions

Use search keywords to retrieve relevant information from the internet.

Input Schema

Name     Required   Description      Default
query    Yes        search keyword   -
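
The only required field is query. As an illustration (the query text below is invented), the arguments sent to the tool would look like this:

    # Hypothetical arguments for a tavily_search call; "query" is the only
    # field defined by the input schema above.
    arguments = {"query": "NVIDIA Q2 FY2025 data center revenue"}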

Implementation Reference

  • The async_execute method that performs the Tavily web search, handles caching and optional content extraction with character limits, and outputs JSON results (a usage sketch follows this reference).
    async def async_execute(self):
        """Execute the Tavily web search for the given query.

        The query is read from ``input_dict['query']`` and the result is either
        the raw Tavily search output or a post-processed mapping with optional
        extracted content, depending on ``enable_extract``.
        """
        query: str = self.input_dict["query"]
        logger.info(f"tavily.query: {query}")

        if self.enable_cache:
            cached_result = self.cache.load(query)
            if cached_result:
                self.set_output(json.dumps(cached_result, ensure_ascii=False, indent=2))
                return

        response = await self.client.search(query=query)
        logger.info(f"tavily.response: {response}")

        if not self.enable_extract:
            # If extraction is disabled, return the search results directly.
            if not response.get("results"):
                raise RuntimeError("tavily return empty result")
            final_result = {item["url"]: item for item in response["results"]}
            if self.enable_cache and final_result:
                self.cache.save(query, final_result, expire_hours=self.cache_expire_hours)
            self.set_output(json.dumps(final_result, ensure_ascii=False, indent=2))
            return

        # Original logic for enable_extract=True.
        url_info_dict = {item["url"]: item for item in response["results"]}
        response_extract = await self.client.extract(urls=[item["url"] for item in response["results"]])
        logger.info(f"tavily.response_extract: {response_extract}")

        final_result = {}
        all_char_count = 0
        for item in response_extract["results"]:
            url = item["url"]
            raw_content: str = item["raw_content"]
            if len(raw_content) > self.item_max_char_count:
                raw_content = raw_content[: self.item_max_char_count]
            if all_char_count + len(raw_content) > self.all_max_char_count:
                raw_content = raw_content[: self.all_max_char_count - all_char_count]
            if raw_content:
                final_result[url] = url_info_dict[url]
                final_result[url]["raw_content"] = raw_content
                all_char_count += len(raw_content)

        if not final_result:
            raise RuntimeError("tavily return empty result")

        if self.enable_cache and final_result:
            self.cache.save(query, final_result, expire_hours=self.cache_expire_hours)

        self.set_output(json.dumps(final_result, ensure_ascii=False, indent=2))
  • Defines the input schema and description for the tavily_search tool: requires a 'query' string.
    def build_tool_call(self) -> ToolCall:
        """Build the tool call schema for the Tavily web search tool."""
        return ToolCall(
            **{
                "description": "Use search keywords to retrieve relevant information from the internet.",
                "input_schema": {
                    "query": {
                        "type": "string",
                        "description": "search keyword",
                        "required": True,
                    },
                },
            },
        )
  • Registers the TavilySearchOp class (tool named 'tavily_search') as an MCP tool operation via @C.register_op() decorator.
    @C.register_op()
    class TavilySearchOp(BaseAsyncToolOp):
        """Asynchronous web search operation backed by the Tavily API."""

        file_path: str = __file__
  • Property for lazy-loading the Tavily AsyncTavilyClient using TAVILY_API_KEY env var.
    @property
    def client(self):
        """Get or create the Tavily async client instance.

        Returns:
            AsyncTavilyClient: The Tavily async client instance.
        """
        if self._client is None:
            from tavily import AsyncTavilyClient

            self._client = AsyncTavilyClient(api_key=os.environ.get("TAVILY_API_KEY", ""))
        return self._client
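
Putting the pieces together, a driver for the op might look like the sketch below. This is hypothetical and not part of the repository: how TavilySearchOp is constructed, how input_dict is populated, and how the value passed to set_output is read back are framework details assumed here for illustration; only input_dict["query"], async_execute, and the TAVILY_API_KEY environment variable come from the code above.

    import asyncio
    import os

    # TavilySearchOp is defined in the repository; its import path is not shown
    # on this page, so the import is left as a placeholder comment.
    # from <repository module> import TavilySearchOp

    # The Tavily client reads TAVILY_API_KEY from the environment.
    os.environ.setdefault("TAVILY_API_KEY", "your-tavily-api-key")

    async def main():
        op = TavilySearchOp()  # assumption: the op can be instantiated directly
        op.input_dict = {"query": "Tesla 2024 annual revenue"}  # the only required input
        await op.async_execute()  # runs the search (and extraction, if enabled)
        # set_output() received a JSON string mapping each result URL to its Tavily result item.

    asyncio.run(main())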

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/FlowLLM-AI/finance-mcp'
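
The same lookup can be made from Python; below is a minimal sketch using the requests library (the shape of the returned JSON is not documented on this page):

    import requests

    # Fetch the finance-mcp server entry from the Glama MCP directory API.
    resp = requests.get("https://glama.ai/api/mcp/v1/servers/FlowLLM-AI/finance-mcp")
    resp.raise_for_status()
    print(resp.json())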

If you have feedback or need assistance with the MCP directory API, please join our Discord server.