
Vectara MCP server

Official
by vectara

search_vectara

Execute a semantic search query using Vectara to retrieve contextually relevant results without generation. Provide a query, corpus keys, and API key to access matching search results from specified corpora.

Instructions

Run a semantic search query using Vectara, without generation.

Args:
    query: str, the user query to run - required.
    corpus_keys: list[str], list of Vectara corpus keys to use for the search - required. Please ask the user to provide one or more corpus keys.
    api_key: str, the Vectara API key - required.
    n_sentences_before: int, number of sentences before the answer to include in the context - optional, default is 2.
    n_sentences_after: int, number of sentences after the answer to include in the context - optional, default is 2.
    lexical_interpolation: float, the amount of lexical interpolation to use - optional, default is 0.005.

Returns:
    The response from Vectara, including the matching search results.

Input Schema

Name                   Required  Description                                                      Default
api_key                No        The Vectara API key                                              ""
corpus_keys            No        List of Vectara corpus keys to use for the search                []
lexical_interpolation  No        Amount of lexical interpolation to use                           0.005
n_sentences_after      No        Number of sentences after the answer to include in the context   2
n_sentences_before     No        Number of sentences before the answer to include in the context  2
query                  Yes       The user query to run

Input Schema (JSON Schema)

{
  "properties": {
    "api_key": {
      "default": "",
      "title": "Api Key",
      "type": "string"
    },
    "corpus_keys": {
      "default": [],
      "items": {
        "type": "string"
      },
      "title": "Corpus Keys",
      "type": "array"
    },
    "lexical_interpolation": {
      "default": 0.005,
      "title": "Lexical Interpolation",
      "type": "number"
    },
    "n_sentences_after": {
      "default": 2,
      "title": "N Sentences After",
      "type": "integer"
    },
    "n_sentences_before": {
      "default": 2,
      "title": "N Sentences Before",
      "type": "integer"
    },
    "query": {
      "title": "Query",
      "type": "string"
    }
  },
  "required": [
    "query"
  ],
  "title": "search_vectaraArguments",
  "type": "object"
}
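As a quick illustration (a minimal sketch, not part of the server), the helper below merges caller-supplied arguments over the schema's defaults; only `query` is required, and every other field falls back to its schema default:

```python
import json

# Defaults taken from the JSON Schema above.
SCHEMA_DEFAULTS = {
    "api_key": "",
    "corpus_keys": [],
    "lexical_interpolation": 0.005,
    "n_sentences_after": 2,
    "n_sentences_before": 2,
}

def with_defaults(arguments: dict) -> dict:
    """Merge caller-supplied arguments over the schema defaults.

    Raises ValueError if the one required field, "query", is missing.
    """
    if "query" not in arguments:
        raise ValueError("'query' is required")
    return {**SCHEMA_DEFAULTS, **arguments}

args = with_defaults({"query": "What is RAG?", "corpus_keys": ["my-corpus"]})
print(json.dumps(args, indent=2))
```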

Implementation Reference

  • The @mcp.tool()-decorated async function search_vectara implements the core tool logic: parameter validation, payload construction for semantic search (no generation), API call to Vectara query endpoint, and error handling with progress reporting.
    @mcp.tool()
    async def search_vectara(
        query: str,
        ctx: Context,
        corpus_keys: list[str],
        n_sentences_before: int = 2,
        n_sentences_after: int = 2,
        lexical_interpolation: float = 0.005
    ) -> dict:
        """
        Run a semantic search query using Vectara, without generation.

        Args:
            query: str, The user query to run - required.
            corpus_keys: list[str], List of Vectara corpus keys to use for the search - required.
                Please ask the user to provide one or more corpus keys.
            n_sentences_before: int, Number of sentences before the answer to include in the context - optional, default is 2.
            n_sentences_after: int, Number of sentences after the answer to include in the context - optional, default is 2.
            lexical_interpolation: float, The amount of lexical interpolation to use - optional, default is 0.005.

        Note: API key must be configured first using the 'setup_vectara_api_key' tool.

        Returns:
            dict: Raw search results from the Vectara API containing:
                - "search_results": List of search result objects with scores, text, and metadata
                - Additional response metadata from the API
            On error, returns a dict with an "error" key containing the error message.
        """
        # Validate parameters
        validation_error = _validate_common_parameters(query, corpus_keys)
        if validation_error:
            return {"error": validation_error}

        if ctx:
            ctx.info(f"Running Vectara semantic search query: {query}")

        try:
            payload = _build_query_payload(
                query=query,
                corpus_keys=corpus_keys,
                n_sentences_before=n_sentences_before,
                n_sentences_after=n_sentences_after,
                lexical_interpolation=lexical_interpolation,
                enable_generation=False
            )
            result = await _call_vectara_query(payload, ctx)
            return result
        except Exception as e:
            return {"error": _format_error("Vectara semantic search query", e)}
  • The @mcp.tool() decorator registers the search_vectara function as an MCP tool with the FastMCP server instance.
    @mcp.tool()
  • Function signature defines the input schema (parameters with types and defaults) and output type (dict) for the search_vectara tool, with detailed docstring description.
    async def search_vectara(
        query: str,
        ctx: Context,
        corpus_keys: list[str],
        n_sentences_before: int = 2,
        n_sentences_after: int = 2,
        lexical_interpolation: float = 0.005
    ) -> dict:
  • Helper function _build_query_payload constructs the API payload for search_vectara, configuring search parameters, reranker, and conditionally generation settings.
    def _build_query_payload(
        query: str,
        corpus_keys: list[str],
        n_sentences_before: int = 2,
        n_sentences_after: int = 2,
        lexical_interpolation: float = 0.005,
        max_used_search_results: int = 10,
        generation_preset_name: str = "vectara-summary-table-md-query-ext-jan-2025-gpt-4o",
        response_language: str = "eng",
        enable_generation: bool = True
    ) -> dict:
        """Build the query payload for Vectara API"""
        payload = {
            "query": query,
            "search": {
                "limit": 100,
                "corpora": [
                    {
                        "corpus_key": corpus_key,
                        "lexical_interpolation": lexical_interpolation
                    }
                    for corpus_key in corpus_keys
                ],
                "context_configuration": {
                    "sentences_before": n_sentences_before,
                    "sentences_after": n_sentences_after
                },
                "reranker": {
                    "type": "customer_reranker",
                    "reranker_name": "Rerank_Multilingual_v1",
                    "limit": 100,
                    "cutoff": 0.2
                }
            },
            "save_history": True,
        }
        if enable_generation:
            payload["generation"] = {
                "generation_preset_name": generation_preset_name,
                "max_used_search_results": max_used_search_results,
                "response_language": response_language,
                "citations": {
                    "style": "markdown",
                    "url_pattern": "{doc.url}",
                    "text_pattern": "{doc.title}"
                },
                "enable_factual_consistency_score": True
            }
        return payload
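  • For illustration, a standalone sketch mirroring _build_query_payload with enable_generation=False (as search_vectara calls it), showing the search-only payload shape — no "generation" block is attached:

```python
def build_search_only_payload(query, corpus_keys, lexical_interpolation=0.005,
                              n_sentences_before=2, n_sentences_after=2):
    # Mirrors _build_query_payload(..., enable_generation=False): search config only.
    return {
        "query": query,
        "search": {
            "limit": 100,
            "corpora": [
                {"corpus_key": key, "lexical_interpolation": lexical_interpolation}
                for key in corpus_keys
            ],
            "context_configuration": {
                "sentences_before": n_sentences_before,
                "sentences_after": n_sentences_after,
            },
            "reranker": {
                "type": "customer_reranker",
                "reranker_name": "Rerank_Multilingual_v1",
                "limit": 100,
                "cutoff": 0.2,
            },
        },
        "save_history": True,
    }

payload = build_search_only_payload("What is RAG?", ["corpus_a", "corpus_b"])
print(payload)
```

Note that one corpora entry is emitted per corpus key, so multi-corpus searches fan out inside a single query.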
  • Helper function _call_vectara_query makes the HTTP POST to Vectara's /query endpoint using shared request logic.
    async def _call_vectara_query(
        payload: dict,
        ctx: Context = None,
        api_key_override: str = None
    ) -> dict:
        """Make API call to Vectara query endpoint"""
        return await _make_api_request(
            f"{VECTARA_BASE_URL}/query",
            payload,
            ctx,
            api_key_override,
            "query"
        )
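  • The shared helper _make_api_request is not shown on this page. As a rough, hypothetical sketch of the request flow it presumably implements — the base URL and the x-api-key header are assumptions drawn from Vectara's v2 REST API, and the injectable send callable exists purely so the flow can be exercised without a live network call:

```python
import asyncio
import json

VECTARA_BASE_URL = "https://api.vectara.io/v2"  # assumed base URL for Vectara's v2 API

async def make_api_request_sketch(url: str, payload: dict, api_key: str, send) -> dict:
    """Rough stand-in for _make_api_request: POST JSON with the API key header.

    `send` is an injected async callable (url, headers, body) -> response text.
    """
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,  # Vectara v2 authentication header (assumed)
    }
    body = json.dumps(payload)
    response_text = await send(url, headers, body)
    return json.loads(response_text)

# Exercise the sketch with a stubbed transport instead of a real HTTP client.
async def fake_send(url, headers, body):
    assert url.endswith("/query") and headers["x-api-key"] == "test-key"
    return json.dumps({"search_results": []})

result = asyncio.run(make_api_request_sketch(
    f"{VECTARA_BASE_URL}/query", {"query": "hi"}, "test-key", fake_send))
print(result)
```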
