
Teradata MCP Server

Official MCP server by Teradata

rag_executeWorkflow_ivsm

Execute a RAG workflow that processes a user question by generating a context-aware answer from document embeddings using IVSM functions. Performs semantic search over chunk embeddings and returns the top-k most relevant results.

Instructions

Execute the complete RAG workflow to answer a user question based on document context.

This function handles the entire RAG pipeline using IVSM functions:

  1. Configuration setup (using configurable values from rag_config.yml)
  2. Store user query (with /rag prefix stripping)
  3. Tokenize query using ivsm.tokenizer_encode
  4. Create embedding view using ivsm.IVSM_score
  5. Convert embeddings to vector columns using ivsm.vector_to_columns
  6. Perform semantic search against chunk embeddings

The function uses configuration values from rag_config.yml with fallback defaults.
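As a rough sketch of how such a pipeline can be orchestrated from Python, the code below loads rag_config.yml with fallback defaults, strips the /rag prefix, stores the query, and builds the query-embedding view by nesting the three ivsm table operators through the teradatasql driver. The table names, model identifier, column names, and every USING parameter shown are illustrative assumptions, not the server's actual configuration or SQL.

# Illustrative sketch only: table names, model ids, and the ivsm USING
# parameters below are assumptions, not the server's actual implementation.
import yaml
import teradatasql

FALLBACKS = {
    "query_table": "rag_user_query",          # assumed staging table for the question
    "tokenizer_table": "embedding_tokenizers",
    "model_table": "embedding_models",
    "model_id": "bge-small-en-v1.5",
    "top_k": 10,
}

def load_config(path: str = "rag_config.yml") -> dict:
    """Step 1: read rag_config.yml, falling back to defaults for missing keys."""
    try:
        with open(path) as fh:
            user_cfg = yaml.safe_load(fh) or {}
    except FileNotFoundError:
        user_cfg = {}
    return {**FALLBACKS, **user_cfg}

def build_query_embedding_view(question: str) -> None:
    cfg = load_config()
    question = question.removeprefix("/rag").strip()   # step 2: strip the /rag prefix

    with teradatasql.connect(host="<host>", user="<user>", password="<password>") as con:
        with con.cursor() as cur:
            # Step 2 (cont.): store the cleaned question for the table operators to read.
            cur.execute(f"DELETE FROM {cfg['query_table']}")
            cur.execute(
                f"INSERT INTO {cfg['query_table']} (id, txt) VALUES (1, ?)", [question]
            )
            # Steps 3-5: tokenize, embed with the ONNX model, and expand the
            # embedding into one column per dimension (emb_0, emb_1, ...).
            cur.execute(f"""
            REPLACE VIEW rag_query_embedding_v AS
            SELECT * FROM ivsm.vector_to_columns(
                ON (SELECT * FROM ivsm.IVSM_score(
                        ON (SELECT * FROM ivsm.tokenizer_encode(
                                ON (SELECT id, txt FROM {cfg['query_table']})
                                ON (SELECT model AS tokenizer FROM {cfg['tokenizer_table']}
                                    WHERE model_id = '{cfg['model_id']}') DIMENSION
                                USING ColumnsToPreserve('id', 'txt')
                                      OutputFields('IDS', 'ATTENTION_MASK')) AS tok)
                        ON (SELECT * FROM {cfg['model_table']}
                            WHERE model_id = '{cfg['model_id']}') DIMENSION
                        USING ColumnsToPreserve('id', 'txt')
                              ModelType('ONNX')
                              BinaryInputFields('IDS', 'ATTENTION_MASK')
                              BinaryOutputFields('sentence_embedding')) AS emb)
                USING ColumnsToPreserve('id', 'txt')
                      VectorDataType('FLOAT32')
                      OutputColumnPrefix('emb_')
                      InputColumnName('sentence_embedding')) AS v
            """)
            # Step 6 would then compare rag_query_embedding_v against the
            # chunk-embedding table (e.g. a cosine-distance top-k query) and
            # return the k best chunks with their metadata.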

Arguments:
question - the user question to process
k - the number of top-k results to return (optional; uses the config default if not provided)

Returns: the top-k most relevant chunks, with metadata, for context-grounded answer generation.

Input Schema

Name       Required   Description                          Default
k          No         Number of top-k results to return    null
question   Yes        User question to process

Input Schema (JSON Schema)

{ "properties": { "k": { "default": null, "title": "K", "type": "integer" }, "question": { "title": "Question", "type": "string" } }, "required": [ "question" ], "title": "handle_rag_executeWorkflow_ivsmArguments", "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Teradata/teradata-mcp-server'
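The same request from Python, assuming the requests package is installed:

import requests

# Fetch this server's directory entry from the Glama MCP API.
resp = requests.get("https://glama.ai/api/mcp/v1/servers/Teradata/teradata-mcp-server")
resp.raise_for_status()
print(resp.json())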

If you have feedback or need assistance with the MCP directory API, please join our Discord server.