# Teradata MCP Server

## Server Configuration

Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| DATABASE_URI | Yes | Teradata database connection URI in the format: teradata://username:password@host:1025/username | |
| MCP_TRANSPORT | No | Transport protocol for MCP communication | |
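Because `DATABASE_URI` follows a standard URL shape, it can be validated before the server is launched. A minimal sketch using Python's `urllib.parse` — the host, credentials, and database name below are placeholders, not values from this server:

```python
from urllib.parse import urlparse

def parse_database_uri(uri: str) -> dict:
    """Split a teradata:// connection URI into its components."""
    parts = urlparse(uri)
    if parts.scheme != "teradata":
        raise ValueError(f"expected teradata:// scheme, got {parts.scheme!r}")
    return {
        "username": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port or 1025,          # Teradata's conventional port
        "database": parts.path.lstrip("/"),  # path component names the default database
    }

# Example with placeholder credentials:
cfg = parse_database_uri("teradata://alice:secret@td-host.example.com:1025/alice")
```

Note that, per the format above, the default database conventionally matches the username.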
## Capabilities

Features and capabilities supported by this server.

| Capability | Details |
|---|---|
| tools | `{"listChanged": true}` |
| logging | `{}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| extensions | `{"io.modelcontextprotocol/ui": {}}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| graph_analyseDatabase | Composite graph analysis — runs findRootObjects, connectedComponents, detectCycles, and bfsLevels in a single MCP call with ONE shared edge fetch. This tool eliminates the scalability bottleneck of serial MCP round-trips by combining four graph analyses that would otherwise require four separate tool calls, each independently fetching the same edge set from Teradata. Performance vs individual tools:
Use this for:
Arguments: container_pattern - str: CSV LIKE patterns for container scope. Supports wildcards (%) and CSV format. Examples: '%SALES%', '%SALES%,%FINANCE%', 'PROD_%' exclude_objects - str: CSV LIKE patterns to exclude. Default: '' (no exclusions) top_n_roots - int: Number of top root objects (by downstream dependent count) to include in BFS wave analysis. Default: 4 max_depth_down - int: Maximum downstream BFS hops from roots. Default: 10 max_depth_up - int: Maximum upstream BFS hops from roots. 0 = skip upstream analysis. Default: 0 edge_repository - str: Edge repository view/table conforming to the Graph Edge Contract (Src_Container_Name, Src_Object_Name, Src_Kind, Tgt_Container_Name, Tgt_Object_Name, Tgt_Kind columns). Call graph_edgeContractDDL to generate one. Required parameter — no default. Returns: ResponseType: single response containing all four analyses: { "root_objects": { "objects": [...], "summary": {...} }, "components": { "node_details": [...], "summaries": [...], "stats": [...] }, "cycles": { "details": [...], "summaries": [...], "stats": [...] }, "bfs_waves": { "nodes": [...], "cycle_candidates": [...], "summary": {...} }, "edge_stats": { "total_edges": N, "fetch_time_ms": N } } Example calls: Full analysis of Sales and Finance databases: handle_graph_analyseDatabase( conn=connection, container_pattern="%SALES%,%FINANCE%", edge_repository="MY_LINEAGE_DB.EdgeRepository" ) Single database family with top 8 roots: handle_graph_analyseDatabase( conn=connection, container_pattern="%FINANCE%", top_n_roots=8, edge_repository="MY_LINEAGE_DB.EdgeRepository" ) Exclude sandbox schemas: handle_graph_analyseDatabase( conn=connection, container_pattern="PROD_%,STAGE_%", exclude_objects="SANDBOX%,%.temp_%", edge_repository="MY_LINEAGE_DB.EdgeRepository" ) |
| graph_bfsLevels | Compute BFS shortest-path hop distances from one or more root nodes. Pure-Python implementation — no stored procedure required. WHEN TO USE THIS TOOL vs graph_traceLineage: Use graph_bfsLevels when asked to:
Do NOT use graph_bfsLevels for general lineage tracing, impact path analysis, or questions about which specific objects depend on which. Use graph_traceLineage for those — it returns the full edge set with relationship detail. graph_bfsLevels returns distances and wave groupings, not dependency paths or edge detail. KEY DISTINCTION — root_node_list accepts EXACT FQ names only (no wildcards). Use graph_findRootObjects first to identify the seed objects, then pass their exact FQ names here. Arguments: root_node_list - str: CSV of exact fully-qualified root node names. No wildcards — exact names only. max_depth_up - int: Maximum upstream hops to traverse. 0 = skip upstream analysis entirely. Default: 10 max_depth_down - int: Maximum downstream hops to traverse. 0 = skip downstream analysis entirely. Default: 10 exclude_objects - str: CSV of FQ object name LIKE patterns to exclude. Matched against both Src and Tgt sides of every edge. Python fnmatch is used for pattern matching (% → *). Example: 'DFJ%,C_D02%,%.temp_%' Default: '' (no exclusions) include_containers - str: CSV of container name LIKE patterns to include. Only edges where BOTH Src and Tgt containers match at least one pattern are traversed. Python fnmatch used for matching (% → *). Empty = all containers included. Example: 'DEV01_StGeo%,MF_STGEO%,TABLEAU%,POWERBI%' Default: '' (all containers) edge_repository - str: Edge repository view/table conforming to the Graph Edge Contract. Required parameter — no default. Returns: ResponseType: formatted response with BFS node results + metadata. Schema is identical to handle_graph_bfsLevels (SP-based tool).
Response structure: { "nodes": [ { "node": "DEV01_StGeo_STD_T.mortgage_account", "container_name": "DEV01_StGeo_STD_T", "object_name": "mortgage_account", "object_kind": "Table", "upstream_level": None, // None (NULL) if unreachable or skipped "downstream_level": 0, // 0 for root, positive for consumers "nearest_root": "DEV01_StGeo_STD_T.mortgage_account", "direction": "ROOT", // ROOT / U / D / BOTH "is_root": "Y" }, ... ], "cycle_candidates": [...], // direction='BOTH' nodes with unequal // absolute upstream/downstream levels "summary": { "total_nodes": 46, "root_nodes": 3, "upstream_only": 12, "downstream_only": 28, "both_directions": 3, "cycle_candidates": 1, "max_upstream_depth": 4, "max_downstream_depth": 5, "nodes_per_nearest_root": {"DB.Root1": 20, "DB.Root2": 26}, "object_kind_counts": {"Table": 10, "View": 22, "Macro": 8, ...} } } direction values: ROOT - One of the input root nodes U - Reachable upstream only (negative upstream_level) D - Reachable downstream only (positive downstream_level) BOTH - Reachable in both directions — possible cycle member. Unequal absolute levels indicate a back-edge (cycle). Equal absolute levels indicate a shared dependency. Technical Implementation Notes:
|
| graph_connectedComponents | Identify all Weakly Connected Components (WCC) in the dependency graph. Pure-Python implementation — no stored procedure required. Issues a single SQL SELECT to fetch the scoped edge set, then performs Union-Find WCC partitioning entirely in the MCP server process. A connected component is a maximal set of nodes where every node can reach every other node when edge direction is ignored. This partitions the graph into isolated sub-graphs. Use this tool for:
Arguments: container_pattern - str: CSV LIKE patterns for container scope. Supports wildcards (%) and CSV format. Examples: '%WBC%', '%WBC%,%StGeo%', 'DEV01_%,DEV02_%' exclude_objects - str: CSV LIKE patterns to exclude. Matches against container name (or DB.Object if the pattern contains a dot). Default: '' (no exclusions) edge_repository - str: Edge repository view/table conforming to the Graph Edge Contract (Src_Container_Name, Src_Object_Name, Src_Kind, Tgt_Container_Name, Tgt_Object_Name, Tgt_Kind columns). For AI-Native Data Products use: '{ProductName}_Semantic.lineage_graph' Call graph_edgeContractDDL to generate a new one. Required — no default. Returns: ResponseType: formatted response with connected component results. Response structure: { "node_details": [...], // One row per node with Component_Id "component_summaries": [...], // One row per component "summary_stats": [...] // Single aggregate row } node_details row fields: Node_FQ, DatabaseName, ObjectName, Component_Id, Object_Kind component_summaries row fields: Component_Id, Node_Count, Node_List summary_stats row fields: Component_Count, Node_Count, Edge_Count, Largest_Component, Smallest_Component, Singleton_Count, Summary_Message |
| graph_detectCycles | Detect circular dependencies (cycles) in the dependency graph. Pure-Python implementation — no stored procedure required. Issues a single SQL SELECT to fetch the scoped edge set, then performs WCC partitioning followed by iterative DFS cycle detection entirely in the MCP server process. Use this tool for:
Arguments: container_pattern - str: CSV LIKE patterns for container scope. Supports wildcards (%) and CSV format. Examples: 'DFJ%' — single database family '%WBC%,%StGeo%' — multiple families 'DEV01_%,DEV02_%' — multiple prefixes exclude_objects - str: CSV LIKE patterns to exclude from the scan. Matches against container name (or DB.Object if the pattern contains a dot). Default: '' (no exclusions) edge_repository - str: Edge repository view/table conforming to the Graph Edge Contract (Src_Container_Name, Src_Object_Name, Src_Kind, Tgt_Container_Name, Tgt_Object_Name, Tgt_Kind columns). For AI-Native Data Products use: '{ProductName}_Semantic.lineage_graph' Call graph_edgeContractDDL to generate a new one. Required — no default. Returns: ResponseType: formatted response with cycle detection results. Response structure: { "cycle_details": [...], // One row per node per cycle "cycle_summaries": [...], // One row per cycle with path string "summary_stats": [...] // Single aggregate row } cycle_details row fields: Cycle_Id, Cycle_Pos, Node_FQ, Cycle_Length, Component_Id cycle_summaries row fields: Cycle_Id, Cycle_Length, Component_Id, Cycle_Path summary_stats row fields: Cycle_Count, Total_Nodes_In_Cycles, Components_With_Cycles, Edge_Count, Components_Scanned, Summary_Message |
| graph_edgeContractDDL | Generate DDL for a Graph Edge Contract-conforming table or view. This tool does NOT require a database connection — it generates DDL text from templates. No SQL is executed. The conn parameter is accepted for ModuleLoader calling convention compatibility but is not used. Required columns in the generated schema (6): Src_Container_Name, Src_Object_Name, Src_Kind, Tgt_Container_Name, Tgt_Object_Name, Tgt_Kind Optional enrichment columns (2): Edge_Relationship — nature of the edge (ETL_INPUT, ETL_OUTPUT, DIRECT…) Transformation_Type — process category (ETL, FEATURE_ENG, AGGREGATION…) These are ignored by graph analysis tools but useful for visualisation. AI-Native Data Product shortcut: If you are working within an AI-Native Data Product, the view {ProductName}_Semantic.lineage_graph (Observability Module v1.5) already conforms to this contract. You do not need to generate DDL — pass that view's fully-qualified name directly as edge_repository on any graph_* tool. Example: edge_repository='StGeoMortgage_Semantic.lineage_graph' Arguments: conn: TeradataConnection (unused — accepted for ModuleLoader compatibility). target_database: Database in which to create the edge repository. For AI-Native Data Products this is typically {ProductName}_Semantic. Example: 'StGeoMortgage_Semantic' object_name: Name for the edge table/view. Default: 'EdgeRepository' output_type: 'TABLE' or 'VIEW'. TABLE: generates CREATE TABLE DDL + separate sample DML. Includes all 6 required + 2 optional columns. VIEW: generates a CREATE VIEW template for mapping an existing lineage source to all 8 contract columns. Default: 'TABLE' Returns: list[dict]: Response payload containing: - ddl: DDL script (CREATE TABLE/VIEW + COMMENTs) - sample_dml: Sample INSERT statements + validation query (TABLE only; absent for VIEW) - output_type: 'TABLE' or 'VIEW' - contract_version: Contract version string |
| graph_findRootObjects | Find root objects (objects with no upstream dependencies) in specified containers. Root objects are ideal starting points for downstream impact analysis as they represent the foundational data sources that nothing else depends upon. Use this for:
Arguments: container_pattern - str: Database/schema pattern(s) to search. SUPPORTS WILDCARDS (%) and CSV. exclude_objects - str: Comma-separated list of patterns to exclude (SERVER-SIDE filter). Matches against DatabaseName.ObjectName format. edge_repository - str: Edge repository table/view conforming to the Graph Edge Contract. Required parameter — no default. object_types - str: Comma-separated list of object types to include (optional filter). Examples: 'T' (tables), 'V' (views), 'P' (procedures), 'M' (macros) Multiple: 'T,V' (tables and views only) Empty = all object types included Default: '' (all types) return_format - str: Output format: 'detailed' or 'summary' 'detailed' (default): Full object list with metadata 'summary': High-level statistics and counts only Default: 'detailed' Returns: ResponseType: formatted response with root objects + metadata Example queries that trigger this tool:
Example calls: Find root objects in WBC and StGeo databases: handle_graph_findRootObjects( conn=connection, container_pattern="%WBC%,%StGeo%" ) Find only root tables (no views/procedures): handle_graph_findRootObjects( conn=connection, container_pattern="DEV01_%", object_types="T" ) Find root objects excluding production and temporary objects: handle_graph_findRootObjects( conn=connection, container_pattern="%WBC%,%StGeo%", exclude_objects="PRD_%,%.temp_%,%.bak_%" ) Quick summary of root objects: handle_graph_findRootObjects( conn=connection, container_pattern="DEV01_StGeo_STD_T", return_format="summary" ) Technical Implementation:
|
| graph_traceLineage | Analyse object dependencies in Teradata. Supports wildcards (%) and CSV patterns. Hybrid implementation — no stored procedure required. Python constructs Teradata recursive CTEs that execute entirely server-side. Only the reachable subgraph crosses the network — not the full edge table. Examples: 'DB.Table' (single), '%WBC%.%' (wildcard), 'DB.T1,DB.T2' (CSV) Finds upstream dependencies (what the object depends on) and downstream dependents (what depends on the object). Returns nodes and edges representing the dependency subgraph. When multiple patterns are provided via CSV, one upstream CTE and one downstream CTE is executed per pattern. Results are merged and deduplicated by Python before assembly. Use this for:
Arguments: object_name - str: Object name pattern(s). Supports wildcards (%) and CSV format. STRING type — not an array. max_depth_up - int: Maximum levels to traverse upstream (0-10). 0 = no upstream analysis. Default: 3 max_depth_down - int: Maximum levels to traverse downstream (0-10). 0 = no downstream analysis. Default: 3 exclude_objects - str: CSV LIKE patterns to exclude. Matches against DB.Object format. Example: 'PRD_%,%.temp_%' Default: '' (no exclusions) include_containers - str: CSV of container LIKE patterns to include (whitelist). Empty = all containers. Default: '' (all containers) edge_repository - str: Edge repository view/table conforming to the Graph Edge Contract. Required parameter — no default. return_format - str: 'detailed' (default), 'summary', or 'edges_only' Returns: ResponseType: formatted response with dependency analysis results. detailed response structure: { "nodes": [...], // Unique nodes (deduplicated) "upstream_edges": [...], // One row per upstream edge "downstream_edges":[...], // One row per downstream edge "summary": {...} // Aggregate statistics } Edge row fields: DependentObjectDBName, DependentObjectName, FQDependentObjectName, ReferencedObjectDBName, ReferencedObjectName, FQReferencedObjectName, Src_Kind, Tgt_Kind, Depth, DependencyPath |
| plot_line_chart | Function to generate a line plot for labels and columns. Columns mentioned in labels are used for the x-axis and the remaining columns for the y-axis. PARAMETERS: table_name: Required Argument. Specifies the name of the table used to generate the line plot. Types: str RETURNS: dict |
| plot_pie_chart | Function to generate a pie chart plot for labels and columns. Columns mentioned in labels are used as labels and the remaining column is used to plot. PARAMETERS: table_name: Required Argument. Specifies the name of the table used to generate the pie chart. Types: str RETURNS: dict |
| plot_polar_chart | Function to generate a polar area plot for labels and columns. Columns mentioned in labels are used as labels and the remaining column is used to plot. PARAMETERS: table_name: Required Argument. Specifies the name of the table used to generate the polar area plot. Types: str RETURNS: dict |
| plot_radar_chart | Function to generate a radar plot for labels and columns. Columns mentioned in labels are used as labels and the remaining column is used to plot. PARAMETERS: table_name: Required Argument. Specifies the name of the table used to generate the radar plot. Types: str RETURNS: dict |
| rag_Execute_Workflow | Execute complete RAG workflow to answer user questions based on document context. This tool handles the entire RAG pipeline in a single step when a user query is tagged with /rag. WORKFLOW STEPS (executed automatically):
CONFIGURATION VALUES (from rag_config.yml):
TECHNICAL DETAILS:
CRITICAL ANSWERING RULES:
LANGUAGE RESTRICTIONS:
BEHAVIORAL REQUIREMENTS:
EXECUTION: Run completely silently - user only sees their question and the final context-grounded answer. |
| sql_Analyze_Cluster_Stats | ANALYZE SQL QUERY CLUSTER PERFORMANCE STATISTICS This tool analyzes pre-computed cluster statistics to identify optimization opportunities without re-running the clustering pipeline. Perfect for iterative analysis and decision-making on which query clusters to focus optimization efforts. ANALYSIS CAPABILITIES:
AVAILABLE SORTING METRICS:
PERFORMANCE CATEGORIZATION: Automatically categorizes clusters using configurable thresholds (from sql_opt_config.yml):
TYPICAL ANALYSIS WORKFLOW:
OPTIMIZATION DECISION FRAMEWORK:
OUTPUT FORMAT: Returns detailed cluster statistics with performance rankings, categories, and metadata for LLM analysis and optimization recommendations. |
| sql_Execute_Full_Pipeline | COMPLETE SQL QUERY CLUSTERING PIPELINE FOR HIGH-USAGE QUERY OPTIMIZATION This tool executes the entire SQL query clustering workflow to identify and analyze high CPU usage queries for optimization opportunities. It's designed for database performance analysts and DBAs who need to systematically identify query optimization candidates. FULL PIPELINE WORKFLOW:
PERFORMANCE METRICS EXPLAINED:
CONFIGURATION (from sql_opt_config.yml):
OPTIMIZATION WORKFLOW: After running this tool, use:
USE CASES:
PREREQUISITES:
|
| sql_Retrieve_Cluster_Queries | RETRIEVE ACTUAL SQL QUERIES FROM SPECIFIC CLUSTERS FOR PATTERN ANALYSIS This tool extracts the actual SQL query text and performance metrics from selected clusters, enabling detailed pattern analysis and specific optimization recommendations. Essential for moving from cluster-level analysis to actual query optimization. DETAILED ANALYSIS CAPABILITIES:
QUERY SELECTION STRATEGIES:
AVAILABLE METRICS FOR SORTING:
AUTOMATIC PERFORMANCE CATEGORIZATION: Each query is categorized using configurable thresholds (from sql_opt_config.yml):
Uses thresholds set in the config file: CPU (high, very_high); Skew (moderate, high, severe). TYPICAL OPTIMIZATION WORKFLOW:
QUERY LIMIT STRATEGY:
OUTPUT INCLUDES:
|
| base_columnMetadata | Retrieves detailed column metadata for Teradata tables, views, and functions. Returns data types, character sets, case specificity, precision, scale, and format strings for each column. Resolution paths: Tables (T, O, Q) — DBC.ColumnsVX + DBC.IndicesVX. No HELP COLUMN. Views (V) — HELP COLUMN with derived-table wrapper, the only reliable mechanism for resolving view column types. Uses the native TeradataConnection cursor pattern, consistent with all other tools in this module. Use this tool instead of base_columnDescription when you need:
LARGE-SCALE USAGE GUIDANCE:When retrieving metadata for many objects (e.g. all views in DBC), both the response payload and the execution time can exceed limits. Use these strategies to control both:
CONTINUATION PATTERN (automatic pagination): # Call 1 — starts processing, time or payload budget fills up result1 = base_columnMetadata(db_name='DBC', table_kind='V', ...) # metadata contains: remaining_objects='ViewX,ViewY,...' Typical call for a large database: base_columnMetadata( db_name='DBC', table_kind='V', exclude_objects='ResUsage%,%ResUsage%', fields='ColumnName,ColumnType,ColumnLength,CharType, UpperCase,Nullable,Indexed?,Primary?,Unique?', max_workers=16, max_payload_kb=900, max_execution_seconds=180 ) Arguments:
conn - TeradataConnection (injected by MCP server)
db_name - Name of the Teradata database to inspect
object_name - Optional: specific object name, or a CSV of names. Also used for continuation: pass the remaining_objects value from the previous call's metadata to resume.
Returns: MCP-compliant response via create_response() containing a list of column metadata records with normalised keys and four computed string fields per column: |
| base_readQuery | Execute a SQL query via SQLAlchemy, bind parameters if provided (prepared SQL), and return the fully rendered SQL (with literals) in metadata. Arguments: sql - SQL text, with optional bind-parameter placeholders persist - Set to True to persist the results as a table and reuse it later. Recommended for large result sets. Returns: ResponseType: formatted response with query results + metadata (includes 'volatile_table' field in metadata if persist=True) |
| base_saveDDL | Extracts the complete DDL of a Teradata object and saves it to a .sql file. This tool solves the token limit problem by executing the extraction and file save operation directly on the server side, without needing to pass large DDL content through the response. Arguments: database_name - Database name (e.g., 'MKTG_USR') object_name - Object name (e.g., 'SP_LOAD_VARIABLES_ARGUMENTARIO_IAG_FICHA_CLIENTE') object_type - Type of object: 'PROCEDURE', 'TABLE', 'VIEW' (default: 'PROCEDURE') output_dir - Directory where to save the DDL file (default: './ddls_extracted') Returns: ResponseType: formatted response with file path, size, and metadata |
| sec_userDbPermissions | Get permissions for a user across all databases. Arguments: user_name - User name to analyze. persist - If True, materializes result as a volatile table and returns table name |
| sec_rolePermissions | Get permissions for a role. Arguments: role_name - Role name to analyze. persist - If True, materializes result as a volatile table and returns table name |
| sec_userRoles | Get roles assigned to a user. Arguments: user_name - User name to analyze. persist - If True, materializes result as a volatile table and returns table name |
| qlty_missingValues | Get the column names that have missing values in a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze persist - If True, materializes result as a volatile table and returns table name |
| qlty_negativeValues | Get the column names that have negative values in a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze persist - If True, materializes result as a volatile table and returns table name |
| qlty_distinctCategories | Get the distinct categories from a column in a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze column_name - Column name to analyze persist - If True, materializes result as a volatile table and returns table name |
| qlty_standardDeviation | Get the mean and standard deviation for a column in a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze column_name - Column name to analyze persist - If True, materializes result as a volatile table and returns table name |
| qlty_columnSummary | Get column summary statistics for all columns in a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze persist - If True, materializes result as a volatile table and returns table name |
| qlty_univariateStatistics | Get full univariate statistics for a column in a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze column_name - Column name to analyze persist - If True, materializes result as a volatile table and returns table name |
| qlty_rowsWithMissingValues | Get the rows that have missing values in a specific column of a table. Arguments: database_name - Name of the database (optional, omit if table_name is fully qualified) table_name - Table name to analyze column_name - Column name to analyze for missing values persist - If True, materializes result as a volatile table and returns table name |
| base_databaseList | Lists databases in the Teradata System. Arguments: scope - Filter scope: 'user' returns only user-created databases (excludes system databases), 'all' returns every database. persist - If True, materializes result as a volatile table and returns table name |
| base_tableList | Lists all tables and views in a database. Arguments: database_name - Database name. Leave empty for all databases. persist - If True, materializes result as a volatile table and returns table name |
| base_tableDDL | Displays the DDL definition of a table. Arguments: table_name - Table name database_name - Database name persist - If True, materializes result as a volatile table and returns table name |
| base_columnDescription | Shows detailed column information about a database table or view. Arguments: database_name - Database name. Defaults to '%' (all databases). obj_name - Table or view name. Defaults to '%' (all tables). persist - If True, materializes result as a volatile table and returns table name |
| base_tablePreview | Returns a data sample (top 5 rows) and inferred structure from a database table or view. Arguments: table_name - Table or view name database_name - Database name persist - If True, materializes result as a volatile table and returns table name |
| base_tableAffinity | Get tables commonly used together by database users, helpful to infer relationships between tables. Arguments: database_name - Database name table_name - Table or view name persist - If True, materializes result as a volatile table and returns table name |
| base_tableUsage | Measure the usage of tables and views by users in a schema, helpful to infer what database objects are most actively used. Arguments: database_name - Database name. Leave empty for all databases. persist - If True, materializes result as a volatile table and returns table name |
| dba_tableSpace | Get table space used across all tables, or filtered by database and/or table name. Arguments: database_name - Database name filter. Leave empty for all databases. table_name - Table name filter. Leave empty for all tables. top_n - Limit results to top N largest tables by space. Set to 0 for no limit (default: 0). exclude_system - Exclude system databases and tables. Set to 'Y' to exclude, 'N' to include all (default: 'N'). persist - If True, materializes result as a volatile table and returns table name |
| dba_tableSqlList | Get a list of SQL run against a table in the last number of days. Arguments: table_name - Table name to search for no_days - Number of days to look back persist - If True, materializes result as a volatile table and returns table name |
| dba_userSqlList | Get SQL run by a user in the last number of days. Leave user_name empty for all users. Arguments: user_name - User name filter. Leave empty or omit for all users. no_days - Number of days to look back persist - If True, materializes result as a volatile table and returns table name |
| dba_databaseSpace | Get database space allocation for a specific database or all databases. Arguments: database_name - Database name. Leave empty or omit for all databases. persist - If True, materializes result as a volatile table and returns table name |
| dba_tableUsageImpact | Measure the usage of tables and views by users to understand what users and tables are driving most resource usage. Arguments: database_name - Database name to analyze. Leave empty for all databases. user_name - User name to analyze. Leave empty for all users. persist - If True, materializes result as a volatile table and returns table name |
| dba_resusageSummary | Get summary of resource usage (CPU, IO, Memory) for a specified date range, broken down by specified dimensions. Arguments: user_name - User name to filter by. Leave empty for all users. LogDate - Log date to filter by in YYYY-MM-DD format. Leave empty for all dates. dayOfWeek - Day of week to filter by (1=Sunday, 2=Monday, ..., 7=Saturday). Leave empty for all days. hourOfDay - Hour of day to filter by (0-23). Leave empty for all hours. workloadType - Workload type to filter by (e.g., 'Batch', 'Interactive'). Leave empty for all workload types. workloadComplexity - Workload complexity to filter by (e.g., 'Simple', 'Medium', 'Complex'). Leave empty for all complexity levels. AppID - Application ID to filter by. Leave empty for all applications. no_days - Number of days to look back from today (e.g., 7, 30, 90). persist - If True, materializes result as a volatile table and returns table name |
| dba_databaseVersion | Get Teradata database version information. Arguments: persist - If True, materializes result as a volatile table and returns table name |
| dba_flowControl | Get the Teradata flow control metrics for a specified date range. Arguments: start_date - The start date for the query range in YYYY-MM-DD format. end_date - The end date for the query range in YYYY-MM-DD format. persist - If True, materializes result as a volatile table and returns table name |
| dba_featureUsage | Get the user feature usage metrics for a specified date range. Arguments: start_date - The start date for the query range in YYYY-MM-DD format. end_date - The end date for the query range in YYYY-MM-DD format. persist - If True, materializes result as a volatile table and returns table name |
| dba_userDelay | Get the Teradata user delay metrics for a specified date range. Arguments: start_date - The start date for the query range in YYYY-MM-DD format. end_date - The end date for the query range in YYYY-MM-DD format. persist - If True, materializes result as a volatile table and returns table name |
| dba_sessionInfo | Get the Teradata session information for a user. Arguments: user_name - User name to analyze. Use '*' to get all users. persist - If True, materializes result as a volatile table and returns table name |
| dba_systemSpace | Get the Teradata total system database space usage. Arguments: persist - If True, materializes result as a volatile table and returns table name |
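The pure-Python graph tools above (graph_findRootObjects, graph_bfsLevels, graph_connectedComponents) all reduce to standard algorithms over a fetched edge list. A minimal sketch of those techniques, using a hypothetical in-memory edge list in place of a Teradata edge repository (object names below are made up for illustration):

```python
from collections import defaultdict, deque

# Hypothetical edge list: (source FQ name, target FQ name); the source feeds the target.
edges = [
    ("STG.raw_orders", "CORE.orders"),
    ("CORE.orders", "MART.order_facts"),
    ("STG.raw_items", "CORE.items"),
    ("CORE.items", "MART.order_facts"),
    ("SANDBOX.scratch", "SANDBOX.scratch_v"),
]

downstream = defaultdict(list)   # node -> direct consumers
has_upstream = set()             # nodes that depend on something
nodes = set()
for src, tgt in edges:
    downstream[src].append(tgt)
    has_upstream.add(tgt)
    nodes.update((src, tgt))

# Root objects: nodes with no upstream dependencies (cf. graph_findRootObjects).
roots = sorted(nodes - has_upstream)

# BFS shortest-path hop distances from the roots, downstream direction
# (cf. graph_bfsLevels): first visit in BFS order gives the shortest hop count.
level = {r: 0 for r in roots}
queue = deque(roots)
while queue:
    n = queue.popleft()
    for nxt in downstream[n]:
        if nxt not in level:
            level[nxt] = level[n] + 1
            queue.append(nxt)

# Weakly connected components via union-find, ignoring edge direction
# (cf. graph_connectedComponents).
parent = {n: n for n in nodes}
def find(n):
    while parent[n] != n:
        parent[n] = parent[parent[n]]   # path halving keeps trees shallow
        n = parent[n]
    return n
for src, tgt in edges:
    parent[find(src)] = find(tgt)
components = defaultdict(set)
for n in nodes:
    components[find(n)].add(n)
```

On this toy graph the roots are the two staging tables and the sandbox table, MART.order_facts sits two hops downstream, and the sandbox pair forms its own isolated component, which is exactly the kind of partitioning the composite graph_analyseDatabase call reports in one pass.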
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| tdvs_tools_prompt | Main prompt for Teradata Enterprise Vector Store operations. |
| tdvs_rag_prompt | Prompt for Teradata Enterprise Vector Store and RAG (Retrieval Augmented Generation) based operations. |
| qlty_databaseQuality | Database data quality assessment. (prompt_type: reporting) |
| _testMyServer | Test all the MCP tools, prompts and resources. (prompt_type: reporting) |
| base_query | Help users interact with Teradata databases effectively. |
| base_tableBusinessDesc | You are a Teradata DBA who is an expert in describing the business use of tables in a database. |
| base_databaseBusinessDesc | You are a Teradata DBA who is an expert in describing the business use of databases. |
| dba_tableArchive | The following prompt is used to guide the Teradata DBA in finding opportunities for archiving data. (prompt_type: reporting) |
| dba_databaseLineage | You are a Teradata DBA who is an expert in finding the lineage of tables in a database. (prompt_type: context) |
| dba_tableDropImpact | You are a Teradata DBA who is an expert in finding the impact of dropping a table. (prompt_type: reporting) |
| dba_databaseHealthAssessment | You are a Teradata DBA who is an expert in assessing the health of a database. (prompt_type: reporting) |
| dba_userActivityAnalysis | Analyze Teradata user activity patterns for the past 7 days (prompt_type: reporting) |
| dba_systemVoice | Analyze Teradata user activity patterns for the past 7 days (prompt_type: reporting) |
| chat_ai_mapreduce | Multi-step workflow to answer a high-level question using Teradata SQL and the chat_aggregatedCompleteChat tool. The agent first builds a Teradata query, then runs aggregated chat completion, and finally synthesizes a global answer. |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| get_graph_edge_contract | Return the Graph Edge Contract schema definition. |
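The Graph Edge Contract resource above pins down the six required columns (plus two optional enrichment columns) that every edge repository must expose. A sketch of rendering conforming CREATE TABLE DDL in the spirit of graph_edgeContractDDL — the VARCHAR sizes and the MY_DB target database are assumptions for illustration, not values the server emits:

```python
REQUIRED_COLUMNS = [
    "Src_Container_Name", "Src_Object_Name", "Src_Kind",
    "Tgt_Container_Name", "Tgt_Object_Name", "Tgt_Kind",
]
OPTIONAL_COLUMNS = ["Edge_Relationship", "Transformation_Type"]

def edge_contract_ddl(database: str, object_name: str = "EdgeRepository") -> str:
    """Render CREATE TABLE DDL containing all 8 contract columns."""
    cols = ",\n".join(
        f"    {col} VARCHAR(128)"          # column sizes are assumed, not contractual
        for col in REQUIRED_COLUMNS + OPTIONAL_COLUMNS
    )
    return f"CREATE TABLE {database}.{object_name} (\n{cols}\n);"

ddl = edge_contract_ddl("MY_DB")
```

Any view exposing these column names, such as an AI-Native Data Product's {ProductName}_Semantic.lineage_graph, satisfies the same contract and can be passed directly as edge_repository.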