
Teradata MCP Server

Official
by Teradata

graph_analyseDatabase

Run four graph analyses (findRootObjects, connectedComponents, detectCycles, and bfsLevels) in one call with a shared edge fetch, avoiding multiple round-trips during database migration and dependency analysis.

Instructions

Composite graph analysis — runs findRootObjects, connectedComponents, detectCycles, and bfsLevels in a single MCP call with ONE shared edge fetch.

This tool eliminates the scalability bottleneck of serial MCP round-trips by combining four graph analyses that would otherwise require four separate tool calls, each independently fetching the same edge set from Teradata.

Performance vs individual tools:

  • 1 SQL round-trip instead of 4 (shared edge fetch)

  • 1 MCP response instead of 4 (eliminates stdio serialisation overhead)

  • Same algorithmic complexity (O(V+E) BFS, O(α·N) Union-Find, O(V+E) DFS)

  • In-memory edge sharing: all analyses operate on the same Python list
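The shared-edge-fetch pattern above can be sketched in Python. This is a minimal illustration, not the server's actual implementation: the function names, the two-tuple edge shape, and the sample edges are all assumptions for the sketch.

```python
# Hypothetical sketch of the shared-edge-fetch pattern: fetch the edge
# list ONCE, then run every analysis over the same in-memory list.
from collections import defaultdict


def connected_components(edges):
    # Union-Find with path compression: near O(alpha*N) per operation.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for src, tgt in edges:
        parent[find(src)] = find(tgt)  # union the two roots

    comps = defaultdict(set)
    for node in parent:
        comps[find(node)].add(node)
    return list(comps.values())


def find_root_objects(edges):
    # Roots: nodes that appear as a source but never as a target.
    sources = {s for s, _ in edges}
    targets = {t for _, t in edges}
    return sources - targets


# Sample edge set, fetched once and shared by both analyses.
edges = [("A", "B"), ("B", "C"), ("D", "E")]
roots = find_root_objects(edges)
comps = connected_components(edges)
```

Both analyses walk the same `edges` list; nothing is re-fetched between them, which is the whole point of the composite call.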

Use this for:

  • Full database migration readiness assessment

  • Pre-migration cycle + root + wave analysis in one call

  • Dashboard data population (all four analyses needed simultaneously)

  • Any workflow that would otherwise call 3+ individual graph tools

Arguments:

container_pattern - str: CSV LIKE patterns for container scope. Supports wildcards (%) and CSV format. Examples: '%SALES%', '%SALES%,%FINANCE%', 'PROD_%'

                  CRITICAL: STRING type, not array.
                  CORRECT: container_pattern="%SALES%,%FINANCE%"
                  WRONG:   container_pattern=["%SALES%", "%FINANCE%"]
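Why the string form matters can be illustrated with a small sketch. The split behavior shown here is an assumption about how a CSV pattern string would plausibly be parsed, not the server's documented implementation.

```python
# Hypothetical sketch: a CSV pattern STRING is split into individual
# LIKE patterns; passing a Python list would bypass this parsing.
def split_patterns(container_pattern: str) -> list[str]:
    return [p.strip() for p in container_pattern.split(",") if p.strip()]


patterns = split_patterns("%SALES%,%FINANCE%")
```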

exclude_objects - str: CSV LIKE patterns to exclude. Default: '' (no exclusions)

top_n_roots - int: Number of top root objects (by downstream dependent count) to include in BFS wave analysis. Default: 4

max_depth_down - int: Maximum downstream BFS hops from roots. Default: 10

max_depth_up - int: Maximum upstream BFS hops from roots. 0 = skip upstream analysis. Default: 0

edge_repository - str: Edge repository view/table conforming to the Graph Edge Contract (Src_Container_Name, Src_Object_Name, Src_Kind, Tgt_Container_Name, Tgt_Object_Name, Tgt_Kind columns). Call graph_edgeContractDDL to generate one. Required parameter — no default.
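The six contract columns can be pictured as a row type. This is a sketch only: the real repository is a Teradata view or table generated via graph_edgeContractDDL, and the `Edge` class and sample values here are invented for illustration; only the six column names come from the contract above.

```python
from dataclasses import dataclass


# Illustrative row shape for the Graph Edge Contract (column names
# from the contract; class name and sample data are assumptions).
@dataclass(frozen=True)
class Edge:
    Src_Container_Name: str
    Src_Object_Name: str
    Src_Kind: str
    Tgt_Container_Name: str
    Tgt_Object_Name: str
    Tgt_Kind: str


row = Edge("SALES_DB", "Orders", "Table", "SALES_DB", "v_Orders", "View")
src_key = f"{row.Src_Container_Name}.{row.Src_Object_Name}"
tgt_key = f"{row.Tgt_Container_Name}.{row.Tgt_Object_Name}"
```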

Returns: ResponseType: single response containing all four analyses:

{
  "root_objects": { "objects": [...], "summary": {...} },
  "components":   { "node_details": [...], "summaries": [...], "stats": [...] },
  "cycles":       { "details": [...], "summaries": [...], "stats": [...] },
  "bfs_waves":    { "nodes": [...], "cycle_candidates": [...], "summary": {...} },
  "edge_stats":   { "total_edges": N, "fetch_time_ms": N }
}
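A consumer of the composite payload might unpack it like this. The response shown is a hypothetical example: the key names mirror the documented shape, but the values (including the edge counts) are placeholders.

```python
# Hypothetical composite response; key names follow the documented
# return shape, values are placeholder data for illustration.
response = {
    "root_objects": {"objects": [], "summary": {}},
    "components": {"node_details": [], "summaries": [], "stats": []},
    "cycles": {"details": [], "summaries": [], "stats": []},
    "bfs_waves": {"nodes": [], "cycle_candidates": [], "summary": {}},
    "edge_stats": {"total_edges": 1250, "fetch_time_ms": 84},
}

# One response feeds all four analyses; no further tool calls needed.
analyses = [k for k in response if k != "edge_stats"]
total_edges = response["edge_stats"]["total_edges"]
```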

Example calls:

Full analysis of Sales and Finance databases

handle_graph_analyseDatabase(
    conn=connection,
    container_pattern="%SALES%,%FINANCE%",
    edge_repository="MY_LINEAGE_DB.EdgeRepository"
)

Single database family with top 8 roots

handle_graph_analyseDatabase(
    conn=connection,
    container_pattern="%FINANCE%",
    top_n_roots=8,
    edge_repository="MY_LINEAGE_DB.EdgeRepository"
)

Exclude sandbox schemas

handle_graph_analyseDatabase(
    conn=connection,
    container_pattern="PROD_%,STAGE_%",
    exclude_objects="SANDBOX%,%.temp_%",
    edge_repository="MY_LINEAGE_DB.EdgeRepository"
)

Input Schema

Name               Required   Description   Default
container_pattern  Yes
exclude_objects    No
top_n_roots        No
max_depth_down     No
max_depth_up       No
edge_repository    No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It details that the tool performs four analyses with a shared edge fetch, and includes performance metrics. It explains the required edge_repository parameter and references graph_edgeContractDDL. It does not explicitly state read-only behavior, but the context implies no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections for performance, use cases, argument details, return format, and examples. It is somewhat lengthy but each section adds value. Front-loading the composite nature and key benefit helps agents quickly understand the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 analyses) and lack of output schema, the description provides a comprehensive return format example and performance context. It covers all necessary aspects: what it does, when to use, parameters, and expected output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description thoroughly explains all six parameters, including defaults, examples, and a critical note that container_pattern must be a string, not an array. Since the input schema has 0% description coverage, the description fully compensates with essential usage details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it's a composite graph analysis combining four analyses in one call with shared edge fetch. It explicitly distinguishes from individual sibling tools like graph_bfsLevels, graph_connectedComponents, etc., by highlighting the performance benefits of eliminating serial round-trips.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear use cases under 'Use this for:' (e.g., migration readiness, pre-migration analysis, dashboard data, workflows requiring 3+ individual tools). It does not explicitly mention when not to use it, but the guidance is specific and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

