
Teradata MCP Server

Official
by Teradata

base_columnMetadata

Retrieve column metadata from Teradata tables and views: data types, character sets, precision, scale, and format strings. Supports large-scale databases with parallel workers, filtering, and pagination via payload/time budgets.

Instructions

Retrieves detailed column metadata for Teradata tables and views. Returns data types, character sets, case specificity, precision, scale, and format strings for each column.

Resolution paths: Tables (T, O, Q) — DBC.ColumnsVX + DBC.IndicesVX; no HELP COLUMN. Views (V) — HELP COLUMN with a derived-table wrapper, the only reliable mechanism for resolving view column types.

Uses the native TeradataConnection cursor pattern, consistent with all other tools in this module.

Use this tool instead of base_columnDescription when you need:

  • Exact Teradata type codes and their SQL type string equivalents

  • Character set information (LATIN, UNICODE, etc.)

  • Decimal precision and scale

  • Detection of broken/invalid views

  • Column-level metadata for all objects in a database at once

LARGE-SCALE USAGE GUIDANCE:

When retrieving metadata for many objects (e.g. all views in DBC), both the response payload and the execution time can exceed limits. Use these strategies to control both:

  1. FILTER FIELDS: Pass only the columns you need via the fields parameter. View rows via HELP COLUMN return ~49 fields by default; table rows via DBC.ColumnsVX return fewer. Trimming to 6-8 fields can reduce payload by 80%+. Three computed fields (ColumnTypeString, IndexTypeString, CharSetString) are always included automatically. Example: fields='ColumnName,ColumnType,ColumnLength,CharType,UpperCase,Nullable,Indexed?,Primary?,Unique?'

  2. EXCLUDE OBJECTS: Use exclude_objects to skip objects you do not need. Accepts SQL LIKE patterns (% wildcard) as a CSV. Applied before any metadata queries, so excluded objects consume zero time and zero payload. Example: exclude_objects='ResUsage%,%ResUsage%,Res%View'

  3. INCREASE PARALLELISM: Set max_workers to 12-16 for large databases. Each worker gets its own Teradata session via conn.cursor(). Default is 8.

  4. FILTER BY KIND: Use table_kind to limit to just the object types you need (e.g. 'V' for views only, 'T' for tables only).

  5. PAYLOAD BUDGET: Use max_payload_kb (default 900) to set the maximum response payload size in kilobytes. When the accumulated result data approaches this limit, the tool stops collecting and returns what it has, plus a remaining_objects CSV in metadata listing the unprocessed objects. Pass that CSV straight into object_name on the next call for automatic continuation. This self-adapts to object sizes: small-column views fit more per call, large-column views page earlier.

  6. TIME BUDGET: Use max_execution_seconds (default 180) to set the maximum wall-clock execution time. The tool monitors elapsed time as each object completes, and self-interrupts BEFORE the MCP transport timeout (typically 240s) kills the session without returning any data. When the time budget is reached, the tool returns all data collected so far plus remaining_objects for continuation — exactly the same pattern as payload budget. This is the key difference from an MCP timeout: a timeout returns NOTHING; a time budget returns EVERYTHING collected so far, plus a continuation token.
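Taken together, items 5 and 6 describe an accumulate-and-stop loop. The sketch below is a minimal, self-contained simulation of that logic under stated assumptions: `fetch_fn` is a hypothetical stand-in for the real HELP COLUMN / DBC.ColumnsVX queries, and the metadata keys mirror the description above. It is not the server's actual implementation.

```python
import json
import time

def collect_with_budgets(objects, fetch_fn, max_payload_kb=900, max_execution_seconds=180):
    """Accumulate per-object results until either budget is hit.

    fetch_fn(name) -> list of column-metadata dicts (hypothetical stand-in
    for the real catalog queries). Returns (rows, metadata).
    """
    start = time.monotonic()
    rows, payload_bytes = [], 0
    for i, name in enumerate(objects):
        # Time budget: check as each object completes, self-interrupt early.
        if max_execution_seconds and time.monotonic() - start > max_execution_seconds:
            return rows, {"truncated": True,
                          "truncation_reason": "time_budget_exceeded",
                          "remaining_objects": ",".join(objects[i:])}
        chunk = fetch_fn(name)
        payload_bytes += len(json.dumps(chunk).encode())
        # Payload budget: stop before the response grows past the limit;
        # the tripping object goes back into the continuation CSV.
        if max_payload_kb and payload_bytes > max_payload_kb * 1024:
            return rows, {"truncated": True,
                          "truncation_reason": "payload_budget_exceeded",
                          "remaining_objects": ",".join(objects[i:])}
        rows.extend(chunk)
    return rows, {"truncated": False}
```

Passing 0 for either budget disables that check, matching the documented escape hatch.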

CONTINUATION PATTERN (automatic pagination):

# Call 1 — starts processing, time or payload budget fills up
result1 = base_columnMetadata(db_name='DBC', table_kind='V', ...)
# metadata contains: remaining_objects='ViewX,ViewY,...'

# Call 2 — pass remaining_objects as object_name
result2 = base_columnMetadata(
    db_name='DBC',
    object_name='ViewX,ViewY,...',  # from result1 metadata
    ...
)
# Repeat until metadata has no remaining_objects key.
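The repeat-until step above can be wrapped in a driver loop. This is a hypothetical client-side helper; `call_tool` stands in for however your MCP client invokes base_columnMetadata, and is assumed to return a (records, metadata) pair.

```python
def fetch_all_columns(call_tool, db_name, **opts):
    """Follow remaining_objects continuation tokens until complete."""
    records = []
    object_name = None  # first call: process everything matching the filters
    while True:
        data, meta = call_tool(db_name=db_name, object_name=object_name, **opts)
        records.extend(data)
        if "remaining_objects" not in meta:
            return records  # no continuation token left: done
        object_name = meta["remaining_objects"]  # CSV from the truncated call
```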

Typical call for a large database:

base_columnMetadata(
    db_name='DBC',
    table_kind='V',
    exclude_objects='ResUsage%,%ResUsage%',
    fields='ColumnName,ColumnType,ColumnLength,CharType,UpperCase,Nullable,Indexed?,Primary?,Unique?',
    max_workers=16,
    max_payload_kb=900,
    max_execution_seconds=180
)

Arguments:

    conn - TeradataConnection (injected by MCP server)

    db_name - Name of the Teradata database to inspect

    object_name - Optional: specific object name, or a CSV of names. Also used for continuation: pass the remaining_objects value from a previous truncated call to resume. If omitted, all objects matching table_kind are processed.

    table_kind - Optional: CSV of TableKind codes to filter by. Examples: 'V' (views only), 'T,O' (tables + NoPI), 'T,V' (tables and views). Defaults to all qualifying object types (T, O, V, Q). Tables (T, O, Q) use DBC.ColumnsVX + DBC.IndicesVX. Views (V) use HELP COLUMN with a derived-table wrapper to force type resolution — this is the only reliable mechanism for view column types. Stored procedures (P, E), functions (A, F, R, B, S), and macros (M) are not supported. DBC.ColumnsVX does return parameter rows for these object types, but their parameter semantics (IN/OUT/INOUT, SPParameterType) are incompatible with the column metadata model this tool produces. Support is a planned future enhancement.

    max_workers - Optional: number of parallel threads for view resolution via HELP COLUMN. Default: 8. Table metadata is retrieved via DBC.ColumnsVX and DBC.IndicesVX within the same worker pool.

    fields - Optional: CSV of field names to include in the response. Reduces payload size significantly. Computed fields (ObjectName, ColumnTypeString, IndexTypeString, CharSetString) are always included.

    exclude_objects - Optional: CSV of object name patterns to exclude. Uses SQL LIKE-style % wildcards. Applied before any database calls — excluded objects incur zero query cost.

    max_payload_kb - Optional: maximum response payload budget in KB. Default: 900. Set to 0 to disable.

    max_execution_seconds - Optional: maximum wall-clock execution time in seconds. Default: 180. Set to 0 to disable.

    *args - Positional bind parameters (reserved)

    **kwargs - Named bind parameters (reserved)
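Because exclude_objects uses SQL LIKE semantics, a client can predict which objects will be skipped by translating each pattern to a regex. The sketch below is illustrative only: the tool applies the patterns server-side, and the case-insensitive match here is an assumption.

```python
import re

def like_to_regex(pattern):
    """Translate a SQL LIKE pattern (% = any run, _ = any one char) to an anchored regex."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

def is_excluded(name, exclude_objects):
    """Return True if name matches any pattern in the exclude_objects CSV."""
    return any(re.match(like_to_regex(p.strip()), name, re.IGNORECASE)
               for p in exclude_objects.split(","))
```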

Returns: MCP-compliant response via create_response() containing a list of column metadata records with normalised keys and four computed string fields per column:

    ColumnTypeString      - Human-readable SQL type (e.g. "VARCHAR(200)
                            UNICODE", "DECIMAL(18,2)", "INTEGER")
    IndexTypeString       - Index classification: 'UPI', 'NUPI', 'USI',
                            'NUSI', or None if not indexed.
                            For tables (T, O, Q): sourced from
                            DBC.IndicesVX — composite index grouping
                            (IndexNumber + ColumnPosition) is fully
                            preserved.
                            For views (V): sourced from HELP COLUMN
                            flags — reports column participation only,
                            not composite index grouping. Query
                            DBC.IndicesVX against the base table for
                            full composite index detail.
    CharSetString         - Character set name: 'LATIN', 'UNICODE',
                            'KANJI1', 'GRAPHIC', 'KANJISJIS', or None.
    CaseSpecificityString - Case attribute: 'UPPERCASE', 'CASESPECIFIC',
                            'NOT CASESPECIFIC', or None if no explicit
                            case attribute is defined on the column.

When truncated, metadata will include:
    remaining_objects  - CSV of unprocessed object names
    truncated          - True
    truncation_reason  - 'time_budget_exceeded' or
                         'payload_budget_exceeded'
    elapsed_seconds    - Wall-clock time consumed (always present)
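As an illustration of how a computed field like ColumnTypeString might be assembled from raw catalog values, here is a sketch. The type-code and CharType mappings are assumptions covering only the examples above; the real Teradata catalog defines many more codes.

```python
# Assumed (partial) mappings; the real DBC.Columns code set is much larger.
TYPE_CODES = {"CV": "VARCHAR", "CF": "CHAR", "I": "INTEGER", "D": "DECIMAL", "DA": "DATE"}
CHAR_SETS = {1: "LATIN", 2: "UNICODE"}

def column_type_string(code, length=None, precision=None, scale=None, char_type=None):
    """Build a human-readable SQL type string from catalog-style fields."""
    base = TYPE_CODES.get(code, code)
    if base == "DECIMAL":
        return f"DECIMAL({precision},{scale})"
    if base in ("VARCHAR", "CHAR"):
        result = f"{base}({length})"
        if char_type in CHAR_SETS:
            result += " " + CHAR_SETS[char_type]
        return result
    return base
```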

Input Schema

Name                    Required
db_name                 Yes
object_name             No
table_kind              No
max_workers             No
fields                  No
exclude_objects         No
max_payload_kb          No
max_execution_seconds   No
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full behavioral disclosure burden. It fully describes resolution paths for different object types, two budget mechanisms (time and payload), self-interruption, continuation tokens, and automatic inclusion of computed fields. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured with headings, bullet points, and code blocks, making it navigable. It is front-loaded with purpose and resolution paths. Slight verbosity due to thoroughness, but every section earns its place given the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, but the description fully details the return format, computed fields, and metadata when truncated. It covers all necessary context: parameters, usage patterns, performance considerations, and continuation, leaving no gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% coverage (no property descriptions), so the description must compensate. It does so comprehensively: for all 8 parameters (plus reserved args/kwargs), it explains purpose, defaults, usage, and examples, far exceeding the minimal schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves column metadata for Teradata tables and views, listing specific attributes like data types and character sets. It explicitly distinguishes itself from the sibling tool base_columnDescription by detailing when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides extensive usage guidance including large-scale strategies, continuation patterns, and explicit exclusions for unsupported object types (stored procedures, functions). It gives concrete examples and tells when to prefer this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

