SQL Server MCP

by bpamiri

describe_table

Retrieve column definitions, primary keys, foreign keys, and indexes for SQL Server tables to analyze database structure and relationships.

Instructions

Get detailed column information for a table.

Retrieves column definitions, primary keys, foreign keys, and indexes.

Args:
    table: Table name, optionally with schema (e.g., 'dbo.Users' or 'Users').
           Defaults to 'dbo' schema if not specified.

Returns:
    Dictionary with:
    - table: Full table name (schema.table)
    - columns: List of column info (name, type, nullable, etc.)
    - primary_key: List of primary key column names
    - foreign_keys: List of foreign key relationships
    - indexes: List of index info
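Given those documented fields, a successful result might look like the following sketch. The table 'dbo.Users' and all column, key, and index names below are illustrative, not taken from a real database:

```python
# Illustrative shape of a successful describe_table result.
# Every name here ("dbo.Users", "Id", "FK_Users_Roles", ...) is hypothetical.
example_result = {
    "table": "dbo.Users",
    "columns": [
        {"name": "Id", "type": "int", "nullable": False, "precision": 10},
        {"name": "Email", "type": "nvarchar", "nullable": False, "max_length": 256},
    ],
    "primary_key": ["Id"],
    "foreign_keys": [
        {
            "constraint": "FK_Users_Roles",
            "column": "RoleId",
            "references_table": "dbo.Roles",
            "references_column": "Id",
        }
    ],
    "indexes": [
        {
            "name": "PK_Users",
            "type": "CLUSTERED",
            "is_unique": True,
            "is_primary_key": True,
            "columns": "Id",
        }
    ],
}
```

On failure the tool instead returns a single-key dictionary of the form `{"error": "..."}`.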

Input Schema

Name: table
Required: Yes
Description: Table name, optionally with schema (defaults to 'dbo').

Output Schema

No arguments
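Since the input schema defines a single required field, a tool-call arguments payload is a one-key JSON object. A minimal sketch, where "dbo.Users" is an illustrative value:

```python
import json

# Hypothetical arguments for a describe_table tool call.
# Only "table" is defined by the input schema.
arguments = {"table": "dbo.Users"}

# A bare table name is equally valid; the tool then assumes the 'dbo' schema.
arguments_short = {"table": "Users"}

print(json.dumps(arguments))
```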

Implementation Reference

  • The core handler for the 'describe_table' MCP tool, registered automatically via the @mcp.tool() decorator. It queries the SQL Server INFORMATION_SCHEMA views and sys catalog tables to retrieve comprehensive table metadata (columns, primary keys, foreign keys, and indexes), parses the table name, handles errors, and formats the output as a structured dictionary.
    @mcp.tool()
    def describe_table(table: str) -> dict[str, Any]:
        """Get detailed column information for a table.
    
        Retrieves column definitions, primary keys, foreign keys, and indexes.
    
        Args:
            table: Table name, optionally with schema (e.g., 'dbo.Users' or 'Users').
                   Defaults to 'dbo' schema if not specified.
    
        Returns:
            Dictionary with:
            - table: Full table name (schema.table)
            - columns: List of column info (name, type, nullable, etc.)
            - primary_key: List of primary key column names
            - foreign_keys: List of foreign key relationships
            - indexes: List of index info
        """
        try:
            manager = get_connection_manager()
            schema, table_name = parse_table_name(table)
    
            # Get columns
            columns_query = """
                SELECT
                    COLUMN_NAME as [name],
                    DATA_TYPE as [type],
                    CHARACTER_MAXIMUM_LENGTH as [max_length],
                    NUMERIC_PRECISION as [precision],
                    NUMERIC_SCALE as [scale],
                    IS_NULLABLE as [nullable],
                    COLUMN_DEFAULT as [default_value]
                FROM INFORMATION_SCHEMA.COLUMNS
                WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s
                ORDER BY ORDINAL_POSITION
            """
            columns_rows = manager.execute_query(columns_query, (schema, table_name))
    
            columns = []
            for row in columns_rows:
                col_info: dict[str, Any] = {
                    "name": row["name"],
                    "type": row["type"],
                    "nullable": row["nullable"] == "YES",
                }
                if row["max_length"]:
                    col_info["max_length"] = row["max_length"]
                if row["precision"]:
                    col_info["precision"] = row["precision"]
                if row["scale"]:
                    col_info["scale"] = row["scale"]
                if row["default_value"]:
                    col_info["default"] = row["default_value"]
                columns.append(col_info)
    
            # Get primary key columns
            pk_query = """
                SELECT c.COLUMN_NAME
                FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
                JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
                    ON tc.CONSTRAINT_NAME = c.CONSTRAINT_NAME
                    AND tc.TABLE_SCHEMA = c.TABLE_SCHEMA
                    AND tc.TABLE_NAME = c.TABLE_NAME
                WHERE tc.TABLE_SCHEMA = %s
                    AND tc.TABLE_NAME = %s
                    AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
                ORDER BY c.ORDINAL_POSITION
            """
            pk_rows = manager.execute_query(pk_query, (schema, table_name))
            primary_key = [row["COLUMN_NAME"] for row in pk_rows]
    
            # Get foreign keys
            fk_query = """
                SELECT
                    fk.name as constraint_name,
                    COL_NAME(fkc.parent_object_id, fkc.parent_column_id) as [column],
                    OBJECT_SCHEMA_NAME(fkc.referenced_object_id) as ref_schema,
                    OBJECT_NAME(fkc.referenced_object_id) as ref_table,
                    COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) as ref_column
                FROM sys.foreign_keys fk
                JOIN sys.foreign_key_columns fkc ON fk.object_id = fkc.constraint_object_id
                WHERE fk.parent_object_id = OBJECT_ID(%s)
                ORDER BY fk.name, fkc.constraint_column_id
            """
            fk_rows = manager.execute_query(fk_query, (f"{schema}.{table_name}",))
    
            foreign_keys = [
                {
                    "constraint": row["constraint_name"],
                    "column": row["column"],
                    "references_table": f"{row['ref_schema']}.{row['ref_table']}",
                    "references_column": row["ref_column"],
                }
                for row in fk_rows
            ]
    
            # Get indexes
            idx_query = """
                SELECT
                    i.name as index_name,
                    i.type_desc as [type],
                    i.is_unique,
                    i.is_primary_key,
                    STRING_AGG(c.name, ', ') WITHIN GROUP (ORDER BY ic.key_ordinal) as [columns]
                FROM sys.indexes i
                JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
                JOIN sys.columns c ON ic.object_id = c.object_id AND ic.column_id = c.column_id
                WHERE i.object_id = OBJECT_ID(%s)
                    AND i.name IS NOT NULL
                GROUP BY i.name, i.type_desc, i.is_unique, i.is_primary_key
                ORDER BY i.name
            """
            idx_rows = manager.execute_query(idx_query, (f"{schema}.{table_name}",))
    
            indexes = [
                {
                    "name": row["index_name"],
                    "type": row["type"],
                    "is_unique": bool(row["is_unique"]),
                    "is_primary_key": bool(row["is_primary_key"]),
                    "columns": row["columns"],
                }
                for row in idx_rows
            ]
    
            return {
                "table": f"{schema}.{table_name}",
                "columns": columns,
                "primary_key": primary_key,
                "foreign_keys": foreign_keys,
                "indexes": indexes,
            }
    
        except Exception as e:
            logger.error(f"Error describing table {table}: {e}")
            return {"error": str(e)}
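The handler relies on a parse_table_name helper that is not shown on this page. A minimal sketch of what such a helper might do, assuming it only splits an optional schema prefix and falls back to 'dbo' (the real implementation may also validate identifiers):

```python
def parse_table_name(table: str) -> tuple[str, str]:
    """Split an optionally schema-qualified name into (schema, table).

    Hypothetical sketch of the helper used by describe_table.
    Defaults to the 'dbo' schema when no prefix is given.
    """
    if "." in table:
        # Split on the first dot only, so 'dbo.Users' -> ('dbo', 'Users').
        schema, _, name = table.partition(".")
        return schema.strip(), name.strip()
    return "dbo", table.strip()
```

Under this sketch, parse_table_name("Sales.Orders") yields ("Sales", "Orders") and parse_table_name("Users") yields ("dbo", "Users").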
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so well by detailing the return structure (dictionary with table, columns, keys, indexes), which clarifies the tool's behavior. It also specifies the default schema ('dbo') for the table parameter, adding useful context. No contradictions are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by details in a structured format (Args and Returns sections). Every sentence adds value: the first states the action, the second elaborates on retrieved info, and the parameter/return explanations are essential for clarity without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no annotations, but with an output schema), the description is complete. It explains the parameter semantics thoroughly, details the return structure, and the output schema will handle return values, so no gaps remain for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate fully. It does by explaining the 'table' parameter's semantics: table name with optional schema, defaulting to 'dbo' if not specified, and providing examples ('dbo.Users' or 'Users'). This adds crucial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get detailed column information') and resource ('for a table'), distinguishing it from siblings like list_tables (which lists table names) or describe_stored_proc (which describes stored procedures). The verb 'retrieves' and the detailed scope (column definitions, keys, indexes) make the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying what information is retrieved (e.g., column definitions, keys), which helps differentiate it from tools like list_tables (metadata only) or execute_query (general queries). However, it does not explicitly state when not to use it or name alternatives, such as using list_tables for basic table names only.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
