Glama
OpenLinkSoftware

mcp-sqlalchemy

podbc_describe_table

Retrieve table definitions, including column names, data types, and keys, from any SQLAlchemy-accessible DBMS using the MCP server's pyodbc connectivity.

Instructions

Retrieve and return a dictionary containing the definition of a table, including column names, data types, nullable, autoincrement, primary key, and foreign keys.

Input Schema

Name     Required   Description   Default
Schema   Yes
table    Yes
url      No
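As a sketch, a call to this tool passes the required `Schema` and `table` arguments as JSON; the catalog and table names below are hypothetical:

```python
import json

# Hypothetical arguments for podbc_describe_table. Schema and table are
# required; connection details fall back to the server's environment.
args = {"Schema": "Demo", "table": "Customers"}
print(json.dumps(args))
```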

Implementation Reference

  • Main handler function for podbc_describe_table tool. Handles connection, table existence check, and delegates to _get_table_info for details.
    @mcp.tool(
        name="podbc_describe_table",
        description="Retrieve and return a dictionary containing the definition of a table, including column names, data types, nullable,"
                    " autoincrement, primary key, and foreign keys."
    )
    def podbc_describe_table(Schema: Optional[str], table: str, user: Optional[str] = None,
                             password: Optional[str] = None, dsn: Optional[str] = None) -> str:
        """
        Retrieve and return a dictionary containing the definition of a table, including column names, data types, nullable, autoincrement, primary key, and foreign keys.
    
        If `Schema` is None, the table definition is looked up across all schemas.
        If `Schema` is not None, only the specified schema is searched.
    
        Args:
            Schema (Optional[str]): The name of the schema to search. If None, all schemas are searched.
            table (str): The name of the table to retrieve the definition for.
            user (Optional[str]): Optional username; falls back to the ODBC_USER environment variable.
            password (Optional[str]): Optional password; falls back to the ODBC_PASSWORD environment variable.
            dsn (Optional[str]): Optional DSN name; falls back to the ODBC_DSN environment variable.
    
        Returns:
            str: A dictionary containing the table definition, including column names, data types, nullable, autoincrement, primary key, and foreign keys.
        """
        cat = "%" if Schema is None else Schema
        table_definition = {}
        try:
            with get_connection(True, user, password, dsn) as conn:
                rc, tbl = _has_table(conn, cat=cat, table=table)
                if rc:
                    table_definition = _get_table_info(conn, cat=tbl.get("cat"), sch=tbl.get("sch"), table=tbl.get("name"))
    
            return json.dumps(table_definition, indent=2)
    
        except pyodbc.Error as e:
            logging.error(f"Error retrieving table definition: {e}")
            raise
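For a hypothetical two-column table, the JSON string returned by the handler would have roughly the following shape (a sketch assembled by hand, not actual server output; all names and values are illustrative):

```python
import json

# Sketch of the structure _get_table_info assembles for a hypothetical table.
table_definition = {
    "TABLE_CAT": "Demo",
    "TABLE_SCHEM": "dba",
    "TABLE_NAME": "Customers",
    "columns": [
        {"name": "CustomerID", "type": "INTEGER", "column_size": 10,
         "num_prec_radix": 10, "nullable": False, "default": None,
         "primary_key": True},
        {"name": "CompanyName", "type": "VARCHAR", "column_size": 40,
         "num_prec_radix": None, "nullable": True, "default": None,
         "primary_key": False},
    ],
    "primary_keys": ["CustomerID"],
    "foreign_keys": [],
}
print(json.dumps(table_definition, indent=2))
```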
  • Core helper that fetches columns, primary keys, foreign keys, assembles table definition, marks PK columns.
    def _get_table_info(conn, cat:str, sch: str, table: str) -> Dict[str, Any]:
        try:
            columns = _get_columns(conn, cat=cat, sch=sch, table=table)
            primary_keys = _get_pk_constraint(conn, cat=cat, sch=sch, table=table)['constrained_columns']
            foreign_keys = _get_foreign_keys(conn, cat=cat, sch=sch, table=table)
    
            table_info = {
                "TABLE_CAT": cat,
                "TABLE_SCHEM": sch,
                "TABLE_NAME": table,
                "columns": columns,
                "primary_keys": primary_keys,
                "foreign_keys": foreign_keys
            }
    
            for column in columns:
                column["primary_key"] = column['name'] in primary_keys
    
            return table_info
    
        except pyodbc.Error as e:
            logging.error(f"Error retrieving table info: {e}")
            raise
  • Helper to retrieve column metadata using pyodbc cursor.columns().
    def _get_columns(conn, cat: str, sch: str, table:str):
        with conn.cursor() as cursor:
            ret = []
            for row in cursor.columns(table=table, catalog=cat, schema=sch):
                ret.append({
                    "name": row[3],            # COLUMN_NAME
                    "type": row[5],            # TYPE_NAME
                    "column_size": row[6],     # COLUMN_SIZE
                    # "decimal_digits": row[8],
                    "num_prec_radix": row[9],  # NUM_PREC_RADIX
                    "nullable": row[10] != 0,  # NULLABLE
                    "default": row[12]         # COLUMN_DEF
                })
            return ret
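The positional indices follow the ODBC `SQLColumns` result set (3 = COLUMN_NAME, 5 = TYPE_NAME, 6 = COLUMN_SIZE, 9 = NUM_PREC_RADIX, 10 = NULLABLE, 12 = COLUMN_DEF). A minimal sketch with a simulated row tuple (values are made up) shows the mapping:

```python
# Simulated pyodbc cursor.columns() row, positioned per ODBC SQLColumns.
row = ("Demo", "dba", "Customers", "CustomerID", 4, "INTEGER",
       10, 4, 0, 10, 0, None, None)

# Same field extraction as _get_columns above.
column = {
    "name": row[3],            # COLUMN_NAME
    "type": row[5],            # TYPE_NAME
    "column_size": row[6],     # COLUMN_SIZE
    "num_prec_radix": row[9],  # NUM_PREC_RADIX
    "nullable": row[10] != 0,  # NULLABLE (0 means NOT NULL)
    "default": row[12],        # COLUMN_DEF
}
print(column)
```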
  • Helper to retrieve primary key constraint info using pyodbc cursor.primaryKeys().
    def _get_pk_constraint(conn, cat: str, sch: str, table: str):
        with conn.cursor() as cursor:
            # Default to an empty constraint so callers (e.g. _get_table_info)
            # can safely index "constrained_columns" for tables without a primary key.
            ret = {"constrained_columns": [], "name": None}
            rs = cursor.primaryKeys(table=table, catalog=cat, schema=sch).fetchall()
            if len(rs) > 0:
                ret = {
                    "constrained_columns": [row[3] for row in rs],  # COLUMN_NAME
                    "name": rs[0][5]  # PK_NAME
                }
            return ret
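The indices follow the ODBC `SQLPrimaryKeys` result set (3 = COLUMN_NAME, 5 = PK_NAME). A sketch with simulated rows for a hypothetical two-column composite key shows how the constraint dictionary is assembled:

```python
# Simulated pyodbc cursor.primaryKeys() rows for a composite key
# (table and constraint names are illustrative).
rs = [
    ("Demo", "dba", "OrderLines", "OrderID", 1, "PK_OrderLines"),
    ("Demo", "dba", "OrderLines", "LineNo", 2, "PK_OrderLines"),
]

# Same assembly as _get_pk_constraint above.
constraint = {
    "constrained_columns": [row[3] for row in rs],  # COLUMN_NAME
    "name": rs[0][5],                               # PK_NAME
}
print(constraint)
```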
  • Helper to retrieve foreign key constraints using pyodbc cursor.foreignKeys(), grouping by FK name.
    def _get_foreign_keys(conn, cat: str, sch: str, table:str):
        def fkey_rec():
            return {
                "name": None,
                "constrained_columns": [],
                "referred_cat": None,
                "referred_schem": None,
                "referred_table": None,
                "referred_columns": [],
                "options": {},
            }
    
        fkeys = defaultdict(fkey_rec)
        with conn.cursor() as cursor:
            rs = cursor.foreignKeys(foreignTable=table, foreignCatalog=cat, foreignSchema=sch)
            for row in rs:
                rec = fkeys[row[11]]   # FK_NAME
                rec["name"] = row[11]  # FK_NAME
    
                c_cols = rec["constrained_columns"]
                c_cols.append(row[7])  # FKCOLUMN_NAME
    
                r_cols = rec["referred_columns"]
                r_cols.append(row[3])  # PKCOLUMN_NAME
    
                if not rec["referred_table"]:
                    rec["referred_table"] = row[2]  # PKTABLE_NAME
                    rec["referred_schem"] = row[1]  # PKTABLE_SCHEM
                    rec["referred_cat"] = row[0]    # PKTABLE_CAT
    
        return list(fkeys.values())
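Each multi-column foreign key arrives as several `SQLForeignKeys` rows sharing one FK_NAME (index 11), which is why the helper groups by that field in a `defaultdict`. A sketch with two simulated rows (all names are hypothetical) shows the grouping:

```python
from collections import defaultdict

# Simulated pyodbc cursor.foreignKeys() rows, positioned per ODBC
# SQLForeignKeys: 0-3 = referred cat/schem/table/column, 7 = FKCOLUMN_NAME,
# 11 = FK_NAME. A two-column FK produces two rows with the same FK_NAME.
rows = [
    ("Demo", "dba", "Orders", "OrderID", "Demo", "dba", "OrderLines",
     "OrderID", 1, None, None, "FK_Lines_Orders", "PK_Orders", 7),
    ("Demo", "dba", "Orders", "Region", "Demo", "dba", "OrderLines",
     "Region", 2, None, None, "FK_Lines_Orders", "PK_Orders", 7),
]

# Same grouping strategy as _get_foreign_keys above (trimmed record shape).
fkeys = defaultdict(lambda: {"name": None, "constrained_columns": [],
                             "referred_table": None, "referred_columns": []})
for row in rows:
    rec = fkeys[row[11]]               # group rows by FK_NAME
    rec["name"] = row[11]
    rec["constrained_columns"].append(row[7])
    rec["referred_columns"].append(row[3])
    if not rec["referred_table"]:
        rec["referred_table"] = row[2]
print(list(fkeys.values()))
```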
  • Utility to create pyodbc connection using env vars or provided creds.
    def get_connection(readonly=True, uid: Optional[str] = None, pwd: Optional[str] = None, 
                    dsn: Optional[str] = None) -> pyodbc.Connection:
        dsn = DB_DSN if dsn is None else dsn
        uid = DB_UID if uid is None else uid
        pwd = DB_PWD if pwd is None else pwd
    
        if dsn is None:
            raise ValueError("ODBC_DSN environment variable is not set.")
        if uid is None:
            raise ValueError("ODBC_USER environment variable is not set.")
        if pwd is None:
            raise ValueError("ODBC_PASSWORD environment variable is not set.")
    
        dsn_string = f"DSN={dsn};UID={uid};PWD={pwd}"
        logging.info(f"DSN:{dsn}  UID:{uid}")
        # connection_string="DSN=VOS;UID=dba;PWD=dba"
    
        return pyodbc.connect(dsn_string, autocommit=True, readonly=readonly)
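A minimal sketch of the environment-variable fallback and DSN-string assembly used above, without opening a real connection (the DSN/user/password values are illustrative only):

```python
import os

# Illustrative defaults; in practice these env vars are set by the operator.
os.environ.setdefault("ODBC_DSN", "VOS")
os.environ.setdefault("ODBC_USER", "dba")
os.environ.setdefault("ODBC_PASSWORD", "dba")

# Same fallback pattern as get_connection above.
DB_DSN = os.environ.get("ODBC_DSN")
DB_UID = os.environ.get("ODBC_USER")
DB_PWD = os.environ.get("ODBC_PASSWORD")

dsn_string = f"DSN={DB_DSN};UID={DB_UID};PWD={DB_PWD}"
print(dsn_string)
```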
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the output format ('dictionary containing the definition') but lacks critical behavioral details: whether this is a read-only operation, potential performance impacts, error conditions, or authentication needs. For a database tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose. It avoids unnecessary words and directly states the action and output. However, it could be slightly more structured by separating usage context from output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. It adequately explains the purpose but misses parameter explanations, behavioral transparency, and usage guidelines. Given the complexity and lack of structured data, it should provide more context to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so parameters are undocumented in the schema. The description mentions 'table' implicitly but doesn't explain any of the three parameters (Schema, table, url) or their semantics. It adds no value beyond what the parameter names suggest, failing to compensate for the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('retrieve and return') and resource ('definition of a table'), specifying what information is included (column names, data types, etc.). It distinguishes from siblings like podbc_get_tables (which likely lists tables) by focusing on table structure details. However, it doesn't explicitly name alternatives or contrast with all siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like podbc_get_tables or podbc_filter_table_names. It doesn't mention prerequisites, context for use, or exclusions. Usage is implied by the purpose but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
