
SQL Server MCP

by bpamiri

read_rows

Retrieve specific rows from SQL Server tables using primary keys or custom filters to access targeted data for analysis or processing.

Instructions

Read rows from a table by primary key or filter.

Provide one of: id (single row), ids (multiple rows), or filter (WHERE clause).

Args:
    table: Table name (can include schema: 'dbo.Users' or 'Users')
    id: Single primary key value (for composite keys, use filter)
    ids: List of primary key values
    filter: WHERE clause without 'WHERE' keyword (e.g., "status = 'active'")
    columns: List of columns to return (default: all columns)
    max_rows: Maximum rows to return

Returns:
    Dictionary with:
    - table: Full table name
    - rows: List of row dictionaries
    - count: Number of rows returned
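For illustration, a successful result might look like the following. The shape matches the Returns section above; the table and row values are hypothetical:

```python
# Hypothetical read_rows result for a two-row query against dbo.Users.
result = {
    "table": "dbo.Users",
    "rows": [
        {"Id": 1, "Email": "alice@example.com", "status": "active"},
        {"Id": 2, "Email": "bob@example.com", "status": "active"},
    ],
    "count": 2,
}

# count always equals the number of returned row dictionaries.
assert result["count"] == len(result["rows"])
```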

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| table | Yes | | |
| id | No | | |
| ids | No | | |
| filter | No | | |
| columns | No | | |
| max_rows | No | | |

Output Schema

No arguments

Implementation Reference

  • The main execution logic for the 'read_rows' tool. Decorated with @mcp.tool(), it handles reading rows from MSSQL tables using ID, IDs, filter, or all rows with limits and column selection. Uses helpers for PK columns and table parsing.
    def read_rows(
        table: str,
        id: Any | None = None,
        ids: list[Any] | None = None,
        filter: str | None = None,
        columns: list[str] | None = None,
        max_rows: int | None = None,
    ) -> dict[str, Any]:
        """Read rows from a table by primary key or filter.
    
        Provide one of: id (single row), ids (multiple rows), or filter (WHERE clause).
    
        Args:
            table: Table name (can include schema: 'dbo.Users' or 'Users')
            id: Single primary key value (for composite keys, use filter)
            ids: List of primary key values
            filter: WHERE clause without 'WHERE' keyword (e.g., "status = 'active'")
            columns: List of columns to return (default: all columns)
            max_rows: Maximum rows to return
    
        Returns:
            Dictionary with:
            - table: Full table name
            - rows: List of row dictionaries
            - count: Number of rows returned
        """
        try:
            manager = get_connection_manager()
            config = manager.config
            schema, table_name = parse_table_name(table)
    
            # Determine columns
            col_list = ", ".join(columns) if columns else "*"
    
            # Determine effective row limit
            effective_max_rows = min(max_rows or config.max_rows, config.max_rows)
    
            # Build query
            params: list[Any] = []
    
            if id is not None:
                # Single row by primary key
                pk_cols = _get_primary_key_columns(schema, table_name)
                if not pk_cols:
                    return {"error": f"No primary key found for table {schema}.{table_name}"}
                query = (
                    f"SELECT TOP {effective_max_rows} {col_list} "
                    f"FROM [{schema}].[{table_name}] WHERE [{pk_cols[0]}] = %s"
                )
                params = [id]
    
            elif ids is not None:
                # Multiple rows by primary keys
                pk_cols = _get_primary_key_columns(schema, table_name)
                if not pk_cols:
                    return {"error": f"No primary key found for table {schema}.{table_name}"}
                placeholders = ", ".join(["%s"] * len(ids))
                query = (
                    f"SELECT TOP {effective_max_rows} {col_list} "
                    f"FROM [{schema}].[{table_name}] WHERE [{pk_cols[0]}] IN ({placeholders})"
                )
                params = list(ids)
    
            elif filter is not None:
                # Custom filter
                # Note: filter params not supported - use execute_query for complex cases
                query = (
                    f"SELECT TOP {effective_max_rows} {col_list} "
                    f"FROM [{schema}].[{table_name}] WHERE {filter}"
                )
    
            else:
                # All rows (with limit)
                query = (
                    f"SELECT TOP {effective_max_rows} {col_list} "
                    f"FROM [{schema}].[{table_name}]"
                )
    
            rows = manager.execute_query(query, tuple(params) if params else None)
    
            return {
                "table": f"{schema}.{table_name}",
                "rows": rows,
                "count": len(rows),
            }
    
        except Exception as e:
            logger.error(f"Error reading rows from {table}: {e}")
            return {"error": str(e)}
  • Supporting function to retrieve primary key column names from INFORMATION_SCHEMA, used by read_rows for ID-based queries.
    def _get_primary_key_columns(schema: str, table: str) -> list[str]:
        """Get primary key column(s) for a table.
    
        Args:
            schema: Schema name
            table: Table name
    
        Returns:
            List of primary key column names
        """
        manager = get_connection_manager()
    
        query = """
            SELECT c.COLUMN_NAME
            FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
            JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
                ON tc.CONSTRAINT_NAME = c.CONSTRAINT_NAME
                AND tc.TABLE_SCHEMA = c.TABLE_SCHEMA
                AND tc.TABLE_NAME = c.TABLE_NAME
            WHERE tc.TABLE_SCHEMA = %s
                AND tc.TABLE_NAME = %s
                AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
            ORDER BY c.ORDINAL_POSITION
        """
        rows = manager.execute_query(query, (schema, table))
        return [row["COLUMN_NAME"] for row in rows]
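read_rows also depends on a parse_table_name helper that is referenced but not reproduced here. A plausible sketch of its behavior, assuming SQL Server's conventional default schema 'dbo' and tolerance for bracketed identifiers (this is a guess at the helper, not the actual source):

```python
def parse_table_name(table: str) -> tuple[str, str]:
    """Split an optionally schema-qualified table name into (schema, table).

    Hypothetical reconstruction: 'dbo' is assumed as the default schema,
    and surrounding square brackets are stripped if present.
    """
    if "." in table:
        schema, _, name = table.partition(".")
        return schema.strip("[]"), name.strip("[]")
    return "dbo", table.strip("[]")
```

Under these assumptions, 'dbo.Users' and 'Users' both resolve to the dbo schema, matching the examples in the tool's docstring.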
  • Import statement that loads the crud module, triggering registration of all @mcp.tool() decorated functions including read_rows.
    from .tools import crud, databases, export, query, stored_procs, tables  # noqa: E402, F401
  • Import in tools __init__.py that ensures crud tools are registered when the tools package is imported.
    from . import crud, databases, export, query, stored_procs, tables
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses key behaviors: it's a read operation (implied by 'read'), supports multiple query methods, returns a dictionary structure with table name, rows, and count, and includes a max_rows limit for result control. It doesn't mention permissions, rate limits, or error handling, but covers core functionality well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: the first sentence states the purpose, followed by parameter guidance, a detailed Args section, and a Returns section. Every sentence adds value—no fluff. It efficiently covers complex functionality in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, read operation), no annotations, and an output schema provided, the description is complete. It explains all parameters thoroughly, details the return structure, and provides usage examples. The output schema likely defines the return dictionary, so the description doesn't need to duplicate that, focusing instead on practical guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully. It does so excellently: it explains each parameter's purpose (e.g., 'table: Table name', 'filter: WHERE clause without WHERE keyword'), provides examples ('dbo.Users', "status = 'active'"), clarifies defaults ('columns: default: all columns'), and notes constraints ('for composite keys, use filter'). This adds substantial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Read rows from a table by primary key or filter.' It specifies the verb ('read'), resource ('rows from a table'), and mechanism ('by primary key or filter'), distinguishing it from siblings like execute_query (general queries) or describe_table (metadata).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance: 'Provide one of: id (single row), ids (multiple rows), or filter (WHERE clause).' This helps the agent choose between parameters. However, it doesn't explicitly contrast with alternatives like execute_query for complex queries or list_tables for metadata, leaving some sibling differentiation implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
