Skip to main content
Glama
timeplus-io

mcp-timeplus

by timeplus-io

list_tables

Retrieve available tables and streams from a specified database, with optional name pattern filtering.

Instructions

List available tables/streams in the given database

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
databaseNodefault
likeNo

Implementation Reference

  • The `list_tables` function is a tool handler decorated with @mcp.tool(). It lists available tables/streams in a given database, supports optional LIKE filtering, and enriches results with table comments, column comments, and CREATE STREAM DDL.
    @mcp.tool()
    def list_tables(database: str = 'default', like: str = None):
        """List available tables/streams in the given database"""
        logger.info(f"Listing tables in database '{database}'")
        client = create_timeplus_client()
        query = f"SHOW STREAMS FROM {quote_identifier(database)}"
        if like:
            query += f" LIKE {format_query_value(like)}"
        result = client.command(query)
    
        # Get all table comments in one query
        table_comments_query = f"SELECT name, comment FROM system.tables WHERE database = {format_query_value(database)}"
        table_comments_result = client.query(table_comments_query)
        table_comments = {row[0]: row[1] for row in table_comments_result.result_rows}
    
        # Get all column comments in one query
        column_comments_query = f"SELECT table, name, comment FROM system.columns WHERE database = {format_query_value(database)}"
        column_comments_result = client.query(column_comments_query)
        column_comments = {}
        for row in column_comments_result.result_rows:
            table, col_name, comment = row
            if table not in column_comments:
                column_comments[table] = {}
            column_comments[table][col_name] = comment
    
        def get_table_info(table):
            logger.info(f"Getting schema info for table {database}.{table}")
            schema_query = f"DESCRIBE STREAM {quote_identifier(database)}.{quote_identifier(table)}"
            schema_result = client.query(schema_query)
    
            columns = []
            column_names = schema_result.column_names
            for row in schema_result.result_rows:
                column_dict = {}
                for i, col_name in enumerate(column_names):
                    column_dict[col_name] = row[i]
                # Add comment from our pre-fetched comments
                if table in column_comments and column_dict['name'] in column_comments[table]:
                    column_dict['comment'] = column_comments[table][column_dict['name']]
                else:
                    column_dict['comment'] = None
                columns.append(column_dict)
    
            create_table_query = f"SHOW CREATE STREAM {database}.`{table}`"
            create_table_result = client.command(create_table_query)
    
            return {
                "database": database,
                "name": table,
                "comment": table_comments.get(table),
                # "columns": columns, # exclude columns in the output since it's too verbose, the DDL below has enough information
                "create_table_query": create_table_result,
            }
    
        tables = []
        if isinstance(result, str):
            # Single table result
            for table in (t.strip() for t in result.split()):
                if table:
                    tables.append(get_table_info(table))
        elif isinstance(result, Sequence):
            # Multiple table results
            for table in result:
                tables.append(get_table_info(table))
    
        logger.info(f"Found {len(tables)} tables")
        return tables
  • The `list_tables` function is re-exported via __init__.py for public API access and listed in __all__.
    from .mcp_server import (
        create_timeplus_client,
        list_databases,
        list_tables,
        run_sql,
        list_kafka_topics,
        explore_kafka_topic,
        create_kafka_stream,
        generate_sql,
        connect_to_apache_iceberg,
    )
    
    __all__ = [
        "list_databases",
        "list_tables",
        "run_sql",
        "create_timeplus_client",
        "list_kafka_topics",
        "explore_kafka_topic",
        "create_kafka_stream",
        "generate_sql",
        "connect_to_apache_iceberg",
    ]
  • Tests for list_tables: test_list_tables_without_like (line 46) tests listing without filter, test_list_tables_with_like (line 53) tests with LIKE filter.
    def test_list_tables_without_like(self):
        """Test listing tables without a 'LIKE' filter."""
        result = list_tables(self.test_db)
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0]["name"], self.test_table)
    
    def test_list_tables_with_like(self):
        """Test listing tables with a 'LIKE' filter."""
        result = list_tables(self.test_db, like=f"{self.test_table}%")
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0]["name"], self.test_table)
  • Additional test test_table_and_column_comments (line 78) validates that table and column comments are correctly retrieved by list_tables.
    def test_table_and_column_comments(self):
        """Test that table and column comments are correctly retrieved."""
        result = list_tables(self.test_db)
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
    
        table_info = result[0]
        # Verify table comment
        self.assertEqual(table_info["comment"], "Test table for unit testing")
    
        # Get columns by name for easier testing
        columns = {col["name"]: col for col in table_info["columns"]}
    
        # Verify column comments
        self.assertEqual(columns["id"]["comment"], "Primary identifier")
        self.assertEqual(columns["name"]["comment"], "User name field")
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states the basic action but provides no details on side effects, read-only nature, permissions needed, or performance implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the main action. It could be slightly improved by adding parameter details without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool, the description lacks essential context: no output schema, no explanation of the 'like' parameter, and no mention of error handling or return format. An agent may struggle to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description adds no meaning to the parameters (database, like). The 'like' parameter's purpose (filtering pattern) is not explained, leaving the agent to guess.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List', resource 'tables/streams', and scope 'in the given database', which distinguishes it from sibling tools like list_databases or list_kafka_topics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, when not to use it, or any prerequisites. The description lacks context for appropriate invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/timeplus-io/mcp-timeplus'

If you have feedback or need assistance with the MCP directory API, please join our Discord server