
DataPilot MCP Server

by rickyb30

natural_language_to_sql

Convert natural language questions about data into SQL queries using AI, enabling database interaction without SQL expertise.

Instructions

Convert natural language question to SQL query using AI

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| request | Yes | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
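As an illustration, a call to this tool supplies a single `request` argument whose fields match the `NaturalLanguageRequest` model shown in the Implementation Reference. The following payload is a hypothetical example (the question text and database/schema names are invented; the exact wire format depends on the MCP client):

```python
# Hypothetical example payload for the natural_language_to_sql tool.
# Only "question" is required; "context", "database", and "schema" are optional.
request_payload = {
    "request": {
        "question": "What were the top 10 customers by revenue last quarter?",
        "context": "Revenue is stored in the ORDERS table",  # optional hint
        "database": "ANALYTICS",                             # optional
        "schema": "SALES",                                   # optional
    }
}

# A minimal request omits the optional fields entirely:
minimal_payload = {"request": {"question": "How many orders were placed today?"}}
```

Note that supplying `database` or `schema` triggers schema gathering in the handler, which grounds the generated SQL in real column names.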

Implementation Reference

  • MCP tool handler for natural_language_to_sql. Gathers relevant table schemas from Snowflake and delegates to OpenAI client for SQL generation.
    @mcp.tool()
    async def natural_language_to_sql(request: NaturalLanguageRequest, ctx: Context) -> str:
        """Convert natural language question to SQL query using AI"""
        await ctx.info(f"Converting natural language to SQL: {request.question}")
        
        try:
            openai = await get_openai_client()
            
            # Get schema context for better SQL generation
            table_schemas = []
            if request.database or request.schema:
                snowflake = await get_snowflake_client()
                tables = await snowflake.list_tables(request.database, request.schema)
                
                for table in tables[:5]:  # Limit to first 5 tables for context
                    columns = await snowflake.describe_table(
                        table.table_name, 
                        request.database or table.database_name,
                        request.schema or table.schema_name
                    )
                    table_schemas.append({
                        'table_name': table.table_name,
                        'columns': [{'name': col.column_name, 'type': col.data_type} for col in columns]
                    })
            
            sql_query = await openai.natural_language_to_sql(request, table_schemas)
            await ctx.info("Successfully generated SQL query")
            return sql_query
            
        except Exception as e:
            logger.error(f"Error converting to SQL: {str(e)}")
            await ctx.error(f"Failed to convert to SQL: {str(e)}")
            raise
  • Pydantic BaseModel defining the input parameters for the natural_language_to_sql tool.
    class NaturalLanguageRequest(BaseModel):
        """Request for natural language to SQL conversion"""
        question: str = Field(..., description="Natural language question about the data")
        context: Optional[str] = Field(None, description="Additional context about tables/schema")
        database: Optional[str] = Field(None, description="Database to query")
        schema: Optional[str] = Field(None, description="Schema to query")
  • OpenAIClient method implementing the core NL-to-SQL logic using chat completions with schema-aware prompting.
    async def natural_language_to_sql(
        self,
        request: NaturalLanguageRequest,
        table_schemas: Optional[List[Dict[str, Any]]] = None
    ) -> str:
        """Convert natural language question to SQL query"""
        
        # Build context about available tables and schemas
        context = self._build_schema_context(table_schemas)
        
        system_prompt = f"""
        You are a Snowflake SQL expert. Convert natural language questions to SQL queries.
        
        Guidelines:
        - Use proper Snowflake SQL syntax
        - Include appropriate WHERE clauses for filters
        - Use proper JOIN syntax when needed
        - Always include LIMIT clause for safety (default 100)
        - Use uppercase for SQL keywords
        - Be precise with column names and table names
        - Handle date/time functions appropriately for Snowflake
        
        Available schema context:
        {context}
        
        Return only the SQL query without any explanation or markdown formatting.
        """
        
        user_prompt = f"""
        Question: {request.question}
        
        Additional context: {request.context or 'None'}
        Database: {request.database or 'Use current database'}
        Schema: {request.schema or 'Use current schema'}
        """
        
        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt}
                ],
                temperature=0.1,
                max_tokens=500
            )
            
            sql_query = response.choices[0].message.content.strip()
            # Remove any markdown formatting
            sql_query = sql_query.replace('```sql', '').replace('```', '').strip()
            
            logger.info(f"Generated SQL query: {sql_query}")
            return sql_query
            
        except Exception as e:
            logger.error(f"Error generating SQL: {str(e)}")
            raise Exception(f"Failed to generate SQL query: {str(e)}")
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool uses AI for conversion but doesn't mention accuracy limitations, potential errors, rate limits, authentication needs, or what the output looks like (though an output schema exists). This is inadequate for an AI-powered tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's appropriately sized and front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (AI-based conversion), no annotations, and 0% schema coverage, the description is incomplete. It lacks parameter details and behavioral context, though the output schema mitigates part of the gap by documenting the return value separately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning none of the parameters (question, context, database, schema) are documented in the schema. The description adds no information about these parameters beyond what's implied by the tool name, failing to compensate for the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as converting natural language questions to SQL queries using AI, a specific verb+resource combination. However, it doesn't differentiate itself from sibling tools like 'execute_sql' or 'explain_query', which also deal with SQL queries, so the distinction between siblings remains unclear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'execute_sql' (which runs SQL) and 'explain_query' (which explains SQL), there's no indication of whether this tool should be used before execution or as a standalone conversion step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
