
analyze_query_results

Execute a query against Snowflake and analyze its results with AI to generate insights, summaries, or other data interpretations.

Instructions

Execute a query and analyze its results using AI

Input Schema

Name            Required  Description                                              Default
analysis_type   No        Type of analysis to perform                              summary
query           Yes       SQL query to execute against Snowflake
results_limit   No        Maximum number of result rows fetched for the analysis   100
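
For orientation, here is a minimal sketch of calling this tool from Python with the official MCP client SDK. It assumes the server is launched over stdio; the launch command, the example query, and the argument values are illustrative, not taken from this page.

    # Minimal sketch: invoking analyze_query_results from an MCP client.
    # Assumes the `mcp` Python SDK and a stdio-launched server; the command
    # name and the example query are illustrative placeholders.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Hypothetical launch command for the DataPilot MCP server.
        server = StdioServerParameters(command="datapilot-mcp-server")

        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Arguments mirror the input schema above.
                result = await session.call_tool(
                    "analyze_query_results",
                    arguments={
                        "query": "SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
                        "results_limit": 50,
                        "analysis_type": "summary",
                    },
                )
                print(result.content)


    if __name__ == "__main__":
        asyncio.run(main())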

Implementation Reference

  • MCP tool handler for 'analyze_query_results': executes the SQL query via the Snowflake client, retrieves the results, and delegates the AI analysis to the OpenAI client (a sketch of the assumed Snowflake client interface follows this list).
    @mcp.tool()
    async def analyze_query_results(
        query: str,
        results_limit: int = 100,
        analysis_type: str = "summary",
        ctx: Context = None
    ) -> str:
        """Execute a query and analyze its results using AI"""
        await ctx.info(f"Analyzing query results for: {query[:100]}...")

        try:
            # Execute the query first
            snowflake = await get_snowflake_client()
            result = await snowflake.execute_query(query, results_limit)

            if not result.success:
                await ctx.error(f"Query failed: {result.error}")
                return f"Query execution failed: {result.error}"

            # Analyze the results
            openai = await get_openai_client()
            analysis = await openai.analyze_query_results(query, result.data, analysis_type)

            await ctx.info("Successfully analyzed query results")
            return analysis

        except Exception as e:
            logger.error(f"Error analyzing query results: {str(e)}")
            await ctx.error(f"Failed to analyze query results: {str(e)}")
            raise
  • Core helper method on the OpenAIClient class that performs the AI analysis of the query results via OpenAI chat completions, generating insights from the query and a sample of the returned data.
    async def analyze_query_results(
        self,
        query: str,
        results: List[Dict[str, Any]],
        analysis_type: str = "summary"
    ) -> str:
        """Analyze query results using AI"""
        # Limit data for analysis to avoid token limits
        sample_data = results[:10] if len(results) > 10 else results

        system_prompt = f"""
        You are a data analyst. Analyze the provided query results and provide insights.

        Analysis type: {analysis_type}

        Provide a clear, concise analysis with:
        - Key findings and patterns
        - Statistical insights if applicable
        - Recommendations or next steps
        - Any anomalies or interesting observations

        Format your response in a clear, professional manner.
        """

        user_prompt = f"""
        SQL Query: {query}

        Results ({len(results)} rows total, showing sample):
        {json.dumps(sample_data, indent=2, default=str)}

        Please analyze these results and provide insights.
        """

        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt}
                ],
                temperature=0.3,
                max_tokens=1000
            )

            analysis = response.choices[0].message.content.strip()
            logger.info("Generated data analysis")
            return analysis

        except Exception as e:
            logger.error(f"Error analyzing data: {str(e)}")
            raise Exception(f"Failed to analyze data: {str(e)}")
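
The handler in the first excerpt depends on a get_snowflake_client() helper whose return value exposes execute_query() and a result carrying success, data, and error fields; that implementation is not shown on this page. The following is a hedged sketch of what such a client could look like, using snowflake-connector-python and asyncio.to_thread to keep the blocking driver off the event loop. Only the execute_query name and the result fields mirror the handler above; the connection handling and the QueryResult dataclass are assumptions.

    # Hedged sketch of the Snowflake client assumed by the tool handler.
    # Only execute_query() and the success/data/error fields mirror names
    # used above; connection handling and QueryResult layout are illustrative.
    import asyncio
    from dataclasses import dataclass, field
    from typing import Any, Dict, List, Optional

    import snowflake.connector
    from snowflake.connector import DictCursor


    @dataclass
    class QueryResult:
        success: bool
        data: List[Dict[str, Any]] = field(default_factory=list)
        error: Optional[str] = None


    class SnowflakeClient:
        def __init__(self, **connect_kwargs: Any) -> None:
            # e.g. account, user, password, warehouse, database, schema
            self._connect_kwargs = connect_kwargs

        def _run(self, query: str, limit: int) -> QueryResult:
            # Blocking driver call; executed in a worker thread below.
            conn = snowflake.connector.connect(**self._connect_kwargs)
            try:
                cur = conn.cursor(DictCursor)
                try:
                    cur.execute(query)
                    rows = cur.fetchmany(limit)
                    return QueryResult(success=True, data=list(rows))
                finally:
                    cur.close()
            finally:
                conn.close()

        async def execute_query(self, query: str, limit: int = 100) -> QueryResult:
            # Return errors in the result object, as the handler expects,
            # rather than raising.
            try:
                return await asyncio.to_thread(self._run, query, limit)
            except Exception as exc:
                return QueryResult(success=False, error=str(exc))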
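
Likewise, self.client and self.model in the helper method are configured elsewhere in the OpenAIClient class. Because chat.completions.create is awaited, the client is presumably an AsyncOpenAI instance; the constructor below is a minimal assumption, and the default model name is illustrative.

    # Assumed constructor for OpenAIClient; the awaited completion call above
    # implies an async client. The model default and config are illustrative.
    import os

    from openai import AsyncOpenAI


    class OpenAIClient:
        def __init__(self, model: str = "gpt-4o-mini") -> None:
            # AsyncOpenAI also picks up OPENAI_API_KEY from the environment
            # if api_key is not passed explicitly.
            self.client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
            self.model = model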

