This server acts as an MCP bridge that enables AI models to query and retrieve data from various cloud services, APIs, and infrastructure through Steampipe's SQL interface.
Core Capabilities:
Execute SQL queries against Steampipe's unified data layer using the run_steampipe_query tool
Access configured data sources including cloud providers (AWS, Azure, GCP), SaaS platforms (GitHub, Slack, Jira, Salesforce), and APIs
Retrieve structured data as JSON for AI processing and analysis
Perform analytics and reporting with aggregations, joins, and complex queries across different sources
Integration with MCP-compatible AI assistants like Claude for conversational infrastructure access
Example Use Cases:
Query GitHub repositories, issues, and pull requests using SQL syntax
Investigate security configurations and compliance across multi-cloud environments
Automate data discovery through natural language requests
Generate reports on resource usage, costs, or cloud infrastructure
Provides access to GitHub repository data through Steampipe, allowing queries for repository information such as names and fork counts.
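To make this concrete, here is a minimal sketch of how a server like this might shell out to the Steampipe CLI and parse its JSON output. The function names are illustrative (not taken from this repo), and the exact shape of Steampipe's JSON output can vary between versions, so the parser below accepts both a bare row array and a `{"rows": [...]}` wrapper:

```python
import json
import subprocess

# Hypothetical helpers; names are illustrative, not from the repo.

def build_steampipe_command(sql: str) -> list[str]:
    """Build the argv for a Steampipe query with JSON output."""
    return ["steampipe", "query", sql, "--output", "json"]

def parse_steampipe_output(raw: str) -> list[dict]:
    """Parse Steampipe's JSON output into a list of row dicts.

    Depending on the Steampipe version, the output may be a bare
    array of rows or an object wrapping them under "rows".
    """
    data = json.loads(raw)
    return data if isinstance(data, list) else data.get("rows", [])

def run_query(sql: str) -> list[dict]:
    """Run the query via the steampipe CLI (requires Steampipe installed)."""
    result = subprocess.run(
        build_steampipe_command(sql),
        capture_output=True, text=True, check=True, timeout=60,
    )
    return parse_steampipe_output(result.stdout)

# Example (requires the github plugin configured):
# run_query("select name, fork_count from github_my_repository limit 5")
```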
Steampipe MCP
This is a simple Steampipe MCP server that acts as a bridge between your AI model and the Steampipe CLI.
Pre-requisites
Python 3.10+ installed.
uv installed (my fav) along with the mcp[cli] package.
Steampipe installed and working.
Steampipe plugin configured (e.g., github) with necessary credentials (e.g., token in ~/.steampipe/config/github.spc).
Any LLM client supporting MCP. I am using Claude here.
Node.js and npx installed (required for the MCP Inspector and potentially for running some MCP servers).
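For reference, a minimal GitHub plugin config in ~/.steampipe/config/github.spc might look like the following (the token value is a placeholder; generate your own personal access token):

```hcl
connection "github" {
  plugin = "github"
  token  = "ghp_xxxxxxxxxxxxxxxx"
}
```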
Related MCP server: Strapi MCP Server
Running the MCP Inspector
This is an awesome tool for testing whether your MCP server is working as expected.
Running the Inspector
npx -y @modelcontextprotocol/inspector uv --directory . run steampipe_mcp_server.py
A browser window should open with the MCP Inspector UI (usually at http://localhost:XXXX).
Wait for the "Connected" status on the left panel.
Go to the Tools tab.
You should see the run_steampipe_query tool listed with its description.
Click on the tool name.
In the "Arguments" JSON input field, enter a valid Steampipe query:
Execute the query and view the JSON results.
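The exact argument name depends on the tool's signature in the Python script; assuming the tool takes a single sql parameter, the Arguments input would look something like:

```json
{
  "sql": "select name, fork_count from github_my_repository limit 5"
}
```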
Running the tool
Pretty straightforward. Run the Inspector from the project directory and confirm the tool works. Then add the server configuration to the respective LLM client and select the tool from there.
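For Claude Desktop, the server entry in claude_desktop_config.json might look like this (the server name and directory path are illustrative; point --directory at your project folder):

```json
{
  "mcpServers": {
    "steampipe": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/project", "run", "steampipe_mcp_server.py"]
    }
  }
}
```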
Troubleshooting
If the tool is not listed in the Inspector, the @mcp.tool() decorator registration likely has an issue.
Execution errors - look at the "Result" panel in the Inspector and the server logs (stderr) in your terminal. Did Steampipe run? Was there a SQL error? A timeout? A JSON parsing error? Adjust the Python script accordingly.
Security Risk: in this POC, Claude's generated SQL is executed blindly, so it is possible to generate and execute arbitrary SQL queries via Steampipe using your configured credentials.
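One way to reduce this risk is a simple guard on the server side before the query ever reaches Steampipe. The sketch below is a naive keyword filter, not a real SQL parser, and is only illustrative; Steampipe plugins are largely read-only anyway, but a guard like this limits what an LLM-generated query can attempt:

```python
import re

# Naive read-only guard (illustrative only; not a real SQL parser).
WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|create|truncate|grant|revoke)\b",
    re.IGNORECASE,
)

def is_read_only(sql: str) -> bool:
    """Reject queries containing obvious write/DDL keywords."""
    return not WRITE_KEYWORDS.search(sql)
```

In the tool handler, you would call is_read_only() on the incoming query and return an error instead of executing it when the check fails.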