Ops Tools MCP Server
One chat window to talk to all your infrastructure — Airflow, EMR, S3, Confluence, and Azure DevOps.
No more jumping between 5 different UIs. Just ask what you want in plain English.
What Is This?
It's an MCP (Model Context Protocol) server that gives AI assistants (like Gemini CLI) access to 44 tools across your entire ops stack. You talk to it in natural language, and it calls the right APIs for you.
Example:
You: "Which DAGs failed today in prod?"
AI: calls the Airflow API, gets all runs, filters failures, shows you a summary with diagnosis commands
Quick Start
1. Install Dependencies
```
pip install -r requirements.txt
```

2. Set Up Your .env File
Copy the example and fill in your values:
```
cp .env.example .env
```

3. Run the Server
```
python -m mcp_server.main
```

The server runs on stdio — connect it to Gemini CLI, VS Code, or any MCP client.
4. Connect from Gemini CLI
Add this to your MCP config (server.json):
```json
{
  "mcpServers": {
    "ops-tools": {
      "command": "python",
      "args": ["-m", "mcp_server.main"],
      "cwd": "D:\\MCP"
    }
  }
}
```

Environment Setup
This server works across 4 AWS accounts (dev, uat, test, prod). Each is a separate AWS account with its own credentials.
AWS Profiles (via gimme-aws-creds)
```
AWS_REGION=eu-west-2
AWS_PROFILE_DEV=consumersync-dev
AWS_PROFILE_UAT=consumersync-uat
AWS_PROFILE_TEST=consumersync-test
AWS_PROFILE_PROD=consumersync-prod
```

MWAA Environments (Airflow)
```
MWAA_ENV_DEV=eec-aws-uk-ms-dev-consumersyncenv-mwaa
MWAA_ENV_UAT=eec-aws-uk-ms-uat-consumersync-mwaa
MWAA_ENV_TEST=eec-aws-uk-ms-tst-consumersync-mwaa
MWAA_ENV_PROD=eec-aws-uk-ms-prod-consumersync-mwaa
```

EMR Log Buckets
```
EMR_LOG_BUCKET_DEV=eec-aws-uk-ms-consumersync-dev-logs-bucket
EMR_LOG_BUCKET_UAT=eec-aws-uk-ms-consumersync-uat-logs-bucket
EMR_LOG_BUCKET_TEST=eec-aws-uk-ms-consumersync-tst-logs-bucket
EMR_LOG_BUCKET_PROD=eec-aws-uk-ms-consumersync-prod-logs-bucket
EMR_LOG_PREFIX=spark-logs
```

Confluence
```
CONFLUENCE_BASE_URL=https://pages.experian.local
CONFLUENCE_PAT=your-personal-access-token
CONFLUENCE_SPACE_KEY=ACTIVATE
```

Azure DevOps (TFS)
```
AZDO_BASE_URL=https://ukfhpapcvt02.uk.experian.local/tfs/DefaultCollection
AZDO_PAT=your-personal-access-token
AZDO_PROJECT=Activate
AZDO_TEAM=Activate Team
```

Important: The AI will always ask you "Which environment?" before calling any AWS tool. It never defaults silently — this prevents accidental cross-account mistakes.
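A minimal sketch of how that environment check could work inside `config.py`, assuming nothing beyond the variable names above; `resolve_env()` and `VALID_ENVS` are illustrative names, not the server's actual API:

```python
import os

# Hypothetical helper: map an environment name ("dev"/"uat"/"test"/"prod")
# to its AWS profile, MWAA environment, and EMR log bucket from .env.
VALID_ENVS = {"dev", "uat", "test", "prod"}

def resolve_env(env: str) -> dict:
    if env not in VALID_ENVS:
        # Mirrors the "always ask which environment" rule:
        # reject unknown values instead of silently defaulting.
        raise ValueError(f"Unknown environment {env!r}; expected one of {sorted(VALID_ENVS)}")
    suffix = env.upper()
    return {
        "aws_profile": os.environ[f"AWS_PROFILE_{suffix}"],
        "mwaa_env": os.environ[f"MWAA_ENV_{suffix}"],
        "emr_log_bucket": os.environ[f"EMR_LOG_BUCKET_{suffix}"],
    }
```

Keeping the lookup keyed on an explicit environment argument is what prevents a tool call from ever landing in the wrong AWS account.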
All 44 Tools
Airflow / MWAA (11 tools)
Everything you need to monitor, debug, and manage your DAGs.
What the tools do:
- Lists all DAGs with their schedule and pause status
- Shows runs for today/yesterday/any date — numbered list so you can pick one
- Full task-level breakdown for a specific run — which tasks passed, which failed
- Reads the Airflow log for a specific task attempt — the raw log output
- Manually kicks off a DAG run (with optional config)
- Pauses a DAG so it won't run on schedule (already-running jobs finish)
- Unpauses a DAG so scheduled runs resume
- Retries a failed task without re-running the entire DAG
- Shows the DAG's Python source code, tasks, operators, and dependencies
- Full dashboard of ALL DAGs — states, schedules, failures, everything at a glance
- Analytics: success rate, duration trends, failure patterns, visual streaks
Common things you'd say:
"Show me all DAGs in dev"
"Which DAGs failed today in prod?"
"How has hem_processing been running lately?"
"Trigger ttdcustom_processing in uat"
"Pause digital_taxonomy in prod"
"Retry the initialise task on yesterday's failed run"
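The numbered-run behaviour can be sketched against the record shape the Airflow REST API returns from `/dags/{dag_id}/dagRuns`; `format_dag_runs()` is a hypothetical helper, not the server's real code:

```python
# Illustrative sketch: turn a list of DAG-run records (each with
# "dag_run_id", "state", and optionally "start_date") into the numbered
# summary the user can pick a run from ("tell me about run #3").
def format_dag_runs(dag_runs: list[dict]) -> str:
    lines = []
    for i, run in enumerate(dag_runs, start=1):
        lines.append(
            f"{i}. {run['dag_run_id']} (state={run['state']}, "
            f"started={run.get('start_date', '?')})"
        )
    return "\n".join(lines)
```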
EMR Serverless (10 tools)
Manage Spark jobs, read driver logs, browse S3 log files, track costs.
What the tools do:
- Lists all EMR Serverless apps (note: DAGs create temporary apps that get cleaned up)
- Shows job runs for an application — with state and duration
- Deep dive into a job: Spark config, resource usage, S3 log paths
- Reads stdout/stderr from the Spark driver — the actual Python output and errors
- Navigates the S3 log directory structure folder by folder
- Cancels a running or stuck Spark job
- Stops an EMR app — auto-cancels running jobs if needed
- Permanently deletes an EMR app — force mode stops and deletes in one call
- Reads any file from S3 (CSV, TXT, JSON, Parquet) — 5 MB limit, auto-detects format
- Shows vCPU hours, memory, storage usage — broken down per app
Common things you'd say:
"Show me the Spark driver log for this job"
"What failed in the stdout log?"
"Cancel that stuck job"
"Stop that EMR application"
"Force-stop the app and cancel all running jobs"
"Delete that EMR application"
"How much has EMR cost us this week?"
"Read this S3 file: s3://bucket/path/to/file.csv"
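Reading a Spark driver log boils down to fetching the `.gz` object from the log bucket and decompressing it; a stdlib-only sketch of that step (`tail_gz_log()` is an illustrative name, and the S3 download itself is omitted):

```python
import gzip

# Hypothetical helper: EMR Serverless driver logs land in S3 as .gz objects.
# Given the raw downloaded bytes, decompress and keep the tail of the log,
# which is usually where the Python traceback lives.
def tail_gz_log(raw: bytes, max_lines: int = 50) -> str:
    text = gzip.decompress(raw).decode("utf-8", errors="replace")
    return "\n".join(text.splitlines()[-max_lines:])
```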
S3 — General (4 tools)
Browse any S3 bucket in the account — not just EMR logs.
What the tools do:
- Lists all S3 buckets in the AWS account
- Interactive folder/file browsing — like a file explorer for S3
- Recursively lists ALL files end-to-end with filters and size summary
- Shows file metadata (size, modified date, content type, encryption) without downloading
Common things you'd say:
"What S3 buckets do we have in dev?"
"Show me what's in the raw data bucket"
"List all CSV files in the raw bucket"
"How much data is in this S3 folder?"
"Read this parquet file from S3"
"How big is this file?"
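The recursive size summary can be sketched over the record shape boto3's `list_objects_v2` pages yield (`{"Key": ..., "Size": ...}`); `summarize_listing()` is a hypothetical helper, not the server's real code:

```python
# Illustrative sketch: filter object records by key suffix (e.g. ".csv")
# and total the bytes, answering "how much data is in this folder?".
def summarize_listing(objects: list[dict], suffix: str = "") -> dict:
    matched = [o for o in objects if o["Key"].endswith(suffix)]
    return {
        "files": len(matched),
        "total_bytes": sum(o["Size"] for o in matched),
    }
```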
Confluence (9 tools)
Search, read, and write documentation — without opening a browser.
What the tools do:
- Full-text search across pages — ranked by relevance (same as the web UI)
- Reads a page's full content — converted from HTML to clean text
- Lists all child pages under a parent page
- Lists all pages in a space (paginated)
- Lists file attachments on a page (name, size, download URL)
- Shows tags/labels on a page
- Reads comments and discussions on a page
- Creates a new page (plain text or HTML content)
- Updates an existing page — replace or append content
Common things you'd say:
"Find documentation about Audience Engine"
"Read that runbook page"
"Create a new troubleshooting guide under the runbooks section"
"What are the child pages under the HEM documentation?"
Pro tip: When you say "docs", "documentation", "wiki", or "runbook", the AI knows to search Confluence automatically.
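The page-reading tool's HTML-to-text step could look like this stdlib-only sketch; the real server may use a richer converter, this version just strips tags and keeps the text nodes:

```python
from html.parser import HTMLParser

# Illustrative sketch: collect text nodes from Confluence storage HTML.
class _TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts: list[str] = []

    def handle_data(self, data: str) -> None:
        if data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```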
Azure DevOps / TFS (8 tools)
Sprint tracking, work items, source code — all from chat.
What the tools do:
- Lists all Git repositories in the project
- Browse files and folders in a repo — one folder at a time
- Full recursive file tree of a repo in one call — shows every file with correct paths
- Read the content of any file (with syntax highlighting)
- Shows active sprint name, dates, and days remaining
- All PBIs, Tasks, and Bugs in the sprint — who's doing what
- Full details for a PBI/Task/Bug: description, acceptance criteria, links
- Items not in the current sprint — what's coming next
Common things you'd say:
"What sprint are we in?"
"What's everyone working on?"
"Show me PBI 12345"
"What's in the backlog?"
"Show me all the files in the hem_processing repo"
"List all Python files in this repo"
"What's the folder structure of this repo?"
Orchestration (1 tool)
The power tool — chains multiple tools together for one-shot answers.
What the tool does:
- Complete failure diagnosis in one call — finds the failed run, reads task logs, extracts EMR IDs, reads Spark driver logs, returns root cause analysis
What you'd say:
"Diagnose the failure for hem_processing in prod"
"What went wrong with ttdcustom_processing yesterday?"
This one tool replaces 5-6 manual steps that used to take 20 minutes.
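Those manual steps form a fixed chain, which can be sketched with the step functions injected so the flow is visible; every name here is illustrative, not the server's real signature:

```python
from typing import Callable

# Hypothetical sketch of the diagnosis chain: each injected callable stands
# in for one of the other tools, and diagnose() runs them in order.
def diagnose_dag_failure(
    find_failed_run: Callable[[str], str],
    read_task_logs: Callable[[str], str],
    extract_emr_job_id: Callable[[str], str],
    read_driver_log: Callable[[str], str],
) -> Callable[[str], dict]:
    def diagnose(dag_id: str) -> dict:
        run_id = find_failed_run(dag_id)            # 1. locate the failed run
        task_log = read_task_logs(run_id)           # 2. read Airflow task logs
        emr_job_id = extract_emr_job_id(task_log)   # 3. pull the EMR job id out
        driver_log = read_driver_log(emr_job_id)    # 4. read the Spark driver log
        return {"run_id": run_id, "emr_job_id": emr_job_id, "root_cause": driver_log}
    return diagnose
```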
Utility (1 tool)
What the tool does:
- Confirms the server is running and connected
How It Works
You (plain English) → AI (Gemini/Claude) → MCP Server → APIs (Airflow, EMR, S3, Confluence, TFS)

1. You type a question in natural language
2. The AI figures out which tool(s) to call
3. The MCP server calls the actual APIs (MWAA, boto3, Confluence REST, Azure DevOps REST)
4. Results come back formatted and readable
5. The AI can chain tools together — e.g. find a failed run → read its logs → show root cause
Architecture
```
D:\MCP\
├── mcp_server/
│   ├── main.py                    # Server entry point + tool registration
│   ├── config.py                  # Environment config (4 AWS accounts)
│   └── tools/
│       ├── _aws_helpers.py        # Shared AWS helpers (S3 client, formatting)
│       ├── mwaa_tools.py          # 11 Airflow tools
│       ├── emr_tools.py           # 10 EMR Serverless tools
│       ├── s3_tools.py            # 4 general S3 tools
│       ├── confluence_tools.py    # 9 Confluence tools
│       ├── azdo_tools.py          # 8 Azure DevOps tools
│       ├── orchestration_tools.py # 1 orchestration tool
│       └── utility_tools.py       # 1 utility tool
├── .env                           # Your local config (not committed)
├── .env.example                   # Template for .env
├── server.json                    # MCP client config
├── requirements.txt               # Python dependencies
├── DEMO_SCRIPT.md                 # 15-minute demo walkthrough
└── README.md                      # This file
```

Key Design Decisions
Fresh credentials every call — No client caching for S3 or EMR. Every API call gets a fresh boto3 session so expired credentials never cause silent failures.
Environment-aware — All AWS tools require you to specify dev/uat/test/prod. The AI asks if you forget. Each env points to a different AWS account.
MWAA session cache — The Airflow login token is cached (it needs auth cookies), but the cache clears automatically on 401/403 errors and retries.
Clean log output — Spark driver logs are auto-decompressed from .gz, Confluence HTML is converted to clean markdown text.
Interactive responses — DAG runs are numbered so you can say "tell me about run #3". Work items show ready-to-use follow-up commands.
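The MWAA cache-clearing behaviour can be sketched as a small wrapper; `AuthError`, `fetch_token`, and `call` are illustrative stand-ins for the real web-login flow, not the server's actual code:

```python
# Hypothetical sketch: cache the login token, but on a 401/403 drop the
# cache, re-login once, and retry the call.
class AuthError(Exception):
    def __init__(self, status: int):
        self.status = status

class TokenCache:
    def __init__(self, fetch_token, call):
        self._fetch_token = fetch_token  # logs in, returns a session token
        self._call = call                # performs an API call with a token
        self._token = None

    def request(self, path: str):
        if self._token is None:
            self._token = self._fetch_token()
        try:
            return self._call(self._token, path)
        except AuthError as e:
            if e.status in (401, 403):
                self._token = self._fetch_token()  # replace the stale session
                return self._call(self._token, path)
            raise
```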
Troubleshooting
| Problem | Fix |
| --- | --- |
| "Cannot connect to MWAA webserver" | Connect to VPN first |
| "Access denied" on S3 | Run |
| "CONFLUENCE_PAT not set" | Add your Confluence Personal Access Token to |
| "AZDO_PAT not set" | Generate a PAT in Azure DevOps → User Settings → Personal Access Tokens |
| AI calls all environments at once | The server instructions should prevent this — if it happens, say "just dev" |
| Stale Airflow session | The server auto-retries on 401/403 — if it persists, restart the server |
Requirements
Python 3.10+
VPN access (for MWAA, Confluence, Azure DevOps)
gimme-aws-creds configured for all 4 AWS accounts
Confluence PAT
Azure DevOps PAT
MCP-compatible client (Gemini CLI, VS Code, Claude Code, etc.)