MPO MCP Server
A comprehensive Model Context Protocol (MCP) server built with FastMCP that provides powerful integrations with GitHub repositories, Confluence documentation, and Databricks Unity Catalog.
Built with FastMCP! This server leverages FastMCP, a modern, decorator-based framework for building MCP servers with minimal boilerplate.
Table of Contents
Overview
Features
Installation
Configuration
Usage
Available Tools
Documentation
Development
Dependencies
Troubleshooting
License
Contributing
Acknowledgments
Overview
MPO MCP Server enables AI assistants and LLMs to interact seamlessly with your development and data ecosystem. It exposes a comprehensive set of tools through the Model Context Protocol, allowing intelligent agents to:
GitHub: Browse repositories, search code, read files, manage branches and pull requests
Confluence: Search and retrieve documentation, list spaces and pages
Databricks: Query Unity Catalog metadata, execute SQL queries, explore data schemas
The server is built with a modular architecture, allowing you to configure only the services you need.
Features
Flexible Configuration
Modular Design: Enable only the services you need (GitHub, Confluence, Databricks, or any combination)
Environment-based: Simple .env file configuration with validation
Secure: API tokens and credentials managed through environment variables
Multiple Usage Modes
Interactive LLM Assistant: Natural language interface with autonomous tool selection
MCP Server: Direct integration with Claude Desktop and other MCP clients
Command-Line Interface: Direct tool invocation via CLI
Comprehensive Tool Set
6 GitHub Tools: Complete repository management and code exploration
5 Confluence Tools: Full documentation search and retrieval
10 Databricks Tools: Complete Unity Catalog metadata and SQL execution
Installation
Prerequisites
Python 3.10 or higher
pip or uv for package management
API credentials for the services you want to use
Quick Setup
Clone the repository:
Install dependencies:
Or using uv (faster):
Create configuration file:
Add your credentials to .env (see Configuration); a command sketch of these steps follows below.
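A hedged sketch of the setup commands above. The repository URL and the presence of an .env.example template are assumptions not confirmed by this README:

```bash
# Clone the repository (URL is an assumption)
git clone https://github.com/your-org/mpo-mcp-server.git
cd mpo-mcp-server

# Install dependencies
pip install -r requirements.txt
# Or using uv (faster)
uv pip install -r requirements.txt

# Create the configuration file (assumes an .env.example template exists)
cp .env.example .env
```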
Package Installation
You can also install as a package:
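A likely form of the package installation, sketched under the assumption that the project uses standard Python packaging (pyproject.toml or setup.py):

```bash
# Editable install from the project root
pip install -e .
```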
This enables the command-line tools:
mpo-mcp-server: Run the MCP server
mpo: Command-line interface
Configuration
Environment Variables
Create a .env file in the project root with your credentials:
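A hedged sketch of a .env file. Only ANTHROPIC_API_KEY, GITHUB_ORG, CONFLUENCE_SPACE_KEY, DATABRICKS_CATALOG, and DATABRICKS_WAREHOUSE_ID are variable names confirmed elsewhere in this README; the remaining names and all values are illustrative assumptions:

```
# Interactive LLM assistant
ANTHROPIC_API_KEY=sk-ant-...

# GitHub (token variable name is an assumption)
GITHUB_TOKEN=ghp_...
GITHUB_ORG=your-org

# Confluence (URL/username/token variable names are assumptions)
CONFLUENCE_URL=https://your-domain.atlassian.net
CONFLUENCE_USERNAME=you@example.com
CONFLUENCE_API_TOKEN=...
CONFLUENCE_SPACE_KEY=DOCS

# Databricks (host/token variable names are assumptions)
DATABRICKS_HOST=https://your-workspace.databricks.com
DATABRICKS_TOKEN=dapi...
DATABRICKS_CATALOG=main
DATABRICKS_WAREHOUSE_ID=...
```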
Getting API Credentials
Anthropic API Key (for Interactive LLM Assistant)
Visit console.anthropic.com
Sign up or log in
Navigate to API Keys
Create a new API key
Copy to .env file
GitHub Personal Access Token
Go to GitHub Settings → Developer settings → Personal access tokens → Tokens (classic)
Generate new token with scopes:
repo (for private repositories)
read:org (for organization data)
user (for user data)
Copy token to .env file
Confluence API Token
Create an API token in your Atlassian account settings (id.atlassian.com → Security → API tokens)
Use your Atlassian account email as username
Copy token to .env file
Databricks Access Token
Go to your Databricks workspace
Click User Settings → Developer
Manage Access tokens → Generate new token
Set expiration and comment
Copy token to .env file
Service Validation
The server automatically validates configurations at startup:
Tools are only exposed for properly configured services
Partial configuration is supported (e.g., GitHub only)
Clear error messages for missing credentials
Usage
Method 1: Interactive LLM Assistant (Recommended)
The easiest way to use the server - a conversational interface that autonomously selects and uses tools:
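The launch command is not preserved in this README; a sketch assuming an entry point named mpo_mcp.assistant (the module name is an assumption):

```bash
# Launch the interactive assistant (module path is an assumption)
python -m mpo_mcp.assistant
```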
Features:
Natural language queries
Autonomous tool selection
Context-aware responses
Conversation history
Follow-up questions
Example Session:
Requirements: Set ANTHROPIC_API_KEY in .env
See docs/GETTING_STARTED_LLM_ASSISTANT.md for detailed documentation.
Method 2: MCP Server (For Claude Desktop & Other Clients)
Run the server to expose tools via the Model Context Protocol, either from source or via the installed console script:
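A hedged sketch of both invocations; the module path is an assumption, while the mpo-mcp-server console script is documented under Package Installation:

```bash
# Run from source (module path is an assumption)
python -m mpo_mcp.server

# Or, if installed as a package, use the console script
mpo-mcp-server
```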
Integration with Claude Desktop
Add to your Claude Desktop configuration:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Option 1: Using .env file (Recommended)
Option 2: Explicit environment variables
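A sketch of Option 1 with assumed command, args, and path values (the cwd key is referenced in Troubleshooting below); adjust them to your checkout location:

```json
{
  "mcpServers": {
    "mpo": {
      "command": "python",
      "args": ["-m", "mpo_mcp.server"],
      "cwd": "/absolute/path/to/mpo-mcp-server"
    }
  }
}
```

For Option 2, add an "env" object containing the variables listed under Configuration instead of relying on the .env file.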
See docs/CURSOR_MCP_SETUP.md for Cursor AI integration.
Method 3: Command-Line Interface
Direct tool invocation via CLI:
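Illustrative only: the mpo console script is documented above, but the subcommand names below are assumptions; consult docs/CLI_GUIDE.md for the actual syntax:

```bash
# Hypothetical subcommands
mpo github list-repositories --limit 10
mpo confluence search-pages --query "deployment runbook"
mpo databricks list-catalogs
```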
See docs/CLI_GUIDE.md and docs/CLI_EXAMPLES.md for comprehensive CLI documentation.
Available Tools
GitHub Tools (6 tools)
1. github_list_repositories
List repositories for a user or organization.
Parameters:
org (optional): Organization or username (defaults to GITHUB_ORG)
limit (default: 30): Maximum number of repositories
Returns: List of repositories with name, description, stars, forks, language, etc.
Example:
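An illustrative tool call; the parameter names come from the list above, the argument values are hypothetical:

```json
{
  "tool": "github_list_repositories",
  "arguments": { "org": "nike-goal-analytics-mpo", "limit": 10 }
}
```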
2. github_get_repository_info
Get detailed information about a specific repository.
Parameters:
repo_name (required): Full repository name (e.g., "nike-goal-analytics-mpo/msc-dft-monorepo")
Returns: Detailed repository metadata including stars, forks, topics, license, etc.
Example:
3. github_search_code
Search for code across GitHub repositories.
Parameters:
query (required): Search query
repo (optional): Limit search to specific repository
limit (default: 10): Maximum results
Returns: List of code matches with file paths and URLs
Example:
4. github_get_file_contents
Read file contents from a repository.
Parameters:
repo_name (required): Full repository name
file_path (required): Path to file
ref (optional): Branch, tag, or commit SHA
Returns: File contents and metadata
Example:
5. github_list_branches
List branches in a repository.
Parameters:
repo_name (required): Full repository name
limit (default: 20): Maximum branches
Returns: List of branches with protection status and commit SHA
Example:
6. github_get_pull_requests
Retrieve pull requests for a repository.
Parameters:
repo_name (required): Full repository name
state (default: "open"): PR state ("open", "closed", or "all")
limit (default: 20): Maximum PRs
Returns: List of pull requests with status, author, dates, etc.
Example:
Confluence Tools (5 tools)
1. confluence_list_pages
List pages in a Confluence space.
Parameters:
space_key (optional): Space key (defaults to CONFLUENCE_SPACE_KEY)
limit (default: 25): Maximum pages
Returns: List of pages with titles, IDs, and URLs
Example:
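An illustrative tool call; the parameter names come from the list above, the space key value is hypothetical:

```json
{
  "tool": "confluence_list_pages",
  "arguments": { "space_key": "DOCS", "limit": 25 }
}
```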
2. confluence_get_page_content
Get full content of a Confluence page.
Parameters:
page_id (required): Page ID
Returns: Page content with metadata, version info, and HTML/storage content
Example:
3. confluence_search_pages
Search for pages across Confluence.
Parameters:
query (required): Search query
space_key (optional): Limit to specific space
limit (default: 20): Maximum results
Returns: Search results with excerpts and relevance
Example:
4. confluence_get_page_by_title
Find a page by its exact title.
Parameters:
title (required): Page title
space_key (optional): Space key (defaults to CONFLUENCE_SPACE_KEY)
Returns: Page content and metadata
Example:
5. confluence_list_spaces
List available Confluence spaces.
Parameters:
limit (default: 25): Maximum spaces
Returns: List of spaces with keys, names, and URLs
Example:
Databricks Tools (10 tools)
1. databricks_list_catalogs
List all Unity Catalog catalogs.
Parameters: None
Returns: List of catalogs with names, owners, storage roots
Example:
2. databricks_list_schemas
List schemas in a catalog.
Parameters:
catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)
Returns: List of schemas with full names and metadata
Example:
3. databricks_list_tables
List tables in a schema.
Parameters:
schema_name (required): Schema name
catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)
Returns: List of tables with names, types, formats, and locations
Example:
4. databricks_get_table_schema
Get detailed schema for a table.
Parameters:
table_name (required): Table name
schema_name (required): Schema name
catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)
Returns: Complete table schema with columns, types, and properties
Example:
5. databricks_search_tables
Search for tables by name pattern.
Parameters:
query (required): Search query (table name pattern)
catalog_name (optional): Limit to specific catalog
max_results (default: 50): Maximum results
Returns: List of matching tables
Example:
6. databricks_get_catalog_info
Get detailed catalog information.
Parameters:
catalog_name (required): Catalog name
Returns: Catalog metadata including properties and configuration
Example:
7. databricks_get_schema_info
Get detailed schema information.
Parameters:
catalog_name (required): Catalog name
schema_name (required): Schema name
Returns: Schema metadata and properties
Example:
8. databricks_execute_query
Execute a SQL query on Databricks.
Parameters:
query (required): SQL query to execute
catalog_name (optional): Catalog context (defaults to DATABRICKS_CATALOG)
warehouse_id (optional): SQL warehouse ID (defaults to DATABRICKS_WAREHOUSE_ID)
Returns: Query results with columns and data rows
Example:
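An illustrative tool call; the parameter names come from the list above, the query and catalog values are hypothetical:

```json
{
  "tool": "databricks_execute_query",
  "arguments": {
    "query": "SELECT * FROM main.sales.orders LIMIT 10",
    "catalog_name": "main"
  }
}
```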
9. databricks_list_warehouses
List available SQL warehouses.
Parameters: None
Returns: List of SQL warehouses with IDs, names, states, and configurations
Example:
10. databricks_list_sql_warehouses
Alias for databricks_list_warehouses.
Documentation
Comprehensive documentation is available in the docs/ directory:
Getting Started
QUICKSTART.md - Get started in 5 minutes
SETUP.md - Detailed setup guide with credential instructions
GETTING_STARTED_LLM_ASSISTANT.md - Interactive assistant guide
Tools & CLI
TOOLS.md - Complete tool reference with examples
CLI_GUIDE.md - Command-line interface guide
CLI_EXAMPLES.md - CLI usage examples
FastMCP
FASTMCP_SUMMARY.md - FastMCP conversion summary
FASTMCP_QUICKSTART.md - Quick reference for FastMCP patterns
FASTMCP_COMPARISON.md - Side-by-side comparison with traditional MCP
FASTMCP_MIGRATION.md - Detailed migration guide
Architecture & Concepts
MCP_EXPLAINED.md - Deep dive into how MCP works
FLOW_DIAGRAM.md - Visual diagrams of the complete flow
IMPLEMENTATION_SUMMARY.md - Implementation details
CURSOR_MCP_SETUP.md - Cursor AI integration guide
Development
Project Structure
Adding New Tools
Implement the tool in the appropriate tools file.
Register the tool in server.py.
Add a CLI command in cli.py (optional).
A sketch of the first two steps follows below.
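A minimal sketch of the FastMCP registration pattern. Module layout, function names, and the FastMCP instance name are assumptions about this project's structure:

```python
from fastmcp import FastMCP

mcp = FastMCP("mpo-mcp-server")

# 1. Implement the logic (would normally live in a tools module).
def count_open_issues(repo_name: str) -> int:
    """Return the number of open issues for a repository (placeholder logic)."""
    return 0  # a real implementation would call the GitHub client

# 2. Register it in server.py with the FastMCP decorator.
@mcp.tool()
def github_count_open_issues(repo_name: str) -> int:
    """Count open issues in a GitHub repository."""
    return count_open_issues(repo_name)
```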
Testing Tools
You can test individual tools programmatically:
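A hedged sketch of a standalone test script. Config.validate_github() is referenced in Troubleshooting below; the commented tool import and call are hypothetical and depend on the actual module layout:

```python
from mpo_mcp.config import Config
# from mpo_mcp.tools.github import github_list_repositories  # hypothetical import

def main() -> None:
    # Confirm configuration is loaded before exercising a tool.
    print("GitHub configured:", Config.validate_github())
    # result = github_list_repositories(limit=5)  # hypothetical direct call
    # print(result)

if __name__ == "__main__":
    main()
```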
Code Quality
Type hints: All functions use type hints
Docstrings: Comprehensive docstrings for all public methods
Error handling: Graceful error handling with informative messages
Logging: Structured logging throughout
Dependencies
Core dependencies:
fastmcp>=0.1.0 - MCP server framework
PyGithub>=2.1.1 - GitHub API client
atlassian-python-api>=3.41.0 - Confluence API client
databricks-sdk>=0.18.0 - Databricks API client
python-dotenv>=1.0.0 - Environment variable management
anthropic>=0.39.0 - Anthropic API for LLM assistant
See requirements.txt for complete list.
Troubleshooting
Server Not Starting
Issue: Server fails to start or shows import errors
Solutions:
Verify Python version: python --version (must be 3.10+)
Reinstall dependencies: pip install -r requirements.txt --force-reinstall
Check for conflicting packages: pip list | grep mcp
Verify virtual environment: which python
Tools Not Appearing
Issue: Expected tools don't show up in MCP client
Solutions:
Check configuration validation in server logs
Verify credentials in .env file
Ensure .env is in the correct location (project root)
Check environment variables are loaded: python -c "from mpo_mcp.config import Config; print(Config.validate_github())"
Restart the MCP client after configuration changes
API Authentication Errors
GitHub:
Verify token has correct scopes (repo, read:org)
Check token hasn't expired
Test token: curl -H "Authorization: token YOUR_TOKEN" https://api.github.com/user
Confluence:
Verify URL format (must include https://)
Check API token is valid (not password)
Ensure username is email address
Test: curl -u email@example.com:API_TOKEN https://your-domain.atlassian.net/wiki/rest/api/space
Databricks:
Verify workspace URL is correct
Check token hasn't expired
Ensure token has appropriate permissions
Test: curl -H "Authorization: Bearer YOUR_TOKEN" https://your-workspace.databricks.com/api/2.0/unity-catalog/catalogs
Rate Limiting
GitHub:
Authenticated requests: 5,000 requests/hour
Search API: 30 requests/minute
Use limit parameters to reduce API calls
Confluence:
Cloud: Rate limits vary by plan
Implement exponential backoff for production use
Databricks:
Check workspace quotas
Use connection pooling for multiple queries
Claude Desktop Integration Issues
Issue: Tools not appearing in Claude Desktop
Solutions:
Verify config file location:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Check JSON syntax is valid
Verify cwd path is absolute and correct
Restart Claude Desktop after config changes
Check Claude Desktop logs for errors
LLM Assistant Issues
Issue: Assistant not responding or showing errors
Solutions:
Verify ANTHROPIC_API_KEY is set correctly
Check API key has sufficient credits
Ensure FastMCP server can start independently
Review error messages in console output
Connection Issues
Issue: Tools timing out or failing to connect
Solutions:
Check network connectivity
Verify firewall rules allow outbound HTTPS
Test API endpoints directly with curl
Check proxy settings if behind corporate firewall
Increase timeout values if on slow connection
Debugging Tips
Enable verbose logging:
Test configuration:
Run server with logging:
Test individual tools:
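Hedged sketches of the four debugging steps above; the logging variable name and module path are assumptions about this project:

```bash
# 1. Enable verbose logging (variable name is an assumption)
export LOG_LEVEL=DEBUG

# 2. Test configuration (Config.validate_github() is referenced under Troubleshooting)
python -c "from mpo_mcp.config import Config; print(Config.validate_github())"

# 3. Run the server with log output on the console (module path is an assumption)
python -m mpo_mcp.server

# 4. Test an individual tool (see the Testing Tools sketch above)
```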
Getting Help
If you encounter issues not covered here:
Check the relevant documentation in docs/
Review server logs for detailed error messages
Verify all credentials are correctly configured
Test API endpoints independently
Check you have appropriate permissions for each service
License
This project is provided as-is for demonstration and integration purposes.
Contributing
Contributions are welcome! Please ensure:
Code follows existing style and conventions
All functions have type hints and docstrings
New tools are properly registered
Documentation is updated accordingly
Acknowledgments
Built with:
FastMCP - Modern MCP framework
PyGithub - GitHub API wrapper
atlassian-python-api - Confluence API wrapper
databricks-sdk - Databricks SDK
Anthropic API - Claude AI integration
Version: 0.1.0
Python: 3.10+
License: MIT
Status: Production Ready