MPO MCP Server

by bsangars

A comprehensive Model Context Protocol (MCP) server built with FastMCP that provides powerful integrations with GitHub repositories, Confluence documentation, and Databricks Unity Catalog.

šŸš€ Built with FastMCP! This server leverages FastMCP, a modern, decorator-based framework for building MCP servers with minimal boilerplate.

Overview

MPO MCP Server enables AI assistants and LLMs to interact seamlessly with your development and data ecosystem. It exposes a comprehensive set of tools through the Model Context Protocol, allowing intelligent agents to:

  • GitHub: Browse repositories, search code, read files, manage branches and pull requests

  • Confluence: Search and retrieve documentation, list spaces and pages

  • Databricks: Query Unity Catalog metadata, execute SQL queries, explore data schemas

The server is built with a modular architecture, allowing you to configure only the services you need.

Features

šŸ”§ Flexible Configuration

  • Modular Design: Enable only the services you need (GitHub, Confluence, Databricks, or any combination)

  • Environment-based: Simple .env file configuration with validation

  • Secure: API tokens and credentials managed through environment variables

šŸš€ Multiple Usage Modes

  1. Interactive LLM Assistant: Natural language interface with autonomous tool selection

  2. MCP Server: Direct integration with Claude Desktop and other MCP clients

  3. Command-Line Interface: Direct tool invocation via CLI

šŸ“Š Comprehensive Tool Set

  • 6 GitHub Tools: Complete repository management and code exploration

  • 5 Confluence Tools: Full documentation search and retrieval

  • 10 Databricks Tools: Complete Unity Catalog metadata and SQL execution

Installation

Prerequisites

  • Python 3.10 or higher

  • pip or uv for package management

  • API credentials for the services you want to use

Quick Setup

  1. Clone the repository and change into the project directory:

cd /Users/bsang2/Desktop/mcp_demo/mpo-mcp

  2. Install dependencies:

pip install -r requirements.txt

Or using uv (faster):

uv pip install -r requirements.txt

  3. Create the configuration file:

cp .env.example .env  # if the example exists; otherwise create .env manually

  4. Add your credentials to .env (see Configuration)

Package Installation

You can also install as a package:

pip install -e .

This enables the command-line tools:

  • mpo-mcp-server: Run the MCP server

  • mpo: Command-line interface
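The two commands are presumably wired up as console-script entry points in pyproject.toml. A sketch of what that section might look like (the exact module paths and a `main` callable are assumptions based on the project structure, not confirmed by the source):

```toml
[project.scripts]
mpo-mcp-server = "mpo_mcp.server:main"
mpo = "mpo_mcp.cli:main"
```

After `pip install -e .`, pip generates both executables on your PATH from these mappings.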

Configuration

Environment Variables

Create a .env file in the project root with your credentials:

# ============================================
# Anthropic Configuration (for LLM Assistant)
# ============================================
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# ============================================
# GitHub Configuration
# ============================================
GITHUB_TOKEN=your_github_token_here
GITHUB_ORG=your_default_org_or_username

# ============================================
# Confluence Configuration
# ============================================
CONFLUENCE_URL=https://your-domain.atlassian.net
CONFLUENCE_USERNAME=your_email@example.com
CONFLUENCE_API_TOKEN=your_confluence_api_token
CONFLUENCE_SPACE_KEY=your_default_space_key

# ============================================
# Databricks Configuration
# ============================================
DATABRICKS_HOST=https://your-workspace.databricks.com
DATABRICKS_TOKEN=your_databricks_token
DATABRICKS_CATALOG=your_default_catalog
DATABRICKS_WAREHOUSE_ID=your_sql_warehouse_id

Getting API Credentials

Anthropic API Key (for Interactive LLM Assistant)

  1. Visit console.anthropic.com

  2. Sign up or log in

  3. Navigate to API Keys

  4. Create a new API key

  5. Copy to .env file

GitHub Personal Access Token

  1. Go to GitHub Settings → Developer settings → Personal access tokens → Tokens (classic)

  2. Generate new token with scopes:

    • repo (for private repositories)

    • read:org (for organization data)

    • user (for user data)

  3. Copy token to .env file

Confluence API Token

  1. Visit id.atlassian.com/manage-profile/security/api-tokens

  2. Create API token

  3. Use your Atlassian account email as username

  4. Copy token to .env file

Databricks Access Token

  1. Go to your Databricks workspace

  2. Click User Settings → Developer

  3. Manage Access tokens → Generate new token

  4. Set expiration and comment

  5. Copy token to .env file

Service Validation

The server automatically validates configurations at startup:

  • Tools are only exposed for properly configured services

  • Partial configuration is supported (e.g., GitHub only)

  • Clear error messages for missing credentials
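As a rough illustration of the pattern (the real checks live in mpo_mcp/config.py; these function bodies are assumptions, though the validate_* names match what the Troubleshooting section references):

```python
import os

def validate_github() -> bool:
    """GitHub strictly needs only a token; GITHUB_ORG is just a default org."""
    return bool(os.getenv("GITHUB_TOKEN"))

def validate_confluence() -> bool:
    """Confluence needs a URL, a username (email), and an API token."""
    required = ("CONFLUENCE_URL", "CONFLUENCE_USERNAME", "CONFLUENCE_API_TOKEN")
    return all(os.getenv(name) for name in required)

def validate_databricks() -> bool:
    """Databricks needs a workspace host and a personal access token."""
    return all(os.getenv(name) for name in ("DATABRICKS_HOST", "DATABRICKS_TOKEN"))

def enabled_services() -> list[str]:
    """Only services whose check passes get their tools registered."""
    checks = {
        "github": validate_github,
        "confluence": validate_confluence,
        "databricks": validate_databricks,
    }
    return [name for name, check in checks.items() if check()]
```

With only GITHUB_TOKEN set, for example, `enabled_services()` returns `["github"]` and the Confluence and Databricks tools are simply not exposed.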

Usage

Method 1: Interactive LLM Assistant (Recommended) šŸ¤–

The easiest way to use the server is the conversational assistant, which autonomously selects and uses tools:

python llm_assistant.py

Features:

  • Natural language queries

  • Autonomous tool selection

  • Context-aware responses

  • Conversation history

  • Follow-up questions

Example Session:

šŸ’¬ You: What are the most popular repositories from facebook?
šŸ¤– Assistant: [Analyzes and calls github_list_repositories]
Here are Facebook's top repositories:
1. React - 210K stars...

šŸ’¬ You: Show me the README from the React repository
šŸ¤– Assistant: [Calls github_get_file_contents]
Here's the React README...

šŸ’¬ You: Search for "useState" in that repo
šŸ¤– Assistant: [Calls github_search_code]
Found 147 results for "useState"...

Requirements: Set ANTHROPIC_API_KEY in .env

See docs/GETTING_STARTED_LLM_ASSISTANT.md for detailed documentation.

Method 2: MCP Server (For Claude Desktop & Other Clients)

Run the server to expose tools via the Model Context Protocol:

python -m mpo_mcp.server

Or if installed as package:

mpo-mcp-server

Integration with Claude Desktop

Add to your Claude Desktop configuration:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

Option 1: Using .env file (Recommended)

{
  "mcpServers": {
    "mpo-mcp": {
      "command": "python",
      "args": ["-m", "mpo_mcp.server"],
      "cwd": "/Users/bsang2/Desktop/mcp_demo/mpo-mcp"
    }
  }
}

Option 2: Explicit environment variables

{
  "mcpServers": {
    "mpo-mcp": {
      "command": "python",
      "args": ["-m", "mpo_mcp.server"],
      "cwd": "/Users/bsang2/Desktop/mcp_demo/mpo-mcp",
      "env": {
        "GITHUB_TOKEN": "your_token",
        "GITHUB_ORG": "your_org",
        "CONFLUENCE_URL": "https://your-domain.atlassian.net",
        "CONFLUENCE_USERNAME": "your_email@example.com",
        "CONFLUENCE_API_TOKEN": "your_token",
        "CONFLUENCE_SPACE_KEY": "your_space",
        "DATABRICKS_HOST": "https://your-workspace.databricks.com",
        "DATABRICKS_TOKEN": "your_token",
        "DATABRICKS_CATALOG": "your_catalog",
        "DATABRICKS_WAREHOUSE_ID": "your_warehouse_id"
      }
    }
  }
}

See docs/CURSOR_MCP_SETUP.md for Cursor AI integration.

Method 3: Command-Line Interface

Direct tool invocation via CLI:

# GitHub commands
mpo github repos --org nike-goal-analytics-mpo --limit 5
mpo github repo --name nike-goal-analytics-mpo/msc-dft-monorepo
mpo github search --query "useState" --repo nike-goal-analytics-mpo/msc-dft-monorepo
mpo github file --repo nike-goal-analytics-mpo/msc-dft-monorepo --path README.md
mpo github branches --repo nike-goal-analytics-mpo/msc-dft-monorepo
mpo github prs --repo nike-goal-analytics-mpo/msc-dft-monorepo --state open

# Confluence commands
mpo confluence spaces --limit 10
mpo confluence pages --space DOCS --limit 20
mpo confluence page --id 123456789
mpo confluence search --query "architecture" --space TECH
mpo confluence page-by-title --title "Getting Started"

# Databricks commands
mpo databricks catalogs
mpo databricks schemas --catalog main
mpo databricks tables --catalog main --schema default
mpo databricks schema --catalog main --schema default --table users
mpo databricks search --query customer --catalog main
mpo databricks catalog --name main
mpo databricks query --sql "SELECT * FROM main.default.users LIMIT 10"
mpo databricks warehouses

# Help
mpo --help
mpo github --help
mpo confluence --help
mpo databricks --help

See docs/CLI_GUIDE.md and docs/CLI_EXAMPLES.md for comprehensive CLI documentation.

Available Tools

GitHub Tools (6 tools)

1. github_list_repositories

List repositories for a user or organization.

Parameters:

  • org (optional): Organization or username (defaults to GITHUB_ORG)

  • limit (default: 30): Maximum number of repositories

Returns: List of repositories with name, description, stars, forks, language, etc.

Example:

{ "org": "nike-goal-analytics-mpo", "limit": 10 }

2. github_get_repository_info

Get detailed information about a specific repository.

Parameters:

  • repo_name (required): Full repository name (e.g., "nike-goal-analytics-mpo/msc-dft-monorepo")

Returns: Detailed repository metadata including stars, forks, topics, license, etc.

Example:

{ "repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo" }

3. github_search_code

Search for code across GitHub repositories.

Parameters:

  • query (required): Search query

  • repo (optional): Limit search to specific repository

  • limit (default: 10): Maximum results

Returns: List of code matches with file paths and URLs

Example:

{ "query": "useState", "repo": "nike-goal-analytics-mpo/msc-dft-monorepo", "limit": 5 }

4. github_get_file_contents

Read file contents from a repository.

Parameters:

  • repo_name (required): Full repository name

  • file_path (required): Path to file

  • ref (optional): Branch, tag, or commit SHA

Returns: File contents and metadata

Example:

{ "repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo", "file_path": "README.md" }

5. github_list_branches

List branches in a repository.

Parameters:

  • repo_name (required): Full repository name

  • limit (default: 20): Maximum branches

Returns: List of branches with protection status and commit SHA

Example:

{ "repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo", "limit": 10 }

6. github_get_pull_requests

Retrieve pull requests for a repository.

Parameters:

  • repo_name (required): Full repository name

  • state (default: "open"): PR state ("open", "closed", or "all")

  • limit (default: 20): Maximum PRs

Returns: List of pull requests with status, author, dates, etc.

Example:

{ "repo_name": "nike-goal-analytics-mpo/msc-dft-monorepo", "state": "open", "limit": 10 }

Confluence Tools (5 tools)

1. confluence_list_pages

List pages in a Confluence space.

Parameters:

  • space_key (optional): Space key (defaults to CONFLUENCE_SPACE_KEY)

  • limit (default: 25): Maximum pages

Returns: List of pages with titles, IDs, and URLs

Example:

{ "space_key": "DOCS", "limit": 20 }

2. confluence_get_page_content

Get full content of a Confluence page.

Parameters:

  • page_id (required): Page ID

Returns: Page content with metadata, version info, and HTML/storage content

Example:

{ "page_id": "123456789" }

3. confluence_search_pages

Search for pages across Confluence.

Parameters:

  • query (required): Search query

  • space_key (optional): Limit to specific space

  • limit (default: 20): Maximum results

Returns: Search results with excerpts and relevance

Example:

{ "query": "API documentation", "space_key": "TECH", "limit": 10 }

4. confluence_get_page_by_title

Find a page by its exact title.

Parameters:

  • title (required): Page title

  • space_key (optional): Space key (defaults to CONFLUENCE_SPACE_KEY)

Returns: Page content and metadata

Example:

{ "title": "Getting Started Guide", "space_key": "DOCS" }

5. confluence_list_spaces

List available Confluence spaces.

Parameters:

  • limit (default: 25): Maximum spaces

Returns: List of spaces with keys, names, and URLs

Example:

{ "limit": 10 }

Databricks Tools (10 tools)

1. databricks_list_catalogs

List all Unity Catalog catalogs.

Parameters: None

Returns: List of catalogs with names, owners, storage roots

Example:

{}

2. databricks_list_schemas

List schemas in a catalog.

Parameters:

  • catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)

Returns: List of schemas with full names and metadata

Example:

{ "catalog_name": "main" }

3. databricks_list_tables

List tables in a schema.

Parameters:

  • schema_name (required): Schema name

  • catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)

Returns: List of tables with names, types, formats, and locations

Example:

{ "catalog_name": "main", "schema_name": "default" }

4. databricks_get_table_schema

Get detailed schema for a table.

Parameters:

  • table_name (required): Table name

  • schema_name (required): Schema name

  • catalog_name (optional): Catalog name (defaults to DATABRICKS_CATALOG)

Returns: Complete table schema with columns, types, and properties

Example:

{ "table_name": "users", "catalog_name": "main", "schema_name": "default" }

5. databricks_search_tables

Search for tables by name pattern.

Parameters:

  • query (required): Search query (table name pattern)

  • catalog_name (optional): Limit to specific catalog

  • max_results (default: 50): Maximum results

Returns: List of matching tables

Example:

{ "query": "customer", "catalog_name": "main", "max_results": 20 }

6. databricks_get_catalog_info

Get detailed catalog information.

Parameters:

  • catalog_name (required): Catalog name

Returns: Catalog metadata including properties and configuration

Example:

{ "catalog_name": "main" }

7. databricks_get_schema_info

Get detailed schema information.

Parameters:

  • catalog_name (required): Catalog name

  • schema_name (required): Schema name

Returns: Schema metadata and properties

Example:

{ "catalog_name": "main", "schema_name": "default" }

8. databricks_execute_query

Execute a SQL query on Databricks.

Parameters:

  • query (required): SQL query to execute

  • catalog_name (optional): Catalog context (defaults to DATABRICKS_CATALOG)

  • warehouse_id (optional): SQL warehouse ID (defaults to DATABRICKS_WAREHOUSE_ID)

Returns: Query results with columns and data rows

Example:

{ "query": "SELECT * FROM main.default.users LIMIT 10", "catalog_name": "main", "warehouse_id": "abc123def456" }

9. databricks_list_warehouses

List available SQL warehouses.

Parameters: None

Returns: List of SQL warehouses with IDs, names, states, and configurations

Example:

{}

10. databricks_list_sql_warehouses

Alias for databricks_list_warehouses.

Documentation

Comprehensive documentation is available in the docs/ directory:

  • Getting Started

  • Tools & CLI

  • FastMCP

  • Architecture & Concepts

Development

Project Structure

mpo-mcp/
ā”œā”€ā”€ mpo_mcp/                 # Main package
│   ā”œā”€ā”€ __init__.py          # Package initialization
│   ā”œā”€ā”€ server.py            # FastMCP server implementation
│   ā”œā”€ā”€ config.py            # Configuration management
│   ā”œā”€ā”€ github_tools.py      # GitHub integration (6 tools)
│   ā”œā”€ā”€ confluence_tools.py  # Confluence integration (5 tools)
│   ā”œā”€ā”€ databricks_tools.py  # Databricks integration (10 tools)
│   └── cli.py               # Command-line interface
ā”œā”€ā”€ docs/                    # Comprehensive documentation
ā”œā”€ā”€ llm_assistant.py         # Interactive LLM assistant
ā”œā”€ā”€ example_usage.py         # Usage examples
ā”œā”€ā”€ quick_query.py           # Quick query utility
ā”œā”€ā”€ requirements.txt         # Python dependencies
ā”œā”€ā”€ pyproject.toml           # Package configuration
ā”œā”€ā”€ .env                     # Environment variables (not in git)
ā”œā”€ā”€ .gitignore               # Git ignore rules
└── README.md                # This file

Adding New Tools

  1. Implement the tool in the appropriate tools file:

# In mpo_mcp/github_tools.py
async def new_github_feature(self, param: str) -> Dict[str, Any]:
    """
    Description of the new feature.

    Args:
        param: Parameter description

    Returns:
        Result description
    """
    # Implementation
    pass

  2. Register the tool in server.py:

@mcp.tool()
async def github_new_feature(param: str) -> dict:
    """Tool description for MCP clients.

    Args:
        param: Parameter description
    """
    return await github_tools.new_github_feature(param=param)

  3. Add a CLI command in cli.py (optional):

@github_group.command()
@click.option('--param', required=True, help='Parameter description')
def new_feature(param: str):
    """Command description."""
    result = asyncio.run(github_tools.new_github_feature(param=param))
    click.echo(json.dumps(result, indent=2))

Testing Tools

You can test individual tools programmatically:

import asyncio
from mpo_mcp.github_tools import GitHubTools

async def test():
    tools = GitHubTools()
    repos = await tools.list_repositories(org="nike-goal-analytics-mpo", limit=5)
    print(repos)

asyncio.run(test())

Code Quality

  • Type hints: All functions use type hints

  • Docstrings: Comprehensive docstrings for all public methods

  • Error handling: Graceful error handling with informative messages

  • Logging: Structured logging throughout
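A hypothetical helper showing all four conventions at once (this is an illustrative sketch in the project's style, not a function from the codebase):

```python
import logging
from typing import Any, Dict

logger = logging.getLogger(__name__)

def summarize_result(items: list, limit: int = 5) -> Dict[str, Any]:
    """Summarize a raw tool result list.

    Args:
        items: Raw result items from an API call.
        limit: Maximum number of items to include in the summary.

    Returns:
        A dict with a total count and a truncated item list, or an
        informative error payload instead of an unhandled exception.
    """
    try:
        return {"count": len(items), "items": items[:limit]}
    except TypeError as exc:  # graceful handling of non-list inputs
        logger.error("Could not summarize result: %s", exc)
        return {"error": str(exc)}
```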

Dependencies

Core dependencies:

  • fastmcp>=0.1.0 - MCP server framework

  • PyGithub>=2.1.1 - GitHub API client

  • atlassian-python-api>=3.41.0 - Confluence API client

  • databricks-sdk>=0.18.0 - Databricks API client

  • python-dotenv>=1.0.0 - Environment variable management

  • anthropic>=0.39.0 - Anthropic API for LLM assistant

See requirements.txt for complete list.

Troubleshooting

Server Not Starting

Issue: Server fails to start or shows import errors

Solutions:

  1. Verify Python version: python --version (must be 3.10+)

  2. Reinstall dependencies: pip install -r requirements.txt --force-reinstall

  3. Check for conflicting packages: pip list | grep mcp

  4. Verify virtual environment: which python

Tools Not Appearing

Issue: Expected tools don't show up in MCP client

Solutions:

  1. Check configuration validation in server logs

  2. Verify credentials in .env file

  3. Ensure .env is in correct location (project root)

  4. Check environment variables are loaded: python -c "from mpo_mcp.config import Config; print(Config.validate_github())"

  5. Restart the MCP client after configuration changes

API Authentication Errors

GitHub:

  • Verify token has correct scopes (repo, read:org)

  • Check token hasn't expired

  • Test token: curl -H "Authorization: token YOUR_TOKEN" https://api.github.com/user

Confluence:

  • Verify URL format (must include https://)

  • Check API token is valid (not password)

  • Ensure username is email address

  • Test: curl -u email@example.com:API_TOKEN https://your-domain.atlassian.net/wiki/rest/api/space

Databricks:

  • Verify workspace URL is correct

  • Check token hasn't expired

  • Ensure token has appropriate permissions

  • Test: curl -H "Authorization: Bearer YOUR_TOKEN" https://your-workspace.databricks.com/api/2.0/unity-catalog/catalogs

Rate Limiting

GitHub:

  • Authenticated requests: 5,000 requests/hour

  • Search API: 30 requests/minute

  • Use limit parameters to reduce API calls

Confluence:

  • Cloud: Rate limits vary by plan

  • Implement exponential backoff for production use

Databricks:

  • Check workspace quotas

  • Use connection pooling for multiple queries
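The exponential-backoff suggestion above can be sketched as a small retry wrapper (illustrative only; the project does not ship this helper, and production code would catch the specific rate-limit exception of each client library rather than bare Exception):

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on failure, doubling the wait each attempt.

    The random jitter factor spreads retries out so concurrent
    clients do not hammer the API in lockstep after a rate limit.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

Usage would look like `with_backoff(lambda: confluence.search_pages(query="architecture"))`, where `search_pages` stands in for whichever client call is being rate limited.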

Claude Desktop Integration Issues

Issue: Tools not appearing in Claude Desktop

Solutions:

  1. Verify config file location:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

    • Windows: %APPDATA%\Claude\claude_desktop_config.json

  2. Check JSON syntax is valid

  3. Verify cwd path is absolute and correct

  4. Restart Claude Desktop after config changes

  5. Check Claude Desktop logs for errors

LLM Assistant Issues

Issue: Assistant not responding or showing errors

Solutions:

  1. Verify ANTHROPIC_API_KEY is set correctly

  2. Check API key has sufficient credits

  3. Ensure FastMCP server can start independently

  4. Review error messages in console output

Connection Issues

Issue: Tools timing out or failing to connect

Solutions:

  1. Check network connectivity

  2. Verify firewall rules allow outbound HTTPS

  3. Test API endpoints directly with curl

  4. Check proxy settings if behind corporate firewall

  5. Increase timeout values if on slow connection

Debugging Tips

  1. Enable verbose logging:

import logging
logging.basicConfig(level=logging.DEBUG)

  2. Test the configuration:

python -c "from mpo_mcp.config import Config; print(f'GitHub: {Config.validate_github()}, Confluence: {Config.validate_confluence()}, Databricks: {Config.validate_databricks()}')"

  3. Run the server with logging:

python -m mpo_mcp.server 2>&1 | tee server.log

  4. Test individual tools:

mpo github repos --org nike-goal-analytics-mpo --limit 1
mpo confluence spaces --limit 1
mpo databricks catalogs

Getting Help

If you encounter issues not covered here:

  1. Check the relevant documentation in docs/

  2. Review server logs for detailed error messages

  3. Verify all credentials are correctly configured

  4. Test API endpoints independently

  5. Check you have appropriate permissions for each service

License

This project is provided as-is for demonstration and integration purposes.

Contributing

Contributions are welcome! Please ensure:

  • Code follows existing style and conventions

  • All functions have type hints and docstrings

  • New tools are properly registered

  • Documentation is updated accordingly

Acknowledgments

Built with:

  • FastMCP
Version: 0.1.0
Python: 3.10+
License: MIT
Status: Production Ready āœ…
