netlinq-jenkins-mcp
A small Python service that wraps your private Jenkins controller and lets a team trigger the NetLinQ EMS Release pipeline and Patch Single Repository Pipeline jobs through natural language. One codebase, two run modes:

  1. MCP server (stdio) - plug into Cursor on your laptop and ask: "build 7.0 release package" or "rebuild blinq-ems-charts at tag 7.0.3".

  2. FastAPI web app + chat UI - a single docker compose up on an internal server; the whole team logs in via browser and gets the same tools.

Hosting note: GitHub-hosted runners cannot reach a private Jenkins. The code lives in a private GitHub repo; the runtime runs wherever it has a network path to Jenkins (a teammate's laptop with VPN, or an internal Linux VM).




Architecture

flowchart LR
    subgraph github [Private GitHub Repo]
        repo[netlinq-jenkins-mcp]
    end

    subgraph local [Local laptop - DevOps user]
        cursor[Cursor IDE]
        mcp["FastMCP stdio server<br/>mcp_server.py"]
        cursor -->|stdio| mcp
    end

    subgraph shared [Internal VM - team]
        web["FastAPI web app<br/>web.py + Vite UI"]
        chat["Chat UI - browser"]
        chat -->|HTTPS basic auth| web
    end

    subgraph core [Shared Python core]
        tools["tools.py<br/>5 tool functions"]
        llm["llm.py<br/>LiteLLM router"]
        jc["jenkins_client.py<br/>httpx + crumb"]
    end

    repo -.git clone.-> local
    repo -.git clone.-> shared

    mcp --> tools
    web --> llm
    web --> tools
    llm -->|"tool calls"| tools
    tools --> jc
    jc -->|REST + basic auth| jenkins[(Jenkins<br/>private network)]

tools.py is the single source of truth. Both the MCP server and the LiteLLM agent in the web app call into the same five functions, so behavior is identical between Cursor and the team chat UI.
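As a sketch of that sharing pattern (the helper name and request shape here are illustrative, not the project's actual code, though buildWithParameters is the standard Jenkins endpoint for parameterized jobs):

```python
# tools.py-style helper (illustrative): both mcp_server.py and llm.py would
# import the same function, so behavior cannot drift between the two modes.
from urllib.parse import quote

def build_trigger_request(jenkins_url: str, pipeline: str, params: dict) -> dict:
    """Shape of the request trigger_release_build would send to Jenkins."""
    # Jenkins job URLs encode spaces as %20.
    return {
        "method": "POST",
        "url": f"{jenkins_url}/job/{quote(pipeline)}/buildWithParameters",
        "data": params,
    }

req = build_trigger_request(
    "https://jenkins.internal.example.com",
    "NetLinQ EMS Release pipeline",
    {"VERSION": "7.0"},
)
```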

The five tools:

| Tool | What it does |
| --- | --- |
| `trigger_release_build(version)` | Queues NetLinQ EMS Release pipeline for a version like 7.0 |
| `patch_repository(repo, tag)` | Queues Patch Single Repository Pipeline for one repo at an existing tag |
| `get_build_status(pipeline, build_number?)` | Latest or specific build's result, duration, parameters |
| `list_recent_builds(pipeline, limit?)` | History (newest first) |
| `tail_build_log(pipeline, build_number, n_lines?)` | Last N lines of console output |
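The log tool, for instance, only needs Jenkins' consoleText endpoint (`/job/<name>/<number>/consoleText`) plus a line slice. An illustrative core (the real implementation lives in tools.py):

```python
# Keep only the tail of a build's console output (illustrative sketch).
def tail(console_text: str, n_lines: int = 50) -> str:
    return "\n".join(console_text.splitlines()[-n_lines:])
```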


Quickstart - MCP in Cursor

Full walkthrough: docs/CURSOR_MCP.md. Short version:

  1. Generate a Jenkins API token at <JENKINS_URL>/me/configure -> Add new Token.

  2. Install uv: pipx install uv

  3. Edit ~/.cursor/mcp.json (Windows: %USERPROFILE%\.cursor\mcp.json):

    {
      "mcpServers": {
        "netlinq-jenkins": {
          "command": "uvx",
          "args": [
            "--from",
            "git+ssh://git@github.com/<your-org>/netlinq-jenkins-mcp.git@main",
            "netlinq-jenkins-mcp"
          ],
          "env": {
            "JENKINS_URL": "https://jenkins.internal.example.com",
            "JENKINS_USER": "your-user",
            "JENKINS_TOKEN": "your-api-token"
          }
        }
      }
    }
  4. Restart Cursor. Look for the green dot next to netlinq-jenkins in Settings -> MCP.

  5. In the chat, try: "build 7.0 release package". The agent will confirm before actually triggering Jenkins.


Quickstart - team chat UI (Docker)

git clone git@github.com:<your-org>/netlinq-jenkins-mcp.git
cd netlinq-jenkins-mcp
cp .env.example .env
# edit .env: JENKINS_*, LLM_*, WEB_USERS

# Create at least one web user. Passwords MUST be stored as bcrypt hashes.
python -c "from passlib.hash import bcrypt; print('alice:' + bcrypt.hash('secret123'))"
# paste the line into WEB_USERS=

docker compose up -d --build
# browse http://<host>:8000 - log in with alice / secret123
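The WEB_USERS value is a comma-separated list of user:hash pairs. A sketch of how such a string could be parsed (illustrative - the app's actual parsing may differ; at login it would then check the password with something like passlib's `bcrypt.verify`):

```python
# Illustrative parser for the documented WEB_USERS format:
#   "user1:bcrypt-hash,user2:bcrypt-hash"
# bcrypt hashes ($2b$12$...) never contain ':' or ',', so simple splitting works.
def parse_web_users(raw: str) -> dict:
    users = {}
    for entry in raw.split(","):
        entry = entry.strip()
        if not entry:
            continue
        username, _, pw_hash = entry.partition(":")  # split at the first ':' only
        users[username] = pw_hash
    return users

users = parse_web_users("alice:$2b$12$abc,bob:$2b$12$def")
```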

What the team sees:

  • Chat input at the bottom, conversation transcript in the middle.

  • Live "recent builds" panels for both pipelines on the right, polled every 5s.

  • Tool-call cards expand inline so people can see exactly what the bot is doing.

  • A "Reset" button in the header clears the agent's memory.


Local dev (no Docker)

# Python side
python -m venv .venv
.\.venv\Scripts\Activate.ps1     # PowerShell
# or:  source .venv/bin/activate  # bash
pip install -e ".[dev]"

# Frontend side (only needed for the web mode)
cd ui
npm install
npm run build       # writes ui/dist/, which web.py auto-serves
cd ..

# Run the web app
netlinq-jenkins-web
# or, with auto-reload:
uvicorn netlinq_jenkins.web:create_app --factory --reload --port 8000

# Or run as MCP (stdio - the way Cursor will spawn it)
netlinq-jenkins-mcp

# Run tests
pytest

Configuration reference

All settings come from environment variables (or a .env file). See .env.example for the canonical list.

| Variable | Default | Purpose |
| --- | --- | --- |
| `JENKINS_URL` | required | Base URL of the Jenkins controller |
| `JENKINS_USER` | required | Service-account username |
| `JENKINS_TOKEN` | required | API token (preferred) or password |
| `JENKINS_CA_BUNDLE` | empty | Path to a CA bundle for self-signed TLS, or `false` to skip verification |
| `RELEASE_PIPELINE_NAME` | NetLinQ EMS Release pipeline | Override if your job is named differently |
| `RELEASE_VERSION_PARAM` | VERSION | The job's version-parameter name |
| `PATCH_PIPELINE_NAME` | Patch Single Repository Pipeline | Patch pipeline name |
| `PATCH_REPO_PARAM` | REPO | Patch pipeline repo-parameter name |
| `PATCH_TAG_PARAM` | TAG | Patch pipeline tag-parameter name |
| `LLM_PROVIDER` | openai | Informational - LiteLLM picks based on `LLM_MODEL` |
| `LLM_MODEL` | gpt-4o | Any LiteLLM-supported model string |
| `LLM_API_KEY` | - | Provider key (web mode only) |
| `LLM_API_BASE` | - | For Azure / Ollama / self-hosted endpoints |
| `WEB_HOST` | 0.0.0.0 | FastAPI bind host |
| `WEB_PORT` | 8000 | FastAPI bind port |
| `WEB_USERS` | empty | `user1:bcrypt-hash,user2:bcrypt-hash` for web Basic auth |
| `WEB_API_SHARED_SECRET` | - | Optional `X-API-Secret` header value for `/api/*` |
| `AUDIT_LOG_PATH` | audit.jsonl | JSONL file every tool call is appended to |
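JENKINS_CA_BUNDLE maps naturally onto httpx's `verify` argument, which accepts True (default verification), False (skip), or a CA bundle path. An illustrative resolver (not the project's actual code):

```python
# Map JENKINS_CA_BUNDLE ("" | path | "false") to an httpx `verify` value.
def resolve_verify(ca_bundle: str):
    if ca_bundle.strip().lower() == "false":
        return False            # explicitly skip TLS verification
    return ca_bundle or True    # CA bundle path, or default verification

# jenkins_client.py could then construct its client roughly as:
#   httpx.AsyncClient(verify=resolve_verify(settings.jenkins_ca_bundle))
```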


Discovering Jenkins parameter names

If VERSION / REPO / TAG aren't your real parameter names, ask Jenkins:

curl -s -u "$JENKINS_USER:$JENKINS_TOKEN" \
  "$JENKINS_URL/job/NetLinQ%20EMS%20Release%20pipeline/api/json?tree=property[parameterDefinitions[name,type,defaultParameterValue[value]]]" \
  | jq

Then override the relevant *_PARAM env var in .env or in your Cursor mcp.json.


LLM provider tips

The web mode uses LiteLLM so you can swap providers purely by env var. Common combos:

| Provider | LLM_MODEL | LLM_API_KEY | LLM_API_BASE |
| --- | --- | --- | --- |
| OpenAI | `gpt-4o` | sk-... | - |
| Anthropic | `claude-sonnet-4-5` | sk-ant-... | - |
| Azure OpenAI | `azure/<deployment>` | Azure key | https://<resource>.openai.azure.com |
| Ollama (local) | `ollama/llama3.1` | - | http://localhost:11434 |
| OpenAI-compatible | `openai/<model>` | key | https://your.host/v1 |

The MCP-in-Cursor mode does not need any of this - Cursor's own model drives the conversation and just calls our tools.
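For example, a .env pointed at a local Ollama instance needs only two of these variables (values from the table above):

```
LLM_MODEL=ollama/llama3.1
LLM_API_BASE=http://localhost:11434
# LLM_API_KEY can stay unset for local Ollama
```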


Security notes

  • .env is git-ignored - secrets never leave the host.

  • Web mode requires HTTP Basic auth (bcrypt-hashed in WEB_USERS).

  • Optional WEB_API_SHARED_SECRET adds a header-based second factor on /api/*, meant for "behind a reverse proxy" deployments.

  • No inbound internet traffic is required - the app only reaches out to Jenkins.

  • API tokens are preferred over passwords: tokens skip the CSRF crumb dance and are easier to revoke.

  • Inputs are validated with strict regexes (version, repo, tag) before any HTTP call goes out, so a chatty LLM cannot smuggle shell metacharacters.

  • Every tool invocation is appended to the audit log (see below).
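As an illustration of the allow-list idea (the patterns here are examples, not the project's exact regexes, which live in tools.py):

```python
import re

# Example allow-list patterns: anything outside them is rejected before
# a request is ever built, regardless of what the LLM asked for.
VERSION_RE = re.compile(r"\d+\.\d+(\.\d+)?")   # e.g. 7.0 or 7.0.3
NAME_RE = re.compile(r"[A-Za-z0-9._-]+")       # repo and tag names

def validate_patch_args(repo: str, tag: str) -> None:
    if not (NAME_RE.fullmatch(repo) and NAME_RE.fullmatch(tag)):
        raise ValueError("repo/tag contains disallowed characters")

validate_patch_args("blinq-ems-charts", "7.0.3")  # passes
```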


Audit log

Every successful trigger writes a JSONL line to ${AUDIT_LOG_PATH}:

{"ts": "2026-05-06T20:30:11+00:00", "event": "trigger",
 "pipeline": "NetLinQ EMS Release pipeline",
 "parameters": {"VERSION": "7.0"},
 "queue_url": "https://jenkins.internal.example.com/queue/item/812/"}

In Docker mode the file is bind-mounted at ./logs/audit.jsonl on the host.
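Appending such a record takes only a few lines of stdlib Python; a sketch matching the fields shown above:

```python
import datetime
import json

def append_audit(path: str, pipeline: str, parameters: dict, queue_url: str) -> None:
    """Append one audit record in the format shown above (illustrative sketch)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(timespec="seconds"),
        "event": "trigger",
        "pipeline": pipeline,
        "parameters": parameters,
        "queue_url": queue_url,
    }
    # Append-only, one JSON object per line (JSONL).
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```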


Project layout

netlinq-jenkins-mcp/
├── src/netlinq_jenkins/
│   ├── config.py          # pydantic-settings
│   ├── jenkins_client.py  # async httpx wrapper, crumb handling
│   ├── tools.py           # 5 tool functions, used by both modes
│   ├── llm.py             # LiteLLM tool-calling agent (web mode only)
│   ├── mcp_server.py      # FastMCP stdio entrypoint (Cursor)
│   └── web.py             # FastAPI app + serves the bundled UI
├── ui/                    # Vite + React + Tailwind chat UI
│   ├── src/App.tsx        # main chat layout
│   └── src/components/    # ToolCard, BuildsPanel
├── tests/                 # pytest + pytest-httpx
├── docs/CURSOR_MCP.md     # detailed Cursor integration guide
├── examples/cursor-mcp.json
├── Dockerfile             # multi-stage: builds UI, then Python wheel
├── docker-compose.yml
├── .env.example
└── pyproject.toml

Roadmap / next steps

  • OIDC / SSO instead of HTTP Basic for the web UI.

  • Slack-bot adapter that forwards /build 7.0 slash commands into the same tools.

  • Optional read-only mode (READ_ONLY=true) that disables the trigger tools.

  • WebSocket log tail in the UI instead of the polling sidebar.

  • Per-user audit log instead of one global file.
