Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@watsonx MCP Server generate a summary of this quarterly report using Granite".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
watsonx MCP Server
MCP server for IBM watsonx.ai integration with Claude Code. Enables Claude to delegate tasks to foundation models hosted on watsonx.ai (IBM Granite, Llama, Mistral, etc.).
Features
Text Generation - Generate text using watsonx.ai foundation models
Chat - Have conversations with watsonx.ai chat models
Embeddings - Generate text embeddings
Model Listing - List all available foundation models
Available Tools
| Tool | Description |
| --- | --- |
| `watsonx_generate` | Generate text using watsonx.ai models |
| `watsonx_chat` | Chat with watsonx.ai models |
| `watsonx_embed` | Generate text embeddings |
| `watsonx_list_models` | List available models |
Setup
1. Install Dependencies
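Assuming a standard Node.js layout with the package.json listed under Files at the repository root:

```bash
npm install
```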
2. Configure Environment
Set these environment variables:
Note: Either WATSONX_SPACE_ID or WATSONX_PROJECT_ID is required for text generation, embeddings, and chat. Deployment spaces are recommended as they have Watson Machine Learning (WML) pre-configured.
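A minimal sketch of the expected shell setup. WATSONX_SPACE_ID and WATSONX_PROJECT_ID are named in the note above; WATSONX_API_KEY and WATSONX_URL are assumed names for the IBM Cloud API key and the regional watsonx.ai endpoint:

```bash
# IBM Cloud API key and regional watsonx.ai endpoint (variable names assumed)
export WATSONX_API_KEY="your-ibm-cloud-api-key"
export WATSONX_URL="https://us-south.ml.cloud.ibm.com"

# One of these two is required (see note above); spaces are recommended
export WATSONX_SPACE_ID="your-deployment-space-id"
# export WATSONX_PROJECT_ID="your-project-id"
```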
3. Add to Claude Code
The MCP server is already configured in ~/.claude.json:
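A representative ~/.claude.json fragment; the server key, install path, and env values below are placeholders, so adjust them to your checkout:

```json
{
  "mcpServers": {
    "watsonx": {
      "command": "node",
      "args": ["/path/to/watsonx-mcp-server/index.js"],
      "env": {
        "WATSONX_API_KEY": "your-ibm-cloud-api-key",
        "WATSONX_SPACE_ID": "your-deployment-space-id"
      }
    }
  }
}
```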
Usage
Once configured, Claude can use watsonx.ai tools:
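A few illustrative prompts (phrasing is up to you; Claude selects the matching tool):

```
Use watsonx to draft a product announcement with Granite.
Ask watsonx: what are Granite models best suited for?
Generate embeddings for the paragraphs in notes.txt.
List the models available on watsonx.ai.
```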
Available Models
Some notable models available:
`ibm/granite-3-3-8b-instruct` - IBM Granite 3.3 8B (recommended)
`ibm/granite-13b-chat-v2` - IBM Granite chat model
`ibm/granite-3-8b-instruct` - Granite 3 instruct model
`meta-llama/llama-3-70b-instruct` - Meta's Llama 3 70B
`mistralai/mistral-large` - Mistral AI large model
`ibm/slate-125m-english-rtrvr-v2` - Embedding model
Use `watsonx_list_models` to see all available models.
Architecture
Two-Agent System
This enables a two-agent architecture where:
Claude (Opus 4.5) - Primary reasoning agent, handles complex tasks
watsonx.ai - Secondary agent for specific workloads
Claude can delegate tasks to watsonx.ai when (see the sketch after this list):
IBM-specific model capabilities are needed
Batch inference needs to run on enterprise data
A specialized Granite model suits the task
Embeddings are needed for RAG pipelines
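A minimal sketch of what a delegated generation call boils down to on the wire, assuming Node 18+ (global fetch) and the environment variable names from Setup. The IAM token exchange and text-generation endpoint are the public watsonx.ai REST API; error handling is omitted:

```javascript
// Exchange an IBM Cloud API key for a short-lived IAM bearer token.
async function iamToken(apiKey) {
  const res = await fetch("https://iam.cloud.ibm.com/identity/token", {
    method: "POST",
    body: new URLSearchParams({
      grant_type: "urn:ibm:params:oauth:grant-type:apikey",
      apikey: apiKey,
    }),
  });
  return (await res.json()).access_token;
}

// Forward a prompt to the watsonx.ai text-generation endpoint.
async function generate(prompt) {
  const token = await iamToken(process.env.WATSONX_API_KEY);
  const res = await fetch(
    `${process.env.WATSONX_URL}/ml/v1/text/generation?version=2023-05-29`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model_id: "ibm/granite-3-3-8b-instruct",
        input: prompt,
        parameters: { max_new_tokens: 200 },
        space_id: process.env.WATSONX_SPACE_ID,
      }),
    }
  );
  const data = await res.json();
  return data.results[0].generated_text;
}
```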
IBM Cloud Resources
This MCP server uses:
Service: watsonx.ai Studio (data-science-experience)
Plan: Lite (free tier)
Region: us-south
Create your own watsonx.ai project and deployment space in IBM Cloud.
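If you prefer the CLI to the console, something along these lines creates the instance; the Lite plan's programmatic name varies by service, so check the catalog first:

```bash
# Inspect available plans for the watsonx.ai Studio service
ibmcloud catalog service data-science-experience

# Create a Lite instance in us-south (plan name assumed; use the one listed above)
ibmcloud resource service-instance-create my-watsonx data-science-experience free-v1 us-south
```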
Integration with IBM Z MCP Server
This watsonx MCP server works alongside the IBM Z MCP server:
Demo scripts in the `ibmz-mcp-server` repo:
`demo-full-stack.js` - Full 5-service pipeline
`demo-rag.js` - RAG with watsonx embeddings + Granite
Document Analyzer
The document analyzer (`document-analyzer.js`) provides tools for analyzing the data on your external drive using watsonx.ai:
Commands
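The exact subcommands live in `document-analyzer.js`; a representative invocation pattern, assuming subcommand names that mirror the features below, is:

```bash
# Subcommand names are hypothetical; see document-analyzer.js for the real ones
node document-analyzer.js summarize /path/to/document.txt
node document-analyzer.js analyze /path/to/document.txt
node document-analyzer.js ask /path/to/document.txt "What is the main conclusion?"
node document-analyzer.js embed /path/to/document.txt
node document-analyzer.js search "deployment architecture"
```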
Features
Summarization: Generate concise summaries of any document
Analysis: Extract document type, topics, entities, and sentiment
Q&A: Ask natural language questions about document content
Embeddings: Generate 768-dimensional vectors for semantic search
Semantic Search: Find similar documents using vector similarity (see the sketch below)
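The semantic search step reduces to ranking documents by cosine similarity between 768-dimensional embedding vectors. A minimal sketch of that ranking (function and field names are illustrative, not the tool's actual API):

```javascript
// Cosine similarity between two equal-length vectors
// (plain arrays of 768 numbers, as produced by the embedding step).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank an index of { path, vector } entries against a query embedding.
function topK(queryVec, index, k = 5) {
  return index
    .map((doc) => ({ path: doc.path, score: cosineSimilarity(queryVec, doc.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```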
Demo
Run the full demo:
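Assuming `demo-external-drive.sh`, listed under Files as the demo script, is the entry point:

```bash
./demo-external-drive.sh
```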
Embedding Index & RAG
The `embedding-index.js` tool provides semantic search and RAG (retrieval-augmented generation):
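The typical flow is index-then-query; the subcommand names and path below are hypothetical, so check `embedding-index.js` for the real ones:

```bash
# Build the vector index over a directory of documents (names hypothetical)
node embedding-index.js build /Volumes/ExternalDrive/docs

# Retrieve similar chunks and answer with Granite (RAG)
node embedding-index.js ask "What were the Q3 revenue drivers?"
```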
Batch Processor
The `batch-processor.js` tool processes multiple documents at once:
Categories: technical, business, creative, personal, code, legal, marketing, educational, other
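A representative invocation, assuming the tool takes a directory and sorts each file into the categories above (the path is hypothetical):

```bash
node batch-processor.js /Volumes/ExternalDrive/docs
```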
Files
`index.js` - MCP server implementation
`document-analyzer.js` - Document analysis CLI tool
`embedding-index.js` - Embedding index and RAG tool
`batch-processor.js` - Batch document processor
`demo-external-drive.sh` - Demo script
`package.json` - Dependencies
`README.md` - This file
Author
Matthew Karsten
License
MIT