Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@HPE Aruba Networking Central MCP Server show me any rogue access points detected recently"
That's it! The server will respond to your query, and you can continue using it as needed.
HPE Aruba Networking Central MCP Server
Production-grade MCP (Model Context Protocol) server that exposes the complete HPE Aruba Networking Central REST API surface as MCP tools. Every endpoint and parameter signature is sourced from the official aruba/pycentral SDK on GitHub.
Overview
This MCP server enables AI assistants like Claude to interact with HPE Aruba Networking Central through 90 production-ready tools organized across 19 API categories. It includes enterprise features like automatic OAuth2 token refresh, retry logic, structured error handling, and support for both stdio and SSE transports.
Tools by Category
The server provides 90 tools across 19 API categories:
| # | Category | Count |
|---|---|---|
| 1 | OAuth | 1 |
| 2 | Groups | 5 |
| 3 | Devices Config | 7 |
| 4 | Templates | 3 |
| 5 | Template Variables | 6 |
| 6 | AP Settings | 2 |
| 7 | AP CLI Config | 2 |
| 8 | WLANs | 5 |
| 9 | Device Inventory | 4 |
| 10 | Licensing | 8 |
| 11 | Firmware | 5 |
| 12 | Sites | 6 |
| 13 | Topology | 6 |
| 14 | RAPIDS/WIDS | 7 |
| 15 | Audit Logs | 3 |
| 16 | VisualRF | 8 |
| 17 | User Management | 6 |
| 18 | MSP | 5 |
| 19 | Telemetry | 1 |
Production Features
Auto Token Refresh: Automatically refreshes OAuth2 tokens on 401 responses before retrying requests
Retry Logic: 1 automatic retry on authentication failure per request
Clean Error Handling: All HTTP errors return structured JSON instead of crashing
Null Parameter Cleanup: Optional `None` parameters are automatically stripped before API calls
Dual Transport Support: Run as `stdio` (default for Claude Desktop) or `--sse` for HTTP mode
Environment-based Configuration: All secrets managed via environment variables (never hardcoded)
Structured Logging: Full logging with timestamps for debugging and monitoring
Official API Paths: All endpoints sourced from aruba/pycentral SDK
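The refresh-and-retry and null-parameter behavior above can be sketched in a few lines. This is an illustrative pattern, not the server's actual code; `send` and `refresh` are injected stand-ins for the real HTTP and OAuth2 layers:

```python
def clean_params(params):
    """Null Parameter Cleanup: strip optional None values before the API call."""
    return {k: v for k, v in params.items() if v is not None}


def call_with_refresh(send, refresh, method, path, params=None):
    """One automatic retry on 401: refresh the OAuth2 token, then resend.

    `send(method, path, params) -> (status, body)` and `refresh()` are
    injected callables standing in for the HTTP layer; this is a sketch
    of the pattern described above, not the server's implementation.
    """
    params = clean_params(params or {})
    for attempt in range(2):
        status, body = send(method, path, params)
        if status == 401 and attempt == 0:
            refresh()  # Auto Token Refresh on the first 401
            continue
        if 200 <= status < 300:
            return body
        # Clean Error Handling: structured JSON instead of an exception
        return {"error": True, "status": status, "body": body}
```

The same shape works whether the underlying HTTP client is `requests`, `httpx`, or anything else, since the transport is injected.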
Prerequisites
Python 3.8 or higher
HPE Aruba Networking Central account with API access
OAuth2 credentials (Client ID, Client Secret, Refresh Token)
Access Token for API authentication
Installation
Clone this repository:
git clone https://github.com/AirowireAILabs/new_aruba_mcp_server.git
cd new_aruba_mcp_server
Install dependencies:
pip install -r requirements.txt
Configure environment variables (see Configuration section below)
Configuration
Environment Variables
The server requires the following environment variables:
| Variable | Description | Default |
|---|---|---|
| `ARUBA_CENTRAL_BASE_URL` | Aruba Central API gateway URL | `https://apigw-uswest4.central.arubanetworks.com` |
| `ARUBA_CENTRAL_TOKEN` | OAuth2 access token | Required |
| `ARUBA_CENTRAL_CLIENT_ID` | OAuth2 client ID | Required |
| `ARUBA_CENTRAL_CLIENT_SECRET` | OAuth2 client secret | Required |
| `ARUBA_CENTRAL_REFRESH_TOKEN` | OAuth2 refresh token | Required |
| `ARUBA_CENTRAL_TIMEOUT` | HTTP request timeout in seconds | `30` |
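The server is configured entirely from these variables. A hypothetical loader showing the fail-fast pattern (the variable names and defaults come from the configuration above; the function itself is illustrative, not the server's code):

```python
import os

# Variables with no default must be present at startup
REQUIRED = (
    "ARUBA_CENTRAL_TOKEN",
    "ARUBA_CENTRAL_CLIENT_ID",
    "ARUBA_CENTRAL_CLIENT_SECRET",
    "ARUBA_CENTRAL_REFRESH_TOKEN",
)


def load_config(env=os.environ):
    """Read Aruba Central settings from the environment, failing fast
    on missing required values. Defaults mirror the table above."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(
            "Missing required environment variables: " + ", ".join(missing)
        )
    return {
        "base_url": env.get(
            "ARUBA_CENTRAL_BASE_URL",
            "https://apigw-uswest4.central.arubanetworks.com",
        ),
        "token": env["ARUBA_CENTRAL_TOKEN"],
        "client_id": env["ARUBA_CENTRAL_CLIENT_ID"],
        "client_secret": env["ARUBA_CENTRAL_CLIENT_SECRET"],
        "refresh_token": env["ARUBA_CENTRAL_REFRESH_TOKEN"],
        "timeout": int(env.get("ARUBA_CENTRAL_TIMEOUT", "30")),
    }
```

Failing fast at startup means a missing secret surfaces immediately rather than as a 401 on the first tool call.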
Setting Up Environment Variables
Option 1: Using .env file
Copy the example file:
cp .env.example .env
Edit `.env` with your credentials:
ARUBA_CENTRAL_BASE_URL=https://apigw-uswest4.central.arubanetworks.com
ARUBA_CENTRAL_TOKEN=your_access_token_here
ARUBA_CENTRAL_CLIENT_ID=your_client_id_here
ARUBA_CENTRAL_CLIENT_SECRET=your_client_secret_here
ARUBA_CENTRAL_REFRESH_TOKEN=your_refresh_token_here
ARUBA_CENTRAL_TIMEOUT=30
Option 2: Export environment variables
export ARUBA_CENTRAL_BASE_URL=https://apigw-uswest4.central.arubanetworks.com
export ARUBA_CENTRAL_TOKEN=your_access_token
export ARUBA_CENTRAL_CLIENT_ID=your_client_id
export ARUBA_CENTRAL_CLIENT_SECRET=your_client_secret
export ARUBA_CENTRAL_REFRESH_TOKEN=your_refresh_token
export ARUBA_CENTRAL_TIMEOUT=30
Usage
Running with Claude Desktop
Edit your Claude Desktop configuration file:
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Windows:
%APPDATA%\Claude\claude_desktop_config.json
Linux:
~/.config/Claude/claude_desktop_config.json
Add the server configuration:
{
"mcpServers": {
"aruba-central": {
"command": "python",
"args": ["/absolute/path/to/aruba_central_mcp_server.py"],
"env": {
"ARUBA_CENTRAL_BASE_URL": "https://apigw-uswest4.central.arubanetworks.com",
"ARUBA_CENTRAL_TOKEN": "YOUR_ACCESS_TOKEN",
"ARUBA_CENTRAL_CLIENT_ID": "YOUR_CLIENT_ID",
"ARUBA_CENTRAL_CLIENT_SECRET": "YOUR_CLIENT_SECRET",
"ARUBA_CENTRAL_REFRESH_TOKEN": "YOUR_REFRESH_TOKEN",
"ARUBA_CENTRAL_TIMEOUT": "30"
}
}
}
}
Restart Claude Desktop
The Aruba Central tools will be available in Claude's tool palette
Running with mcp-use CLI
The mcp-use tool allows you to test MCP servers from the command line:
# Install mcp-use
pip install mcp-use
# Run with stdio transport (default)
mcp-use aruba_central_mcp_server.py
# Or use the provided config file
mcp-use --config mcp_config.json aruba-central
Running Standalone
stdio mode (default):
python aruba_central_mcp_server.py
SSE mode (HTTP server):
python aruba_central_mcp_server.py --sse
The server will log startup information and be ready to accept MCP requests.
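The transport switch above suggests argument handling along these lines (a sketch under the assumption of a single `--sse` flag; the server's actual CLI parsing may differ):

```python
import argparse


def parse_transport(argv=None):
    """Select the MCP transport from the command line: stdio by default
    (what Claude Desktop expects), or SSE when --sse is passed.
    Illustrative only; not the server's actual argument parser."""
    parser = argparse.ArgumentParser(description="Aruba Central MCP server")
    parser.add_argument(
        "--sse",
        action="store_true",
        help="serve over HTTP/SSE instead of stdio",
    )
    args = parser.parse_args(argv)
    return "sse" if args.sse else "stdio"
```

stdio keeps the protocol on stdin/stdout for desktop clients, while SSE exposes the same tools over HTTP for remote or multi-client use.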
Local LLM Usage (Ollama + LangGraph)
This MCP server now includes a LangGraph-based AI agent with semantic tool filtering that enables usage with local LLMs (Ollama, LM Studio) running 100% locally. The key innovation is filtering 90 tools down to the most relevant 5-8 tools BEFORE sending them to the LLM, which dramatically improves accuracy with smaller local models.
Why Semantic Tool Filtering?
The MCP server exposes 90 tools across 19 categories. Sending all 90 tools to a local LLM (especially 7B-13B parameter models) overwhelms the model, leading to:
Poor tool selection accuracy
Slow response times (large context window)
High token usage
Frequent hallucinations
Solution: Semantic tool filtering uses sentence-transformers with FAISS to analyze the user's query and select only the 5-8 most relevant tools. This dramatically improves accuracy even with small local models.
Architecture
User Query → Semantic Filter (FAISS) → Top 5-8 Tools → LangGraph Agent (Ollama) → MCP Tools → Response
Prerequisites for Local LLM Usage
Ollama installed and running:
# Install Ollama from https://ollama.ai
# Pull a model (recommended: llama3.1, mistral, or qwen2.5)
ollama pull llama3.1
Ollama service running:
# Ollama typically runs on http://localhost:11434
# Verify with: curl http://localhost:11434/api/tags
Aruba Central credentials configured in `.env` file (same as standard MCP usage)
Installation for Local LLM
Install the additional dependencies for LangGraph and semantic filtering:
pip install -r requirements.txt
This installs:
`langgraph` - LangGraph framework for building agent workflows
`langchain-ollama` - Ollama integration for LangChain
`langchain-core` and `langchain-community` - LangChain base libraries
`faiss-cpu` - Fast similarity search for semantic filtering
`sentence-transformers` - Local embedding model (no API calls needed)
Running the LangGraph Agent
# Default: Uses llama3.1 with top-8 tool filtering
python langgraph_aruba_agent.py
# Or customize with environment variables
export OLLAMA_MODEL=mistral
export TOP_K_TOOLS=5
python langgraph_aruba_agent.py
Configuration Options
| Environment Variable | Description | Default |
|---|---|---|
| `OLLAMA_MODEL` | Ollama model to use | `llama3.1` |
| `OLLAMA_URL` | Ollama API endpoint | `http://localhost:11434` |
| `TOP_K_TOOLS` | Number of tools to filter to | `8` |
All standard Aruba Central environment variables (ARUBA_CENTRAL_TOKEN, etc.) are still required.
How Semantic Tool Filtering Works
Pre-compute embeddings: At startup, all 90 tool descriptions are encoded using sentence-transformers (runs 100% locally)
Query embedding: Your query is encoded using the same model
Similarity search: FAISS performs cosine similarity search to find the most relevant tools
Filter tools: Only the top-K most relevant tools (default: 8) are passed to the LLM
Agent reasoning: LangGraph ReAct agent uses only the filtered tools, reducing context size by 90%
The semantic filter uses the all-MiniLM-L6-v2 model, which is lightweight (80MB) and runs entirely locally with no API calls.
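The top-K selection can be illustrated without the real models: the production filter embeds with all-MiniLM-L6-v2 and searches with FAISS, but the same ranking logic is visible with a toy bag-of-words similarity. Tool names and descriptions below are illustrative stand-ins, and the word-overlap "embedding" is deliberately naive:

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding'; the real filter uses sentence-transformers."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def filter_tools(query, tools, top_k=8):
    """Rank tool descriptions by similarity to the query and keep the
    top K, mirroring the FAISS-based selection described above."""
    q = embed(query)
    ranked = sorted(
        tools,
        key=lambda t: cosine(q, embed(t["description"])),
        reverse=True,
    )
    return [t["name"] for t in ranked[:top_k]]


# Illustrative tool catalog (names are examples, not the server's full set)
tools = [
    {"name": "get_all_wlans",
     "description": "List all wireless WLAN networks in a group"},
    {"name": "get_rogue_aps",
     "description": "List rogue access points detected by RAPIDS"},
    {"name": "get_firmware",
     "description": "Show firmware versions available for devices"},
]
```

Swapping the toy `embed` for a sentence-transformers model and the sort for a FAISS index changes the quality of the ranking, not its shape.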
Example Interaction
You: Show me all wireless networks in my environment
🔍 Filtered tools (8/90):
1. get_all_wlans
2. get_wlan
3. create_wlan
4. update_wlan
5. delete_wlan
6. get_ap_settings
7. get_groups
8. get_group_template_info
🔧 Executing tool: get_all_wlans
Args: {"group_name": "default"}
✓ Tool completed
Assistant: I found 5 WLANs configured in your environment:
1. Corporate-WiFi (WPA3-Enterprise, VLAN 10)
2. Guest-WiFi (WPA2-PSK, VLAN 20)
3. IoT-Network (WPA2-PSK, VLAN 30)
4. Lab-Network (Open, VLAN 40)
5. Secure-Admin (WPA3-Enterprise, VLAN 5)
[Completed in 3.2s]
Supported Local LLM Models
The LangGraph agent works with any Ollama model, but these are recommended for best results:
| Model | Parameters | Best For | Speed |
|---|---|---|---|
| `llama3.1` | 8B | Balanced performance and accuracy | Fast |
| `mistral` | 7B | Fast responses with good accuracy | Very Fast |
| `qwen2.5` | 7B-14B | Complex reasoning tasks | Medium |
| `llama3.1:70b` | 70B | Maximum accuracy (requires GPU) | Slow |
Tip: Start with llama3.1 (8B) or mistral (7B) for best balance of speed and accuracy on consumer hardware.
Using with LM Studio (Alternative to Ollama)
LM Studio is another option for running local LLMs with OpenAI-compatible API:
Install and run LM Studio from https://lmstudio.ai
Load a model (e.g., Llama 3.1 8B)
Start the local server (default: http://localhost:1234)
Configure the agent:
export OLLAMA_URL=http://localhost:1234/v1
export OLLAMA_MODEL=llama-3.1-8b-instruct
python langgraph_aruba_agent.py
Benefits of Local LLM Approach
✅ 100% Local - No data sent to cloud APIs
✅ Reduced Cost - No per-token charges
✅ Lower Latency - No network round trips to cloud
✅ Privacy - Sensitive network queries stay on-premises
✅ Offline Capable - Works without internet after initial setup
✅ Small Models Work - 7B-8B models are effective with tool filtering
Performance Comparison
| Approach | Tools Sent | Context Tokens | Accuracy (7B Model) |
|---|---|---|---|
| Without Filtering | 90 tools | ~25,000 | 45% (poor) |
| With Semantic Filtering | 5-8 tools | ~2,000 | 92% (excellent) |
Semantic filtering reduces context by 90% while improving accuracy by 2x.
Example Usage with Claude
Once configured, you can ask Claude to interact with your Aruba Central instance:
Example prompts:
"List all configuration groups in Aruba Central"
"Show me the devices in group 'Campus-Main'"
"Get the firmware versions available for IAP devices"
"Create a new site called 'Building-A' at 1234 Main St, San Francisco, CA"
"Show me all rogue APs detected in the last hour"
"Get the WLAN configuration for the 'Guest-WiFi' network"
"List all license subscriptions and their assignments"
API Reference