Zendesk MCP Server (Model Context Protocol)
This project is a lightweight, AI-native MCP (Model Context Protocol) server that integrates with Zendesk's REST APIs. It allows GPT-based AI agents (e.g. OpenAI, LangChain) to fetch real-time customer and organization context dynamically.
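Zendesk's REST API authenticates API-token requests with Basic auth over `email/token:api_token`. A minimal sketch of how such a request can be built in Node (helper names are illustrative, not taken from this repo's code):

```javascript
// Build the auth header for a Zendesk REST API request.
// Zendesk API-token auth uses Basic auth on "email/token:api_token".
function zendeskAuthHeader(email, apiToken) {
  const credentials = Buffer.from(`${email}/token:${apiToken}`).toString('base64');
  return `Basic ${credentials}`;
}

// Build the URL for fetching a single ticket via the v2 REST API.
function zendeskTicketUrl(domain, ticketId) {
  return `https://${domain}/api/v2/tickets/${ticketId}.json`;
}

// Usage sketch (not executed here):
// fetch(zendeskTicketUrl(process.env.ZENDESK_DOMAIN, 12345), {
//   headers: { Authorization: zendeskAuthHeader(process.env.ZENDESK_EMAIL, process.env.ZENDESK_API_TOKEN) },
// });
```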
Features
Accepts `ticket_id`, `user_id`, or `organization_id`
Fetches user, org, and ticket context from Zendesk
Returns:
`summary`: human-readable, LLM-friendly summary
`prompt_context`: single-line string for embedding in an LLM prompt
`context`: structured blocks (text, list)
`prompt_guidance`: usage instructions and few-shot examples
Exposes:
`/context`: main context API
`/meta`: MCP schema metadata
`/function-schema`: OpenAI function-compatible definition
Fully Dockerized and deployable
Compatible with GPT-4 function calling
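Putting the four response fields together, a `/context` response presumably looks something like the following. The values here are illustrative only; the real server populates them from live Zendesk data.

```javascript
// Illustrative shape of a /context response, based on the field list above.
// Values are made up; they are not actual server output.
const exampleContextResponse = {
  summary: 'Alice Smith is a Premium customer under Acme Corp with 3 recent tickets.',
  prompt_context: 'User: Alice Smith | Org: Acme Corp | Latest ticket: Login timeout (open)',
  context: [
    { type: 'text', text: 'User: Alice Smith' },
    { type: 'list', items: ['Ticket 12345: Login timeout (open)'] },
  ],
  prompt_guidance: 'Embed prompt_context verbatim when constructing an LLM prompt.',
};
```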
Getting Started
1. Clone and install dependencies
git clone https://github.com/your-repo/zendesk-mcp-server
cd zendesk-mcp-server
npm install

2. Set up .env
ZENDESK_DOMAIN=your-subdomain.zendesk.com
ZENDESK_EMAIL=your-email@yourdomain.com
ZENDESK_API_TOKEN=your_zendesk_api_token
PORT=3000

3. Run Locally
node index.js

Visit:
http://localhost:3000/context
http://localhost:3000/meta
http://localhost:3000/function-schema
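With the server running, `/context` can be queried from Node. The sketch below only builds the request URL; the query parameter names are assumed to mirror the accepted ID fields listed under Features.

```javascript
// Build the /context URL for a given ID type. The parameter names
// (ticket_id, user_id, organization_id) are assumptions based on the
// accepted fields, not confirmed against the server code.
function contextUrl(base, idField, idValue) {
  const url = new URL('/context', base);
  url.searchParams.set(idField, String(idValue));
  return url.toString();
}

console.log(contextUrl('http://localhost:3000', 'ticket_id', 12345));
// → http://localhost:3000/context?ticket_id=12345

// Usage sketch with Node 18+ global fetch (not executed here):
// fetch(contextUrl('http://localhost:3000', 'ticket_id', 12345))
//   .then((res) => res.json())
//   .then((ctx) => console.log(ctx.summary));
```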
Docker Support
Build Image
docker build -t zendesk-mcp .

Run Container
docker run -p 3000:3000 \
-e ZENDESK_DOMAIN=your-subdomain.zendesk.com \
-e ZENDESK_EMAIL=your-email \
-e ZENDESK_API_TOKEN=your-token \
zendesk-mcp

Function Calling with OpenAI (Example)
See openai-client.js for an example where:
GPT-4 automatically detects and calls `get_ticket_context`
The function calls your local MCP server
GPT writes a natural reply using the returned context
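The definition served by `/function-schema` presumably follows the standard OpenAI function-calling format. A hedged sketch, assuming a single required `ticket_id` parameter (the real schema may expose more):

```javascript
// A plausible OpenAI function definition for get_ticket_context.
// The actual schema is served by /function-schema; this is a sketch,
// not a copy of it.
const getTicketContextFn = {
  name: 'get_ticket_context',
  description: 'Fetch Zendesk user, organization, and ticket context for a ticket.',
  parameters: {
    type: 'object',
    properties: {
      ticket_id: { type: 'integer', description: 'The Zendesk ticket ID' },
    },
    required: ['ticket_id'],
  },
};
```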
Simulating a Full Chat Conversation
The previous example demonstrated GPT-4 calling the MCP server via function calling. The next step is to simulate a full conversation where:
A user asks something natural like:
“Can you give me context for ticket 12345?”
GPT-4 figures out it needs to call `get_ticket_context`
GPT-4 calls your MCP server automatically
GPT-4 uses the result to reply in a natural, chat-style response
The script below builds exactly that: an OpenAI agent loop that mimics how GPT-4 with tools (functions) behaves in production.
✅ Step-by-Step: Full Chat-Based OpenAI Agent with Function Calling
✨ Final Output Looks Like:
User: Can you give me context for ticket 12345?
GPT: Sure! Here's what I found:
Alice Smith is a Premium customer under Acme Corp. She submitted 3 tickets recently. The latest ticket is titled "Login timeout" and is currently open.

What This Script Does:
Sends a natural user message to GPT-4
GPT-4 detects your function and calls it with a `ticket_id`
You send that to your MCP server
You feed the MCP server's context result back to GPT-4
GPT-4 writes a human-style response using the result
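The five steps above can be sketched as a single agent loop. In this sketch, `callOpenAI` and `callMcpServer` are hypothetical stand-ins for the real HTTP calls made in the script, not functions from this repo; `parseFunctionCall` handles the fact that the Chat Completions API returns function arguments as a JSON string.

```javascript
// Extract the function call (if any) from an assistant message; the
// arguments arrive as a JSON string and must be parsed.
function parseFunctionCall(message) {
  if (!message.function_call) return null;
  return {
    name: message.function_call.name,
    args: JSON.parse(message.function_call.arguments),
  };
}

// Agent loop sketch. callOpenAI and callMcpServer are hypothetical
// stand-ins for the real network calls.
async function agentLoop(userMessage, callOpenAI, callMcpServer) {
  const messages = [{ role: 'user', content: userMessage }];
  const first = await callOpenAI(messages);      // GPT-4 may request a function call
  const call = parseFunctionCall(first);
  if (!call) return first.content;               // plain answer, no tool needed
  const context = await callMcpServer(call.name, call.args); // e.g. { ticket_id: 12345 }
  messages.push(first, {
    role: 'function',
    name: call.name,
    content: JSON.stringify(context),
  });
  const second = await callOpenAI(messages);     // GPT-4 writes the final reply
  return second.content;
}
```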
Web Chat Interface + OpenAI Router API
To demonstrate end-to-end usage with real input/output, this project includes:
1. /chat API endpoint (openai-router.js)
A Node.js API that accepts natural language messages, detects intent using GPT-4 + function calling, and uses the MCP server to fetch data and compose replies.
🔧 .env additions:
OPENAI_API_KEY=your_openai_key
MCP_SERVER_URL=http://localhost:3000
CHAT_PORT=4000

▶️ Run the API:
node openai-router.js

This starts a server at http://localhost:4000/chat
2. chat-ui.html
A simple HTML frontend to type user prompts and see AI-generated responses with Zendesk context.
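From the page, the `/chat` endpoint can be called with a plain `fetch`. This sketch only builds the request; the JSON field names (`message` in the request, `reply` in the response) are assumptions about `openai-router.js`, not confirmed by this README.

```javascript
// Build the request for the /chat endpoint. The field name `message`
// is an assumption about openai-router.js.
function buildChatRequest(message) {
  return {
    url: 'http://localhost:4000/chat',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message }),
    },
  };
}

// Browser-side usage sketch (not executed here):
// const { url, options } = buildChatRequest('Who is the user for ticket 12345?');
// fetch(url, options).then((r) => r.json()).then((d) => console.log(d.reply));
```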
🧪 Example Questions:
Who is the user for ticket 12345?
Tell me about organization 78901
How many tickets has user 112233 opened?
💬 Usage
Open `chat-ui.html` in a browser
Ensure the `/chat` endpoint is running with CORS enabled
Ask questions and see the results appear naturally
🔐 Note
Make sure you install and enable CORS in openai-router.js:
const cors = require('cors');
app.use(cors());

Future Enhancements
LangChain tool compatibility
Redis caching layer
Rate limiting
More context types:
`/orders`, `/billing`, `/subscriptions`
License
MIT
Author
Your Name — @yourhandle