Mentioned as a compatible AI agent platform that can leverage the MCP server for fetching Zendesk context, with future enhancements planned for specific LangChain tool compatibility.
Supports OpenAI function calling with GPT-4 to automatically detect context needs and fetch relevant Zendesk data, enabling natural conversation flows with customer support context.
Integrates with Zendesk's REST APIs to fetch real-time customer and organization context, including user information, organization details, and ticket data based on provided IDs.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Zendesk MCP Server get context for ticket 45678".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Zendesk MCP Server (Model Context Protocol)
This project is a lightweight, AI-native MCP (Model Context Protocol) server that integrates with Zendesk's REST APIs. It allows GPT-based AI agents (e.g. OpenAI, LangChain) to fetch real-time customer and organization context dynamically.
Features
Accepts ticket_id, user_id, or organization_id
Fetches user, org, and ticket context from Zendesk
Returns:
summary: human-readable, LLM-friendly summary
prompt_context: single-line string for embedding in an LLM prompt
context: structured blocks (text, list)
prompt_guidance: usage instructions and few-shot examples
Exposes:
/context: main context API
/meta: MCP schema metadata
/function-schema: OpenAI function-compatible definition
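For illustration, a /context response for a ticket might look like the following. The field names come from the Returns list above; all values are invented:

```json
{
  "summary": "Ticket 12345 (open): login issue reported by Jane Doe of Acme Corp.",
  "prompt_context": "Ticket 12345 | open | requester: Jane Doe | org: Acme Corp",
  "context": [
    { "type": "text", "content": "Ticket 12345: login issue, status open" },
    { "type": "list", "items": ["Requester: Jane Doe", "Organization: Acme Corp"] }
  ],
  "prompt_guidance": "Use the summary when replying; cite the ticket ID."
}
```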
Fully Dockerized and deployable
Compatible with GPT-4 function calling
Getting Started
1. Clone and install dependencies
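The repository URL isn't given here, so it is left as a placeholder; assuming a standard Node.js layout:

```shell
# Clone the repository (replace the placeholder URL) and install dependencies
git clone <repository-url>
cd zendesk-mcp-server
npm install
```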
2. Set up .env
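The exact variable names aren't listed in this README; a typical Zendesk API token setup needs something like the following (names are assumptions, so check the code for the actual keys):

```
ZENDESK_SUBDOMAIN=your-subdomain
ZENDESK_EMAIL=you@example.com
ZENDESK_API_TOKEN=your-api-token
PORT=3000
```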
3. Run Locally
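Assuming a standard npm setup (the entry-point filename is a guess):

```shell
npm start
# or, if no start script is defined:
node server.js
```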
Visit:
http://localhost:3000/context
http://localhost:3000/meta
http://localhost:3000/function-schema
Docker Support
Build Image
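A typical build command (the image tag is an assumption):

```shell
docker build -t zendesk-mcp-server .
```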
Run Container
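Run the image, passing your .env file and publishing the server port (tag and port assumed as above):

```shell
docker run -p 3000:3000 --env-file .env zendesk-mcp-server
```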
Function Calling with OpenAI (Example)
See openai-client.js for an example where:
GPT-4 automatically detects and calls get_ticket_context
The function call is sent to your local MCP server
GPT writes a natural reply using the returned context
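The repository's openai-client.js is not reproduced here, but the core pieces can be sketched with assumed shapes: a function definition mirroring the parameters listed under Features, and a helper that parses the arguments GPT-4 sends back as a JSON string:

```javascript
// Sketch of the function definition GPT-4 sees (assumed shape; see
// openai-client.js in the repo for the actual implementation).
const getTicketContextFn = {
  name: "get_ticket_context",
  description: "Fetch Zendesk context for a ticket, user, or organization",
  parameters: {
    type: "object",
    properties: {
      ticket_id: { type: "string", description: "Zendesk ticket ID" },
      user_id: { type: "string", description: "Zendesk user ID" },
      organization_id: { type: "string", description: "Zendesk organization ID" },
    },
  },
};

// When GPT-4 decides to call the function, its reply carries the
// arguments as a JSON string; parse them before hitting the MCP server.
function parseFunctionCall(message) {
  const { name, arguments: args } = message.function_call;
  return { name, args: JSON.parse(args) };
}

// Example shape of a GPT-4 function-calling response:
const reply = {
  function_call: {
    name: "get_ticket_context",
    arguments: '{"ticket_id":"12345"}',
  },
};
const call = parseFunctionCall(reply);
console.log(call.name, call.args.ticket_id); // → get_ticket_context 12345
```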
Simulating a Full Chat Conversation
So far you've verified that GPT-4 can call your MCP server via function calling. The next step is to simulate a full conversation where:
A user asks something natural like:
“Can you give me context for ticket 12345?”
GPT-4 figures out it needs to call get_ticket_context
GPT-4 calls your MCP server automatically
GPT-4 uses the result to reply in a natural, chat-style response
Let's build exactly that: your own OpenAI agent loop that mimics how GPT-4 with tools (functions) behaves in production.
✅ Step-by-Step: Full Chat-Based OpenAI Agent with Function Calling
✨ Final Output Looks Like:
What This Script Does:
Sends a natural user message to GPT-4
GPT-4 detects your function and calls it with a ticket_id
You send that call to your MCP server
You feed the MCP server's context result back to GPT-4
GPT-4 writes a human-style response using the result
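The steps above can be sketched as a small agent loop. Message shapes are assumed, and the two external calls (the OpenAI API and the MCP server's /context endpoint) are injected as dependencies so the flow is easy to follow and test with stubs:

```javascript
// Minimal agent-loop sketch: callModel and fetchContext are injected
// stand-ins for the OpenAI API call and the MCP server request.
async function agentLoop(userMessage, { callModel, fetchContext }) {
  const messages = [{ role: "user", content: userMessage }];

  // 1. Ask GPT-4; it may decide to call get_ticket_context.
  let reply = await callModel(messages);

  if (reply.function_call) {
    const args = JSON.parse(reply.function_call.arguments);

    // 2. Forward the call to the MCP server (via the injected helper).
    const context = await fetchContext(args);

    // 3. Feed the function result back so GPT-4 can compose a reply.
    messages.push(reply);
    messages.push({
      role: "function",
      name: reply.function_call.name,
      content: JSON.stringify(context),
    });
    reply = await callModel(messages);
  }
  return reply.content;
}

// Demo with stubbed dependencies (all values invented):
const stubs = {
  callModel: async (messages) =>
    messages.some((m) => m.role === "function")
      ? { content: "Ticket 12345 is open and assigned to Jane." }
      : {
          function_call: {
            name: "get_ticket_context",
            arguments: '{"ticket_id":"12345"}',
          },
        },
  fetchContext: async () => ({ summary: "Ticket 12345: open, assignee Jane" }),
};

agentLoop("Can you give me context for ticket 12345?", stubs).then((answer) =>
  console.log(answer) // → Ticket 12345 is open and assigned to Jane.
);
```

In production you would replace the stubs with a real chat-completions request (passing the get_ticket_context function definition) and an HTTP call to the MCP server.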
Web Chat Interface + OpenAI Router API
To demonstrate end-to-end usage with real input/output, this project includes:
1. /chat API endpoint (openai-router.js)
A Node.js API that accepts natural language messages, detects intent using GPT-4 + function calling, and uses the MCP server to fetch data and compose replies.
🔧 .env additions:
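The exact keys aren't listed here; a plausible set for the router (names are assumptions) would be:

```
OPENAI_API_KEY=sk-...
MCP_SERVER_URL=http://localhost:3000
PORT=4000
```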
▶️ Run the API:
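```shell
node openai-router.js
```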
This starts a server at http://localhost:4000/chat
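A quick smoke test from the command line (assuming the endpoint accepts a JSON body with a message field; the actual field name may differ):

```shell
curl -X POST http://localhost:4000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Can you give me context for ticket 12345?"}'
```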
2. chat-ui.html
A simple HTML frontend to type user prompts and see AI-generated responses with Zendesk context.
🧪 Example Questions:
Who is the user for ticket 12345?
Tell me about organization 78901
How many tickets has user 112233 opened?
💬 Usage
Open chat-ui.html in a browser
Ensure the /chat endpoint is running with CORS enabled
Ask questions and see the result appear naturally
🔐 Note
Make sure you install and enable CORS in openai-router.js:
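Assuming openai-router.js is an Express app, the wiring would look roughly like this (install the middleware first with `npm install cors`):

```javascript
const express = require("express");
const cors = require("cors");

const app = express();
app.use(cors()); // allow the browser-served chat-ui.html to call /chat
app.use(express.json());
```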
Future Enhancements
LangChain tool compatibility
Redis caching layer
Rate limiting
More context types:
/orders, /billing, /subscriptions
License
MIT
Author
Your Name — @yourhandle