Zendesk MCP Server (Model Context Protocol)

by larryfang

This project is a lightweight, AI-native MCP (Model Context Protocol) server that integrates with Zendesk's REST APIs. It allows GPT-based AI agents (e.g., agents built with the OpenAI API or LangChain) to fetch real-time customer and organization context dynamically.


Features

  • Accepts ticket_id, user_id, or organization_id
  • Fetches user, org, and ticket context from Zendesk
  • Returns:
    • summary: human-readable LLM-friendly summary
    • prompt_context: single-line LLM embedding string
    • context: structured blocks (text, list)
    • prompt_guidance: usage instructions and few-shot examples
  • Exposes:
    • /context: main context API
    • /meta: MCP schema metadata
    • /function-schema: OpenAI function-compatible definition
  • Fully Dockerized and deployable
  • Compatible with GPT-4 function calling
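
As an illustration, a /context response might be shaped like the following. The field names come from the list above, but the values and the exact block structure are assumptions, not the server's actual output:

```javascript
// Illustrative /context response shape (values are made up; the block
// structure under `context` is an assumption based on the feature list).
const exampleContext = {
  summary: 'Alice Smith (Premium, Acme Corp) opened ticket 12345: "Login timeout", currently open.',
  prompt_context: 'user=Alice Smith; org=Acme Corp; ticket=12345 "Login timeout" (open)',
  context: [
    { type: "text", text: 'Ticket 12345: "Login timeout" (open)' },
    { type: "list", items: ["User: Alice Smith", "Org: Acme Corp", "Plan: Premium"] }
  ],
  prompt_guidance: "Use the summary when replying; quote the ticket subject verbatim."
};

console.log(exampleContext.summary);
```

The summary is meant for direct embedding in a reply, while prompt_context is the compact single-line form for stuffing into an LLM prompt.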

Getting Started

1. Clone and install dependencies

git clone https://github.com/your-repo/zendesk-mcp-server
cd zendesk-mcp-server
npm install

2. Set up .env

ZENDESK_DOMAIN=your-subdomain.zendesk.com
ZENDESK_EMAIL=your-email@yourdomain.com
ZENDESK_API_TOKEN=your_zendesk_api_token
PORT=3000

3. Run Locally

node index.js

Visit:

  • http://localhost:3000/context
  • http://localhost:3000/meta
  • http://localhost:3000/function-schema
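
For example, you might query the context endpoint like this. Passing the id as a query parameter is an assumption; check index.js for the exact parameter handling:

```shell
# Build a /context request for a ticket id (parameter names ticket_id,
# user_id, and organization_id come from the feature list above).
base_url="http://localhost:3000/context"
ticket_id="12345"
url="${base_url}?ticket_id=${ticket_id}"
echo "$url"

# With the server running:
# curl -s "$url"
# curl -s "${base_url}?organization_id=78901"
```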

Docker Support

Build Image

docker build -t zendesk-mcp .

Run Container

docker run -p 3000:3000 \
  -e ZENDESK_DOMAIN=your-subdomain.zendesk.com \
  -e ZENDESK_EMAIL=your-email \
  -e ZENDESK_API_TOKEN=your-token \
  zendesk-mcp

Function Calling with OpenAI (Example)

See openai-client.js for an example where:

  • GPT-4 automatically detects and calls get_ticket_context
  • The function calls your local MCP server
  • GPT writes a natural reply using the returned context

Simulating a Full Chat Conversation

The function-calling example above covers a single, scripted call. You can also simulate a full conversation in which:

A user asks something natural, like:

“Can you give me context for ticket 12345?”

  • GPT-4 determines that it needs to call get_ticket_context
  • GPT-4 calls your MCP server automatically
  • GPT-4 uses the result to compose a natural, chat-style response

The steps below build exactly that: an OpenAI agent loop that mimics how GPT-4 with tools (functions) behaves in production.

✅ Step-by-Step: Full Chat-Based OpenAI Agent with Function Calling

✨ Final Output Looks Like:
User: Can you give me context for ticket 12345?
GPT: Sure! Here's what I found: Alice Smith is a Premium customer under Acme Corp. She submitted 3 tickets recently. The latest ticket is titled "Login timeout" and is currently open.

What This Script Does:

  • Sends a natural user message to GPT-4
  • GPT-4 detects your function and calls it with a ticket_id
  • The script forwards that call to your MCP server
  • The MCP server’s context result is fed back to GPT-4
  • GPT-4 writes a human-style response using the result
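
The loop above can be sketched as follows. The tool schema and message shapes are assumptions, and chat and fetchContext are injected stubs standing in for the OpenAI client and the HTTP request to the MCP server; see openai-client.js for the real wiring:

```javascript
// Minimal sketch of the agent loop described above. `chat` and
// `fetchContext` are injected so the loop can run without live OpenAI
// or Zendesk credentials; the tool schema is an assumption.
async function agentLoop(userMessage, { chat, fetchContext }) {
  const tools = [{
    name: "get_ticket_context",
    description: "Fetch Zendesk context for a ticket",
    parameters: {
      type: "object",
      properties: { ticket_id: { type: "string" } },
      required: ["ticket_id"]
    }
  }];

  // 1. Send the user message; the model may respond with a function call.
  const first = await chat([{ role: "user", content: userMessage }], tools);
  if (!first.function_call) return first.content;

  // 2. Forward the model's arguments to the MCP server.
  const args = JSON.parse(first.function_call.arguments);
  const context = await fetchContext(args.ticket_id);

  // 3. Feed the context back so the model can write a natural reply.
  const second = await chat([
    { role: "user", content: userMessage },
    { role: "assistant", content: null, function_call: first.function_call },
    { role: "function", name: "get_ticket_context", content: JSON.stringify(context) }
  ], tools);
  return second.content;
}
```

In production you would replace chat with a call to the OpenAI chat completions API and fetchContext with an HTTP GET against your MCP server's /context endpoint.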

Web Chat Interface + OpenAI Router API

To demonstrate end-to-end usage with real input/output, this project includes:

1. /chat API endpoint (openai-router.js)

A Node.js API that accepts natural language messages, detects intent using GPT-4 + function calling, and uses the MCP server to fetch data and compose replies.

🔧 .env additions:
OPENAI_API_KEY=your_openai_key
MCP_SERVER_URL=http://localhost:3000
CHAT_PORT=4000
▶️ Run the API:
node openai-router.js

This starts a server at http://localhost:4000/chat

2. chat-ui.html

A simple HTML frontend to type user prompts and see AI-generated responses with Zendesk context.
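
The page's request to /chat might be built like this. The endpoint URL and the { message } payload shape are assumptions matching the router described above:

```javascript
// Sketch of the browser-side call chat-ui.html might make. The payload
// shape ({ message }) and port are assumptions; adjust to match the router.
function buildChatRequest(message) {
  return {
    url: "http://localhost:4000/chat",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message })
    }
  };
}

// In the page:
// const { url, options } = buildChatRequest(inputBox.value);
// fetch(url, options).then(r => r.json()).then(d => showReply(d.reply));
```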

🧪 Example Questions:
  • Who is the user for ticket 12345?
  • Tell me about organization 78901
  • How many tickets has user 112233 opened?
💬 Usage
  • Open chat-ui.html in a browser
  • Ensure the /chat endpoint is running with CORS enabled
  • Ask questions and see the result appear naturally
🔐 Note

Make sure the cors package is installed (npm install cors) and enabled in openai-router.js:

const cors = require('cors');
app.use(cors());

Future Enhancements

  • LangChain tool compatibility
  • Redis caching layer
  • Rate limiting
  • More context types: /orders, /billing, /subscriptions

License

MIT


Author

Your Name — @yourhandle
