THU Agent by CyberCraze
Integrates with OpenAI-compatible APIs via the THU lab proxy, providing access to models like DeepSeek, Qwen, GLM, Kimi, and MiniMax for interactive terminal coding assistance and conversational AI capabilities.
THU CyberCraze Agent
Interactive terminal coding agent powered by the THU lab proxy (OpenAI-compatible API). It runs in your current terminal, works in your current directory, can inspect files, propose shell commands, and wait for your approval before running them.
1. Installation
1.1 Get an API key
Create a key here:
https://lab.cs.tsinghua.edu.cn/ai-platform/c/new

Base URL:
https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1

Set environment variables (Linux and macOS):

export THU_LAB_PROXY_API_KEY='your_proxy_key_here'
export THU_LAB_PROXY_BASE_URL='https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1'

Windows PowerShell:

$env:THU_LAB_PROXY_API_KEY='your_proxy_key_here'
$env:THU_LAB_PROXY_BASE_URL='https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1'

You can also launch the agent and paste the key when prompted. The agent saves it into a per-user global config file:

Linux and macOS:
~/.thu-cybercraze-agent/.env

Windows:
%USERPROFILE%\.thu-cybercraze-agent\.env
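If you prefer to create the global config file by hand instead of pasting the key at the prompt, a minimal sketch for Linux and macOS (the key value is a placeholder, and the plain KEY=value layout is an assumption based on typical dotenv files):

```shell
# Create the per-user config directory and write a minimal .env file.
# 'your_proxy_key_here' is a placeholder; replace it with your real key.
mkdir -p "$HOME/.thu-cybercraze-agent"
cat > "$HOME/.thu-cybercraze-agent/.env" <<'EOF'
THU_LAB_PROXY_API_KEY=your_proxy_key_here
THU_LAB_PROXY_BASE_URL=https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1
EOF
```

The agent will pick this file up on its next start instead of prompting for a key.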
1.2 Run the agent
Linux (built binary):
./dist/thu-agent

Windows (built on Windows):
.\dist\thu-agent.exe

macOS (run Python directly):
python3 agent.py

1.3 Build the binaries (if needed)

Linux build:
bash build_agent.sh

Result:
dist/thu-agent

Windows build (run on Windows, not inside WSL):
py -3 -m pip install pyinstaller
powershell -ExecutionPolicy Bypass -File .\build_agent_windows.ps1

Result:
dist\thu-agent.exe

1.4 Optional: Run globally
Linux:
sudo install -m 755 dist/thu-agent /usr/local/bin/thu-agent

Windows: add the repo dist directory to PATH, or copy the .exe into a directory already on PATH.
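On Linux, a no-sudo alternative to the system-wide install is to put the built dist directory on your PATH (the clone location below is a placeholder; adjust it to wherever you checked out the repo):

```shell
# Add the built binary's directory to PATH for the current shell session.
# Append the same line to ~/.bashrc or ~/.profile to make it permanent.
# The repo path is a placeholder -- point it at your actual clone.
export PATH="$PATH:$HOME/THU-deepseek-glm-api-mcp-server/dist"
```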
Example (PowerShell):
[Environment]::SetEnvironmentVariable(
"Path",
$env:Path + ";C:\Users\USER\Downloads\THU-deepseek-glm-api-mcp-server\dist",
"User"
)

Open a new terminal and run:
thu-agent.exe

2. Usage
Start the agent:
./dist/thu-agent

Or run with Python:
python3 agent.py

Pass the model and key directly if you want:
python3 agent.py --model deepseek-v3.2 --api-key "$THU_LAB_PROXY_API_KEY"

Default model:
deepseek-v3.2

Current models:
qwen3-max-thinking
qwen3-max
glm-5
glm-5-thinking
glm-4.7-thinking
kimi-k2.5
kimi-k2.5-thinking
minimax-m2.5
minimax-m2.5-thinking
qwen3.5-plus
qwen3.5-plus-thinking
qwen3.5-mini
deepseek-v3.2-thinking
deepseek-v3.2
While the agent is thinking or running a command, press Ctrl+C to interrupt. It will ask for a follow-up instruction. Type /stop there to discard the interrupted turn, or type a new instruction to continue.
3. Function List
Slash commands available:
/help
/save [name]
/autosave
/context
/compact [keep]
/clear
/status
/attach <path> [instruction]
/stop
/sessions
/load <id|name>
/fork <id|name> [new-name]
/new [name]
/delete <id|name>
/update
/model
/key
/pwd
/alwaysRun
/exit
4. Function Explanation
Session and memory:
/save [name] saves the current session to disk. Sessions are manual-save by default.
/autosave toggles automatic saving for this session.
/sessions lists saved sessions with an ID, summary, and last-used time.
/load <id|name> loads a saved session.
/fork <id|name> [new-name] creates a new session from a saved one.
/new [name] starts a new session with a fresh context.
/delete <id|name> deletes a saved session.
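A typical flow tying these commands together (the session names are illustrative, not real saved sessions):

```
/save refactor-work
/sessions
/fork refactor-work refactor-experiment
/load refactor-experiment
```

Because sessions are manual-save by default, run /save (or toggle /autosave) before switching away from work you want to keep.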
Context management:
/context refreshes and displays the startup context snapshot (date, git status, nearby memory files like AGENTS.md or CLAUDE.md).
/compact [keep] summarizes older messages and keeps recent turns to reduce context size.
/clear clears the in-memory conversation while preserving the current project context.
/status shows version, model, session name, autosave state, message count, and context size.
Commands and execution:
/alwaysRun toggles auto-approval for shell commands.
/stop is used only after an interrupt prompt, to discard the interrupted turn.
Attachments:
/attach path/to/file.txt explain this file inlines small text/code files into the next model turn.
Non-text files are passed as file references for the agent to inspect with commands.
Image files can be sent as multimodal content only when the selected model/proxy supports it and THU_AGENT_MULTIMODAL=1 is set. Otherwise they are treated as file references.
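To opt in to image attachments, set the flag before launching the agent (it is only honored when the selected model and proxy actually support image input):

```shell
# Enable multimodal image attachments for supported models/proxies
export THU_AGENT_MULTIMODAL=1
```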
Models and keys:
/model reselects the model (this resets the conversation context).
/key updates the API key and saves it to the global .env.
Updates:
At startup, the agent compares its embedded version with the GitHub VERSION file and reminds you if it is behind.
/update clones the GitHub repo to a temp directory, rebuilds the binary, installs it to the current executable path (or /usr/local/bin/thu-agent on Linux), then removes the temp clone. On Windows it stages a post-exit replacement of the running .exe.
Other:
/pwd prints the current working directory.
/help shows the command list.
/exit quits the agent.
Notes:
The MCP server in server.py is separate from the interactive agent in agent.py.