THU Agent by CyberCraze
An interactive coding agent and MCP server that provides access to various AI models via the Tsinghua University lab proxy API. It enables users to inspect files and execute shell commands within their local directory using models like DeepSeek, GLM, and Qwen.
README
No rate limit for THU students!
Interactive terminal coding agent powered by the THU lab proxy OpenAI-compatible API.
The agent runs in your terminal, works in your current directory, can inspect files, and proposes shell commands, waiting for your approval before running them.
Platform Use
Linux
Use the built executable:
./dist/thu-agent
To run it globally, copy or symlink it into a directory on your PATH, for example:
sudo install -m 755 dist/thu-agent /usr/local/bin/thu-agent
Then run:
thu-agent
Windows
Use the Windows executable after building it on Windows:
.\dist\thu-agent.exe
To run it globally on Windows, add the repo dist directory to your PATH, or copy the executable into a directory already on PATH.
Example PowerShell command to add the current repo dist directory for your user:
[Environment]::SetEnvironmentVariable(
"Path",
$env:Path + ";C:\Users\USER\Downloads\THU-deepseek-glm-api-mcp-server\dist",
"User"
)
Then open a new terminal and run:
thu-agent.exe
Build it on Windows with:
powershell -ExecutionPolicy Bypass -File .\build_agent_windows.ps1
macOS
There is no packaged macOS binary in this repo.
Run the Python entrypoint directly:
python3 agent.py
If you want a global command on macOS, create a small wrapper in /usr/local/bin or another directory on your PATH:
sudo ln -sf "/absolute/path/to/agent.py" /usr/local/bin/thu-agent.py
or run the repo-local command directly from a shell alias.
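Symlinking agent.py only works if the file is executable and starts with a Python shebang; a small wrapper script is more robust. A minimal sketch, assuming python3 is on PATH (the agent.py path is the same placeholder used above; the script is written to the current directory here, then installed with the install command shown in the Linux section):

```shell
# Create a small launcher that forwards to the Python entrypoint.
# Replace /absolute/path/to/agent.py with the real path to the repo.
cat > thu-agent <<'EOF'
#!/bin/sh
# Forward all arguments to the repo's Python entrypoint.
exec python3 "/absolute/path/to/agent.py" "$@"
EOF
chmod +x thu-agent
# Then, to make it global:
#   sudo install -m 755 thu-agent /usr/local/bin/thu-agent
```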
API Setup
The agent uses the THU lab proxy.
Create an API key first at:
https://lab.cs.tsinghua.edu.cn/ai-platform/c/new
Base URL:
https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1
Set your key with an environment variable:
export THU_LAB_PROXY_API_KEY='your_proxy_key_here'
export THU_LAB_PROXY_BASE_URL='https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1'
On Windows PowerShell:
$env:THU_LAB_PROXY_API_KEY='your_proxy_key_here'
$env:THU_LAB_PROXY_BASE_URL='https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1'
You can also launch the agent and paste the key when prompted. The agent saves it into a per-user global config file for reuse.
Config location:
- Linux and macOS: ~/.thu-cybercraze-agent/.env
- Windows: %USERPROFILE%\.thu-cybercraze-agent\.env
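The saved file is assumed here to be a plain dotenv-style key=value file mirroring the environment variables above; the exact keys the agent writes may differ:

```
THU_LAB_PROXY_API_KEY=your_proxy_key_here
THU_LAB_PROXY_BASE_URL=https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1
```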
Start the Agent
From the repo root:
./dist/thu-agent
Or with Python:
python3 agent.py
You can also pass the model and key directly:
python3 agent.py --model deepseek-v3.2 --api-key "$THU_LAB_PROXY_API_KEY"
Model Selection
The startup picker shows the models currently wired into the agent.
Default model:
deepseek-v3.2
Current supported models:
- qwen3-max-thinking
- qwen3-max
- glm-5
- glm-5-thinking
- glm-4.7-thinking
- kimi-k2.5
- kimi-k2.5-thinking
- minimax-m2.5
- minimax-m2.5-thinking
- qwen3.5-plus
- qwen3.5-plus-thinking
- qwen3.5-mini
- deepseek-v3.2-thinking
- deepseek-v3.2
In-Agent Commands
Slash commands available in the session:
- /help
- /model
- /key
- /pwd
- /alwaysRun
- /exit
While the agent is thinking or running a command, press Ctrl+C to cancel the current operation and return to the prompt without exiting the whole session.
Typical Workflow
- Start the agent.
- Choose a model or press Enter for the default.
- Reuse the saved API key or paste a new one.
- Type requests at the > prompt.
- Approve commands when the agent asks.
Example prompts:
- list the files in this directory
- write a hello world script in python
- inspect this project and explain how to run it
- create a small bash script that prints the current date
Command Approval
By default, the agent asks before running each command.
To auto-approve commands for the current session:
/alwaysRun
Use that carefully.
Build
Linux build
bash build_agent.sh
Result:
dist/thu-agent
This build uses the current Python environment and PyInstaller, with extra excludes plus strip/optimize enabled to keep the binary smaller.
Windows build
Run this on Windows, not inside WSL:
py -3 -m pip install pyinstaller
powershell -ExecutionPolicy Bypass -File .\build_agent_windows.ps1
Result:
dist\thu-agent.exe
macOS run path
macOS users should run the Python entrypoint directly:
python3 agent.py
Direct API Test
You can test the proxy directly:
curl --location --request POST \
'https://lab.cs.tsinghua.edu.cn/ai-platform/api/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header "authorization: Bearer $THU_LAB_PROXY_API_KEY" \
--data-raw '{
"model": "deepseek-v3.2",
"messages": [{"role": "user", "content": "Reply with exactly: ok"}],
"temperature": 0.2,
"repetition_penalty": 1.1,
"stream": false
}'
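The same request can be scripted so the JSON body is validated locally before any request is sent; a sketch assuming the THU_LAB_PROXY_API_KEY and THU_LAB_PROXY_BASE_URL variables from API Setup are exported:

```shell
# Build the request body once so it can be checked and reused.
BODY='{
  "model": "deepseek-v3.2",
  "messages": [{"role": "user", "content": "Reply with exactly: ok"}],
  "temperature": 0.2,
  "repetition_penalty": 1.1,
  "stream": false
}'

# Validate the JSON locally before spending a request.
printf '%s' "$BODY" | python3 -m json.tool > /dev/null && echo "body ok"

# Only call the proxy when a key is actually set.
if [ -n "$THU_LAB_PROXY_API_KEY" ]; then
  curl -sS "$THU_LAB_PROXY_BASE_URL/chat/completions" \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $THU_LAB_PROXY_API_KEY" \
    --data-raw "$BODY"
fi
```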
Notes
- The Linux binary is already buildable from this repo.
- The Windows .exe must be built from a Windows Python environment.
- macOS users should run agent.py directly unless they package it themselves.
- The MCP server code in server.py still uses the older backend and is separate from the interactive agent in agent.py.