element-plus-mcp
Element Plus MCP server
Basic Usage
Generate an Element Plus Component
# Send a request to generate a component
curl -X POST http://localhost:3000/api/mcp/generate \
-H "Content-Type: application/json" \
-d '{"userPrompt": "创建一个带搜索功能的表格组件"}'
Configure a Different LLM
# Use OpenAI's GPT-4 model
curl -X POST http://localhost:3000/api/mcp/generate \
-H "Content-Type: application/json" \
-d '{
"userPrompt": "创建一个日期选择器组件",
"llmConfig": {
"modelType": "openai",
"modelName": "gpt-4",
"temperature": 0.8
}
}'
List Supported Models
curl -X GET http://localhost:3000/api/mcp/models
Using the MCP Protocol (Model Context Protocol)
1. Using MCP via the HTTP API
Call an MCP tool to generate a component
const response = await fetch('/api/mcp-protocol/mcp', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
jsonrpc: '2.0',
method: 'mcp/callTool',
params: {
name: 'generate-component',
args: {
description: 'Create a table component with search and pagination',
componentType: 'table',
stylePreference: 'modern, minimalist style'
}
},
id: 1
})
});
const result = await response.json();
console.log(result.result.content[0].text);
Use the resource API to fetch component information
const response = await fetch('/api/mcp-protocol/mcp', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
jsonrpc: '2.0',
method: 'mcp/readResource',
params: {
uri: '/element-plus/components/ElButton'
},
id: 1
})
});
const result = await response.json();
console.log(result.result.contents[0].text);
2. Using an SSE Connection (Streaming Responses)
// Establish the SSE connection
const eventSource = new EventSource('/api/mcp-protocol/sse');
// Receive messages
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log('Received MCP message:', data);
};
// Close the connection
function closeConnection() {
eventSource.close();
}
3. Using Prompt Templates
// Generate a component from a prompt template
const response = await fetch('/api/mcp-protocol/use-prompt-template', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
templateName: 'element-plus-component-generation',
variables: {
description: 'Create a user management form',
componentType: 'form',
featuresStr: 'form validation, responsive layout, dark theme support'
}
})
});
const result = await response.json();
console.log(result);
Environment Configuration
The project supports configuring API keys for each LLM provider through a .env file (a provider-selection sketch follows the listing):
# DeepSeek (default model)
DEEPSEEK_API_URL=https://api.deepseek.com
DEEPSEEK_API_KEY=your_deepseek_key
# OpenAI
OPENAI_API_URL=https://api.openai.com/v1/chat/completions
OPENAI_API_KEY=your_openai_key
# Anthropic
ANTHROPIC_API_URL=https://api.anthropic.com/v1/messages
ANTHROPIC_API_KEY=your_anthropic_key
# Google Gemini
GEMINI_API_URL=https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent
GEMINI_API_KEY=your_gemini_key
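For illustration, here is a minimal sketch of how these variables could be loaded and matched against the llmConfig.modelType field shown earlier. The variable names come from the listing above, but the lookup logic itself is an assumption, not the project's actual implementation:
// Hypothetical sketch: variable names match the .env above; the lookup is an assumption.
require('dotenv').config();
const PROVIDERS = {
  deepseek: { url: process.env.DEEPSEEK_API_URL, key: process.env.DEEPSEEK_API_KEY },
  openai: { url: process.env.OPENAI_API_URL, key: process.env.OPENAI_API_KEY },
  anthropic: { url: process.env.ANTHROPIC_API_URL, key: process.env.ANTHROPIC_API_KEY },
  gemini: { url: process.env.GEMINI_API_URL, key: process.env.GEMINI_API_KEY }
};
// DeepSeek is documented above as the default model.
function resolveProvider(modelType = 'deepseek') {
  const provider = PROVIDERS[modelType];
  if (!provider || !provider.key) {
    throw new Error(`No API key configured for model type: ${modelType}`);
  }
  return provider;
}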
Integrating into a Frontend Project
To integrate this service into a frontend project, you can do the following:
// Call the standard MCP protocol method
async function callMcpTool(toolName, args) {
const response = await fetch('http://localhost:3000/api/mcp-protocol/mcp', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
jsonrpc: '2.0',
method: 'mcp/callTool',
params: {
name: toolName,
args: args
},
id: Date.now()
})
});
const result = await response.json();
if (result.error) {
throw new Error(`MCP call error: ${result.error.message}`);
}
return result.result;
}
// Usage example
async function generateComponent(description) {
const result = await callMcpTool('generate-component', {
description,
componentType: 'table'
});
// Parse the component data from the text content
const componentData = JSON.parse(result.content[0].text);
return componentData;
}
🧩 MCP Server Implementation Overview
1. 🎯 Optimizing LLM API Usage
Batched requests and caching: an LRU cache stores component descriptions and completion results to avoid duplicate calls.
Prompt trimming and context control: only the necessary context (e.g. prop types, component structure) is passed to the model.
Concurrency control: a task queue limits concurrent requests to avoid token or rate limits (see the sketch below).
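A rough illustration of the caching and queueing ideas above (not the project's actual implementation; the cache size and concurrency limit are arbitrary placeholders):
// Minimal sketch of an LRU cache plus a concurrency-limited task queue.
const MAX_CACHE_ENTRIES = 100;
const MAX_CONCURRENCY = 3;
const cache = new Map(); // Map preserves insertion order, so it can act as an LRU.
let running = 0;
const pending = [];
function cacheGet(key) {
  if (!cache.has(key)) return undefined;
  const value = cache.get(key);
  cache.delete(key); // Re-insert to mark the entry as most recently used.
  cache.set(key, value);
  return value;
}
function cacheSet(key, value) {
  if (cache.has(key)) cache.delete(key);
  if (cache.size >= MAX_CACHE_ENTRIES) {
    cache.delete(cache.keys().next().value); // Evict the least recently used entry.
  }
  cache.set(key, value);
}
// Run tasks through a queue so no more than MAX_CONCURRENCY LLM calls are in flight.
function enqueue(task) {
  return new Promise((resolve, reject) => {
    pending.push({ task, resolve, reject });
    drain();
  });
}
function drain() {
  while (running < MAX_CONCURRENCY && pending.length > 0) {
    const { task, resolve, reject } = pending.shift();
    running += 1;
    task()
      .then(resolve, reject)
      .finally(() => {
        running -= 1;
        drain();
      });
  }
}
// Only call the LLM when the prompt has not been answered before.
async function generateCached(prompt, callLlm) {
  const hit = cacheGet(prompt);
  if (hit) return hit;
  const result = await enqueue(() => callLlm(prompt));
  cacheSet(prompt, result);
  return result;
}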
2. 🔍 LLM-based Component Selection
Component parsing: scan the Element Plus component library.
Extraction: component names, prop types, and supported slots.
LLM filtering: from the input requirement (e.g. "choose a component for uploading an avatar"), build a prompt that asks the LLM to pick the best-fitting component (a sketch follows the example result).
Result structure:
{
"component": "ElUpload",
"reason": "支持上传,包含头像预览和文件选择控制"
}
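A minimal sketch of this filtering step is shown below. The catalog entries and the callLlm function are placeholders; only the result shape matches the example above:
// Hypothetical sketch of the LLM filtering step. `callLlm` stands in for whatever
// client the server uses; the returned JSON shape matches the example result above.
const componentCatalog = [
  { name: 'ElUpload', props: ['action', 'limit', 'listType'], slots: ['default', 'trigger', 'tip'] },
  { name: 'ElButton', props: ['type', 'size', 'disabled'], slots: ['default', 'icon'] }
  // ...one entry per scanned Element Plus component
];
async function pickComponent(requirement, callLlm) {
  const prompt = [
    'You are selecting an Element Plus component.',
    `Requirement: ${requirement}`,
    'Candidates (name, props, slots):',
    JSON.stringify(componentCatalog),
    'Reply with JSON of the form {"component": "...", "reason": "..."}.'
  ].join('\n');
  const reply = await callLlm(prompt);
  return JSON.parse(reply); // e.g. { component: 'ElUpload', reason: '...' }
}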
3. 🖼️ Standalone Code Preview Service
Goal: render the Vue components generated by the LLM and expose them in an iframe or sandboxed iframe preview.
Implementation:
Build the preview container with Vite + Vue
Accept SFC content and render it dynamically
Expose a POST /preview API that takes component source code and returns an iframe URL or HTML (a client-side call sketch follows)
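As an example, a client could submit generated SFC source to the preview service like this. The service URL and the request/response field names are assumptions based on the description above:
// Hypothetical client call to the preview service; URL and field names are assumed.
async function previewComponent(sfcSource) {
  const response = await fetch('http://localhost:3000/preview', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ source: sfcSource })
  });
  if (!response.ok) {
    throw new Error(`Preview service error: ${response.status}`);
  }
  // Assumed response shape: { iframeUrl: '...' } or { html: '...' }
  return response.json();
}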