<h1>Openfort Model Context Protocol (MCP) Server</h1>
Openfort is an open-source alternative to wallet infrastructure solutions; it supercharges your project with authentication, user management, and payments.
Openfort's MCP Server is a plug-and-play solution that enhances AI assistants by enabling them to create projects, manage configurations, and query data automatically when building applications on Openfort's infrastructure. Here's a guide on how to set it up.
Features
- 🔨 42 Tools — A complete set of tools to interact with Openfort
- 🔐 Authentication — Authenticate directly just by plugging in the MCP server
- 📄️ Initialize — Create new Openfort projects from the chat
- 🏗️ Scaffold — Build new apps from scratch using a single prompt
- 🔎 Context — Query the latest version of the documentation
- 💳️ Create — Generate wallets, users, contracts, and policies by just telling the LLM
Steps
- Install Openfort's MCP server.
- Add rules for the LLMs.
- Create a new project.
- Debug common issues.
- Discover all the capabilities.
1. Install Openfort's MCP server
This will allow your AI Assistant to interact with Openfort's tools on your behalf to create projects and manage them.
Ensure you have the following prerequisites:
- Node.js (with npm): every configuration below launches the server through npx
- An Openfort account, so the server can authenticate you when it first connects
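If you are unsure whether Node.js is set up, a quick sanity check from a terminal (any recent LTS release should be fine; no minimum version is documented here):

```sh
node --version   # confirms the Node.js runtime is installed
npx --version    # npx ships with npm and is what launches mcp-remote below
```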
Now, add it to your code editor. Based on your preferred tool, follow the instructions below:
Cursor
To integrate our MCP Server with Cursor, you can either:
One-click installation
Use the one-click install button in Openfort's documentation to add the server to Cursor automatically.
Edit the ~/.cursor/mcp.json file
You can look it up on your system or find it under the Tools & Integrations tab in your Cursor Settings. Fill it with the following content:
{
"mcpServers": {
"openfort-mcp": {
"command": "npx",
"args": [
"mcp-remote",
"https://mcp.openfort.io/sse"
]
}
}
}
Then you should see the Openfort MCP server listed on your Tools & Integrations tab without the need to restart. The authentication will trigger automatically.
Windsurf
For integration with Windsurf, replace the contents of the ~/.codeium/windsurf/mcp_config.json file with the configuration below. You can also open this file from:
Windsurf Settings > Cascade > Plugins (MCP Servers) > View Raw Config
For macOS/Linux
{
"mcpServers": {
"openfort-mcp": {
"command": "npx",
"args": [
"mcp-remote",
"https://mcp.openfort.io/sse"
],
"disabled": false
}
}
}
For Windows
{
"mcpServers": {
"openfort-mcp": {
"command": "cmd",
"args": [
"/c",
"npx",
"mcp-remote",
"https://mcp.openfort.io/sse"
],
"disabled": false
}
}
}
Visual Studio Code
To integrate the MCP Server into VS Code for use with GitHub Copilot, edit the .vscode/mcp.json file in your workspace, or run the MCP: Open User Configuration command to open the user-level configuration file, and add the following content:
{
"servers": {
"openfort-mcp": {
"command": "npx",
"args": [
"mcp-remote",
"https://mcp.openfort.io/sse"
]
}
}
}
Claude Desktop
To add our MCP Server to Claude Desktop, click on Edit Config in the Developer tab under Settings to automatically create a file at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Once created, fill it with the following content:
{
"mcpServers": {
"openfort-mcp": {
"command": "npx",
"args": [
"mcp-remote",
"https://mcp.openfort.io/sse"
]
}
}
}
You will need to restart Claude Desktop after modifying the configuration file. Also, if you have freshly installed Node.js, you may need to reboot your computer as well.
2. Add rules for the LLMs
LLMs tend to forget that these tools are available and can make some pathological mistakes, so it's a good idea to add rules that remind them. Copy the recommended rules from Openfort's documentation and include them in your editor.
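As a starting point, a minimal set of rules could look like the sketch below. This is illustrative, not Openfort's official rule set; the file name and format depend on your editor (for example, Cursor project rules or a Windsurf rules file):

```text
- Always use the Openfort MCP tools to create and manage Openfort resources
  (projects, wallets, users, contracts, policies) instead of inventing IDs or keys.
- Before writing Openfort integration code, query the documentation tool exposed
  by the MCP server so the latest APIs are used.
- Put project keys in the .env file; never hardcode them in source files.
```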
3. Create a new project
Now create a new project in your editor and type up a prompt to the LLM to scaffold it for you. When needed, the LLM will automatically call the available tools on Openfort's MCP Server, enhancing your developer experience. You can find an example of a prompt in the official documentation.
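As a rough illustration (this is not the official example; the project name and requirements below are made up):

```text
Create a new Openfort project called "demo-store" using the Openfort MCP tools,
then scaffold a React app that signs users in with Openfort, creates an embedded
wallet for each user, and sponsors gas through a policy. Put the publishable key
in a .env file and start the dev server.
```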
4. Debug Common Issues
Here are some common issues you might encounter and how to resolve them.
<details> <summary>General Errors</summary> <br>
After the agent finishes creating a project, the generated code may still throw errors, even with extended context. This is especially common for complex prompts or large applications.
Don’t expect the AI to flawlessly generate entire applications in a single prompt without any issues.
To resolve these errors, fix them manually or ask the AI for help. Iteration is normal: review the output, make corrections, and continue prompting as needed.
</details>
<details> <summary>Loop When Creating a Policy</summary> <br>
Occasionally, the AI agent may get stuck in a loop while creating a policy. The policy is successfully created, but the agent repeatedly attempts to update it with the same values.
The cause is unknown, and the effect is harmless.
To fix this, simply cancel the generation and prompt the agent to continue with the next step.
</details>
<details> <summary>npm Error: Missing script: "dev"</summary> <br>
If the AI agent fails to start the project using npm run dev, it’s often because it created the project in a subfolder and didn’t change into that directory before running the command.
Manually navigate to the subfolder and run the project again, as in the sketch below. </details>
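A minimal sketch (the folder name is hypothetical; use whatever directory the agent actually created):

```sh
cd my-openfort-app   # hypothetical folder created by the agent
npm run dev
```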
<details> <summary>No Permission to Edit the .env File</summary> <br>
When the AI agent fails to edit or create a .env file with your project keys, it’s usually due to insufficient file permissions.
In Cursor, add a .cursorignore file with !.env to explicitly allow the AI to edit the .env file.
For other editors or environments, follow an equivalent approach to ensure the file is not ignored.
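For Cursor, that file is just a one-line allowlist entry:

```
# .cursorignore: re-allow the .env file so the agent can edit it
!.env
```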
</details>
5. Discover all the capabilities
For more information on the available tools and how to use them, check out the MCP Server documentation.