Feedback Loop MCP
A simple MCP server that enables a human-in-the-loop workflow in AI-assisted development tools such as Cursor. It lets you run commands, view their output, and provide textual feedback directly to the AI. It is also compatible with Cline and Windsurf.
Inspiration: This project is inspired by interactive-feedback-mcp by Fábio Ferreira (@fabiomlferreira).
Features
- Cross-platform: Works on macOS, Windows, and Linux
- Interactive UI: Modern, responsive interface for collecting feedback
- Settings persistence: Save and restore UI preferences per project
- MCP integration: Seamlessly integrates with MCP-compatible AI assistants
- macOS overlay support: Native overlay window support on macOS
Screenshot
The feedback collection interface with macOS vibrancy effects
Installation
Quick Start with npx (Recommended)
The easiest way to use this MCP server is via npx:
npx feedback-loop-mcp
Global Installation
For frequent use, install globally:
npm install -g feedback-loop-mcp
feedback-loop-mcp
Local Development Setup
For development or customization:
1. Clone the repository:
git clone <repository-url>
cd feedback-loop-mcp
2. Install dependencies:
npm install
3. Run in development mode:
npm run dev
MCP Server Configuration
Cursor IDE
Add the following configuration to your Cursor settings (mcp.json):
{
  "mcpServers": {
    "feedback-loop-mcp": {
      "command": "npx",
      "args": ["feedback-loop-mcp"],
      "timeout": 600,
      "autoApprove": [
        "feedback_loop"
      ]
    }
  }
}
Cline / Windsurf
Similar setup principles apply. Configure the server command in your MCP settings:
{
  "mcpServers": {
    "feedback-loop-mcp": {
      "command": "npx",
      "args": ["feedback-loop-mcp"]
    }
  }
}
Claude Desktop
Add to your Claude Desktop configuration:
{
  "mcpServers": {
    "feedback-loop-mcp": {
      "command": "npx",
      "args": ["feedback-loop-mcp"]
    }
  }
}
Usage
Running the Server
Via npx (Recommended)
npx feedback-loop-mcp
Via Global Installation
feedback-loop-mcp
Local Development
npm start
Command Line Arguments
The application accepts the following command-line arguments:
--project-directory <path>: Set the project directory
--prompt <text>: Set the initial prompt/summary text
Example:
npm start -- --project-directory "/path/to/project" --prompt "Please review this code"
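For reference, a minimal sketch of how the Electron main process could read these two flags is shown below; the actual parsing in main.js may differ (for example, it may use a dedicated argument parser).

// Minimal sketch (not the actual main.js): read the two documented flags
// from process.argv, falling back to sensible defaults.
function parseArgs(argv) {
  const options = { projectDirectory: process.cwd(), prompt: "" };
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === "--project-directory" && argv[i + 1]) {
      options.projectDirectory = argv[++i];
    } else if (argv[i] === "--prompt" && argv[i + 1]) {
      options.prompt = argv[++i];
    }
  }
  return options;
}

const { projectDirectory, prompt } = parseArgs(process.argv.slice(2));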
Available Tools
The MCP server provides the following tool:
feedback_loop: Displays a UI for collecting user feedback and returns the response
Example usage in AI assistants:
{
  "tool_name": "feedback_loop",
  "arguments": {
    "project_directory": "/path/to/your/project",
    "summary": "I've implemented the changes you requested and refactored the main module."
  }
}
Prompt Engineering
For the best results, add the following to your custom prompt in your AI assistant:
Whenever you want to ask a question, always call the MCP feedback_loop tool.
Whenever you're about to complete a user request, call the MCP feedback_loop tool instead of simply ending the process.
Keep calling the feedback_loop tool until the user's feedback is empty, then end the request.
This ensures your AI assistant uses this MCP server to request user feedback before marking tasks as completed.
Benefits
By guiding the assistant to check in with the user instead of branching out into speculative, high-cost tool calls, this module can drastically reduce the number of premium requests (e.g., OpenAI tool invocations) on platforms like Cursor. In some cases, it helps consolidate what would be up to 25 tool calls into a single, feedback-aware request — saving resources and improving performance.
Project Structure
feedback-loop-mcp/
├── main.js            # Main Electron process
├── preload.js         # Preload script for secure IPC
├── package.json       # Project configuration
├── README.md          # This file
├── assets/            # Static assets
│   └── feedback.png   # Application icon
├── renderer/          # Renderer process files
│   ├── index.html     # Main UI
│   ├── styles.css     # Styling
│   └── renderer.js    # UI logic
└── server/            # MCP server
    └── mcp-server.js  # Node.js MCP server
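server/mcp-server.js is where the feedback_loop tool is exposed to MCP clients. The snippet below is a minimal sketch of how such a tool could be registered, assuming the official @modelcontextprotocol/sdk and zod packages and a hypothetical launchFeedbackUi helper; it illustrates the pattern rather than reproducing the actual implementation.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "feedback-loop-mcp", version: "1.0.0" });

// Register the feedback_loop tool: open the feedback UI, wait for the user's
// response, and return it to the AI assistant as text content.
server.tool(
  "feedback_loop",
  {
    project_directory: z.string().describe("Path to the active project"),
    summary: z.string().describe("Summary of the work for the user to review"),
  },
  async ({ project_directory, summary }) => {
    const feedback = await launchFeedbackUi(project_directory, summary);
    return { content: [{ type: "text", text: JSON.stringify(feedback) }] };
  }
);

// Hypothetical helper: the real server would spawn the Electron app with
// --project-directory and --prompt and wait for the JSON it writes back.
async function launchFeedbackUi(projectDirectory, prompt) {
  return { feedback: "", project_directory: projectDirectory, prompt };
}

await server.connect(new StdioServerTransport());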
Configuration
The application automatically saves settings using Electron's built-in storage:
- General settings: Window size, position, and UI preferences
- Project-specific settings: Command history and project-specific configurations
Settings are stored in the standard application data directory for each platform.
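As a rough illustration of the pattern, per-project settings can be written as JSON files under Electron's userData directory; the helpers below are a sketch under that assumption (file naming is hypothetical), not the project's actual storage code.

const { app } = require("electron");
const fs = require("fs");
const path = require("path");

// Sketch only: one JSON settings file per project, kept in the platform's
// standard application data directory via Electron's app.getPath("userData").
function settingsFile(projectName) {
  return path.join(app.getPath("userData"), `settings-${projectName}.json`);
}

function saveSettings(projectName, settings) {
  fs.writeFileSync(settingsFile(projectName), JSON.stringify(settings, null, 2));
}

function loadSettings(projectName) {
  const file = settingsFile(projectName);
  return fs.existsSync(file) ? JSON.parse(fs.readFileSync(file, "utf8")) : {};
}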
Features Overview
Feedback Collection
- Rich text feedback input
- Automatic saving of feedback
- JSON output format for easy integration
- Timestamp and project information included
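For illustration only, the collected feedback could be serialized along these lines (field names are hypothetical, not taken from the implementation):

// Illustrative shape only; the actual output fields may differ.
const exampleFeedback = {
  feedback: "Looks good, but please add unit tests for the new parser.",
  project_directory: "/path/to/your/project",
  timestamp: "2025-01-01T12:00:00.000Z",
};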
Development
For development and build instructions, see DEVELOPMENT.md. Built applications will be available in the dist directory.
Troubleshooting
Common Issues
- MCP server not connecting: Ensure the server is running and the configuration is correct
- npx command not found: Make sure Node.js and npm are properly installed
- Permission errors: On Unix systems, you may need to make the binary executable
Debug Mode
Run with debug output:
DEBUG=* npx feedback-loop-mcp
License
MIT License - see package.json for details.