
Interactive Feedback MCP
Provides interactive user feedback capabilities for AI assistants, helping reduce excessive tool calls by prompting users for feedback before completing tasks.
Prompt Engineering
For best results, add the following to the custom prompt of your AI assistant, either as a rule or directly in the prompt (e.g., in Cursor):

Whenever you want to ask a question, always call the MCP interactive_feedback.
Whenever you're about to complete a user request, call the MCP interactive_feedback instead of simply ending the process. Keep calling the MCP until the user's feedback is empty, then end the request.
This will ensure your AI assistant uses this MCP server to request user feedback before marking the task as completed.
💡 Why Use This?
By guiding the assistant to check in with the user instead of branching out into speculative, high-cost tool calls, this module can drastically reduce the number of premium requests (e.g., OpenAI tool invocations) on platforms like Cursor. In some cases, it helps consolidate what would be up to 25 tool calls into a single, feedback-aware request — saving resources and improving performance.
Configuration
This MCP server uses Qt's QSettings to store configuration on a per-project basis. This includes:
- The command to run.
- Whether to execute the command automatically on the next startup for that project (see "Execute automatically on next run" checkbox).
- The visibility state (shown/hidden) of the command section (this is saved immediately when toggled).
- Window geometry and state (general UI preferences).
These settings are typically stored in platform-specific locations (e.g., the registry on Windows, plist files on macOS, configuration files in ~/.config or ~/.local/share on Linux) under the organization name "FabioFerreira" and application name "InteractiveFeedbackMCP", with a unique group for each project directory.
The "Save Configuration" button in the UI primarily saves the current command typed into the command input field and the state of the "Execute automatically on next run" checkbox for the active project. The visibility of the command section is saved automatically when you toggle it. General window size and position are saved when the application closes.
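To picture the per-project layout described above, here is a minimal sketch using an INI file (the format QSettings writes on Linux). The key names ("run_command", "execute_auto", "command_section_visible") are assumptions for illustration, not necessarily the server's actual keys:

```python
import configparser

# One group (section) per project directory, as described above.
ini = configparser.ConfigParser()
project = "/path/to/your/project"
ini[project] = {
    "run_command": "npm run dev",
    "execute_auto": "true",
    "command_section_visible": "false",
}

# QSettings would persist this under organization "FabioFerreira" and
# application "InteractiveFeedbackMCP"; here we only inspect it in memory.
print(ini[project]["run_command"])
```

Because each project directory gets its own group, switching between projects never clobbers another project's saved command or checkbox state.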
Installation (Cursor)
- Prerequisites:
  - Python 3.11 or newer.
  - uv (Python package manager). Install it with:
    - Windows: pip install uv
    - Linux/Mac: curl -LsSf https://astral.sh/uv/install.sh | sh
- Get the code:
  - Clone this repository: git clone https://github.com/noopstudios/interactive-feedback-mcp.git
  - Or download the source code.
- Navigate to the directory: cd path/to/interactive-feedback-mcp
- Install dependencies: uv sync (this creates a virtual environment and installs packages)
- Run the MCP server: uv run server.py
- Configure in Cursor:
  - Cursor typically allows specifying custom MCP servers in its settings. You'll need to point Cursor at this running server. The exact mechanism may vary, so consult Cursor's documentation for adding custom MCPs.
  - Manual configuration (e.g., via mcp.json). Remember to change the /Users/fabioferreira/Dev/scripts/interactive-feedback-mcp path to the actual path where you cloned the repository on your system:

    {
      "mcpServers": {
        "interactive-feedback-mcp": {
          "command": "uv",
          "args": [
            "--directory",
            "/Users/fabioferreira/Dev/scripts/interactive-feedback-mcp",
            "run",
            "server.py"
          ],
          "timeout": 600,
          "autoApprove": [
            "interactive_feedback"
          ]
        }
      }
    }

  - You might use a server identifier like interactive-feedback-mcp when configuring it in Cursor.
For Cline / Windsurf
Similar setup principles apply. Configure the server command (e.g., uv run server.py with the correct --directory argument pointing to the project directory) in the respective tool's MCP settings, using interactive-feedback-mcp as the server identifier.
Development
To run the server in development mode with a web interface for testing:
uv run fastmcp dev server.py
This will open a web interface and allow you to interact with the MCP tools for testing.
Available tools
Here's an example of how the AI assistant would call the interactive_feedback tool:
<use_mcp_tool>
<server_name>interactive-feedback-mcp</server_name>
<tool_name>interactive_feedback</tool_name>
<arguments>
{
"project_directory": "/path/to/your/project",
"summary": "I've implemented the changes you requested and refactored the main module."
}
</arguments>
</use_mcp_tool>
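Under the hood, MCP tool calls travel as JSON-RPC 2.0 messages. A rough sketch of the request a client would send for the call above (the envelope follows the MCP "tools/call" method; the argument values mirror the example):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tool call ("tools/call" method).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "interactive_feedback",
        "arguments": {
            "project_directory": "/path/to/your/project",
            "summary": (
                "I've implemented the changes you requested "
                "and refactored the main module."
            ),
        },
    },
}

# The message is serialized to JSON before being written to the
# server process (MCP's stdio transport).
payload = json.dumps(request)
print(payload)
```

Your MCP client (Cursor, Cline, etc.) builds and sends this message for you; the sketch is only to show what the tool invocation looks like on the wire.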