Semantic Prompt MCP

The Core Thinking Engine for the SuperClaude Framework - an MCP server that helps Claude think systematically, step by step

🎯 What is this?

Semantic Prompt MCP is a tool that helps Claude solve complex problems by breaking them into 3-4 systematic thinking steps. Just as humans solve problems by "understanding first → choosing a method → executing", it guides Claude through the same process.

Core Concepts

  • 🧠 Step-by-Step Thinking: Break complex problems into 3-4 manageable steps
  • 📚 Document Caching: Documents are read once, then cached and referenced on later requests (saving tokens and time)
  • 🎨 Profile System: Different configuration files for different purposes

⚡ Quick Start

1. Run without Installation

npx semantic-prompt-mcp

2. Add to Claude Code

Add to .mcp.json in your project root:

{
  "mcpServers": {
    "semantic-prompt": {
      "command": "npx",
      "args": ["-y", "semantic-prompt-mcp@latest"],
      "env": {
        "CHAIN_OF_THOUGHT_CONFIG": "superclaude.json"
      }
    }
  }
}

🎭 Three Modes (Profiles)

1️⃣ default.json - Basic Mode

npx semantic-prompt-mcp  # or
npx semantic-prompt-mcp default.json
  • Purpose: General problem solving
  • Features: Flexible thinking process, simple 3-step structure
  • Best for: General tasks without special framework requirements

2️⃣ superclaude.json - SuperClaude Mode ⭐

npx semantic-prompt-mcp superclaude.json
  • Purpose: Use with SuperClaude Framework
  • Features:
    • 90% command selection enforcement (systematic execution)
    • Document duplicate read prevention (caching system)
    • 21 dedicated commands support
    • Quality Gates validation system
  • Best for: Required when using SuperClaude Framework

3️⃣ supergemini.json - SuperGemini Mode

npx semantic-prompt-mcp supergemini.json
  • Purpose: Use with SuperGemini Framework
  • Features:
    • 4-step structure (Analysis → TOML Command → Agent Selection → Execution)
    • Commands defined in TOML files
    • Multi-Agent system support
  • Best for: When using SuperGemini Framework

🔄 How It Works

SuperClaude Mode Example (3 Steps)

User: "Analyze security issues in this code"

Step 1️⃣ - Intent Analysis
Claude: "User wants security analysis. Related files are..."

Step 2️⃣ - Command Selection (90% enforced)
Claude: "Selecting 'analyze' command and reading analyze.md document"
System: Provides analyze.md content → Cache saved ✅

Step 3️⃣ - Execution Strategy
Claude: "Following document instructions to execute security analysis..."

🚀 Core Feature: Document Caching System

Documents are never read twice!

First Request:
Claude: "Selecting 'analyze' command"
System: Reads analyze.md → Cache saved ✅

Second Request:
Claude: "Selecting 'analyze' command"
System: "Already read. Refer to system-reminder" ⚡

This significantly reduces token usage and speeds up execution!
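The read-once behavior above amounts to a simple in-memory cache keyed by command name. The following is a minimal sketch of the idea, not the server's actual implementation; `readDocument` is a hypothetical stand-in for loading a command's `.md` file:

```typescript
// Minimal sketch of a read-once document cache (illustrative, not the
// actual semantic-prompt-mcp implementation).
const documentCache = new Map<string, string>();

// Hypothetical stand-in for reading `${command}.md` from the command path.
function readDocument(command: string): string {
  return `Instructions for the '${command}' command`;
}

function getCommandDocument(command: string): { content: string; cached: boolean } {
  const hit = documentCache.get(command);
  if (hit !== undefined) {
    // Second and later requests: serve the cached copy instead of re-reading.
    return { content: hit, cached: true };
  }
  const content = readDocument(command);
  documentCache.set(command, content);
  return { content, cached: false };
}

// First request reads the document; the second is served from cache.
const first = getCommandDocument("analyze");
const second = getCommandDocument("analyze");
console.log(first.cached, second.cached); // false true
```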

🎨 Creating Your Own Custom Profile

Step 1: Create your custom JSON file

Create a new file my-custom.json in any folder (e.g., your project root):

{
  "tool": {
    "name": "my_thinking",
    "description": "My custom thinking process..."
  },
  "config": {
    "availableCommands": ["analyze", "build", "test"],
    "commandPath": "./my-commands/",
    "commandPreference": 0.8
  }
}

Step 2: Use it in Claude Code

Update your .mcp.json:

{
  "mcpServers": {
    "semantic-prompt": {
      "command": "npx",
      "args": ["-y", "semantic-prompt-mcp@latest"],
      "env": {
        "CHAIN_OF_THOUGHT_CONFIG": "./my-custom.json"
      }
    }
  }
}

That's it! Just change the filename in CHAIN_OF_THOUGHT_CONFIG:

Built-in profiles (no path needed):

  • "superclaude.json" - SuperClaude Framework
  • "supergemini.json" - SuperGemini Framework
  • "default.json" - Basic mode

Your custom profiles (need path):

  • "./my-custom.json" - File in your project root
  • "./config/my-profile.json" - File in a subfolder
  • "/absolute/path/to/profile.json" - Absolute path

Why the difference? Built-in profiles ship inside the npm package, while your custom files live in your project, so they need a path.
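The lookup rule can be sketched as follows. This is a hypothetical illustration of the distinction, not the server's actual resolution code, and the package directory shown is a made-up example path:

```typescript
// Hypothetical sketch: bare filenames resolve to bundled profiles under
// prompts/, while anything containing a path separator is treated as a
// user-supplied file. Not the actual implementation.
function resolveProfilePath(
  config: string,
  packageDir = "/pkg/semantic-prompt-mcp" // example install location, not real
): string {
  const isPathLike = config.includes("/") || config.includes("\\");
  if (isPathLike) {
    // Custom profile: relative or absolute path in the user's project.
    return config;
  }
  // Built-in profile shipped with the npm package.
  return `${packageDir}/prompts/${config}`;
}

console.log(resolveProfilePath("superclaude.json")); // /pkg/semantic-prompt-mcp/prompts/superclaude.json
console.log(resolveProfilePath("./my-custom.json")); // ./my-custom.json
```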

📁 Project Structure

semantic-prompt-mcp/
├── prompts/
│   ├── default.json      # Basic profile
│   ├── superclaude.json  # SuperClaude specific
│   └── supergemini.json  # SuperGemini specific
├── src/
│   └── index.ts          # Main server code
├── LICENSE               # MIT License
└── README.md             # This document

🤝 For Developers

Local Development Setup

git clone https://github.com/hyunjae-labs/semantic-prompt-mcp.git
cd semantic-prompt-mcp
npm install
npm run build
npm link  # For local testing

🔧 Troubleshooting

"Document already read" message appears

This is normal! Documents are cached for performance optimization.

Too many console logs

export DISABLE_THOUGHT_LOGGING=true

Cannot find specific command

Check your command path:

export CHAIN_OF_THOUGHT_COMMAND_PATH=/correct/path/to/commands/

📜 License & Attribution

This project is based on the sequential-thinking MCP server.

License

MIT License - Free to use, modify, and distribute

Copyright Notice

  • Original Work: Copyright (c) Model Context Protocol Contributors (sequential-thinking)
  • This Work: Copyright (c) 2025 Hyunjae Lim (thecurrent.lim@gmail.com)

Major Changes from Original

  • Extended 3-step structure to adaptive 3-4 step structure
  • Added SuperClaude/SuperGemini Framework specific profiles
  • Implemented document caching system
  • Added meta-cognitive attention mechanisms
  • Implemented multi-profile system


🚀 Version History

v1.3.0 (Current)

  • Added SuperClaude/SuperGemini profiles
  • Implemented document caching system
  • Added meta-cognitive attention mechanisms

v1.0.0

  • Initial release (based on sequential-thinking)
