FEGIS
FEGIS is a Model Context Protocol server that gives LLMs structured, persistent, and portable memory through customizable, schema-defined cognitive tools.
FEGIS is a runtime framework, built on Anthropic's Model Context Protocol, for structured cognition and persistent memory in language models. It lets schema-defined cognitive modes be dynamically registered, invoked, and stored as structured memory using vector embeddings and semantic context. Think: programmable thinking tools with recallable memory.
FEGIS is not a cognitive system — it's the foundation for building your own.
Key Capabilities
- Schema-Defined Cognition: Define custom cognitive modes in YAML with structured fields and metadata
- Persistent Memory: Store cognitive artifacts with full provenance (mode, UUID, timestamp, metadata) providing breadcrumbs to traverse cognitive history
- Semantic Retrieval: Search for previous thoughts by content similarity or direct UUID lookup
- Vectorized Storage: Utilize embeddings for efficient semantic search across cognitive artifacts
- Model-Agnostic Format: Your cognitive artifacts persist across different models and sessions
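The provenance described above (mode, UUID, timestamp, metadata) can be pictured as a simple record. The sketch below is an illustration of that shape in Python, not the actual FEGIS data model:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CognitiveArtifact:
    """Illustrative shape of a stored artifact with provenance.

    Hypothetical: field names are assumptions, not the FEGIS schema.
    """
    mode: str  # which cognitive mode produced it, e.g. "reflect"
    content: str  # the structured thought itself
    artifact_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    metadata: dict = field(default_factory=dict)

note = CognitiveArtifact(mode="reflect", content="Revisit the archetype design.")
print(note.mode, note.artifact_id)
```

The UUID and timestamp are exactly the "breadcrumbs" that let later sessions look an artifact up directly or walk the cognitive history in order.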
What FEGIS Is
FEGIS is:
- A runtime system for defining and executing schema-based thinking tools
- A way to store structured cognitive artifacts with semantic and relational metadata
- A vectorized memory system with built-in retrieval and recall
- A way to create agents whose thinking can reference, reflect on, and build upon prior cognitive artifacts
- A system where you own and host your memory — everything is local, inspectable, and portable
- A model-agnostic format — your memory persists across different models and future releases
Over time, FEGIS helps build a personal Cognitive Archive — a persistent, structured body of thought that can be searched, retrieved, extended, and carried forward across models, sessions, and time.
Architecture
FEGIS consists of several key components:
- Archetype Definitions: YAML files that define cognitive modes and their structure
- FastMCP Server: Exposes cognitive tools to compatible LLM clients
- Qdrant Vector Database: Stores and indexes cognitive artifacts for semantic retrieval
- Dynamic Tool Registration: Creates MCP tools from archetype definitions at runtime
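The last component, dynamic tool registration, can be sketched in a few lines: a parsed archetype maps mode names to field lists, and each mode becomes a callable that validates its fields and emits a structured artifact. Everything here is a toy stand-in (the dict is not the FEGIS schema, and real registration goes through FastMCP):

```python
# Hypothetical stand-in for a parsed YAML archetype; not the FEGIS schema.
archetype = {
    "reflect": {"fields": ["subject", "insight"]},
    "plan": {"fields": ["goal", "steps"]},
}

def make_tool(mode: str, fields: list[str]):
    """Build a callable 'tool' for one cognitive mode."""
    def tool(**kwargs):
        missing = [f for f in fields if f not in kwargs]
        if missing:
            raise ValueError(f"{mode}: missing fields {missing}")
        return {"mode": mode, **kwargs}  # a structured artifact
    tool.__name__ = mode
    return tool

# "Registration": one tool per mode, created at runtime from the schema.
tools = {m: make_tool(m, d["fields"]) for m, d in archetype.items()}
print(tools["reflect"](subject="memory", insight="provenance matters"))
# → {'mode': 'reflect', 'subject': 'memory', 'insight': 'provenance matters'}
```

Swapping in a different YAML file changes which tools exist without touching server code, which is the point of defining cognition in schema.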
Quickstart
1. Install uv and clone the repo

```shell
# Install uv (modern Python package/runtime manager)

# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
winget install --id=astral-sh.uv -e

# Clone the repo
git clone https://github.com/p-funk/FEGIS.git
```
2. Install and start Qdrant
Make sure Docker is installed and running:

```shell
docker run -d --name qdrant -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest
```
If you need to install Docker, follow the official Docker installation guide for your platform.
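Once the container is up, you can sanity-check that Qdrant is answering before wiring it into the client. A minimal stdlib sketch, assuming the default REST port from the command above and Qdrant's `/collections` listing endpoint:

```python
import urllib.error
import urllib.request

def qdrant_is_up(url: str = "http://localhost:6333", timeout: float = 2.0) -> bool:
    """Return True if the Qdrant REST endpoint answers, False otherwise."""
    try:
        # GET /collections lists collections and needs no body or auth by default.
        with urllib.request.urlopen(f"{url}/collections", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Qdrant reachable:", qdrant_is_up())
```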
3. Configure Claude Desktop
Create or edit the Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Paste the following, and replace the placeholder path with the full path to your local FEGIS clone:
```json
{
  "mcpServers": {
    "mcp-fegis-server": {
      "command": "uv",
      "args": [
        "--directory",
        "<FEGIS_PATH>",
        "run",
        "fegis"
      ],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "QDRANT_GRPC_PORT": "6334",
        "QDRANT_PREFER_GRPC": "true",
        "QDRANT_API_KEY": "",
        "COLLECTION_NAME": "cognitive_archive",
        "FAST_EMBED_MODEL": "nomic-ai/nomic-embed-text-v1.5",
        "CONFIG_PATH": "<FEGIS_PATH>/archetypes/example.yaml"
      }
    }
  }
}
```
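The `env` entries above are plain environment variables handed to the server process. As a hedged illustration of how a server like this might pick them up (the defaults below are assumptions, not FEGIS's actual settings loader):

```python
import os

# Illustrative defaults mirroring the config above; the real FEGIS
# loader may name or validate these differently.
QDRANT_URL = os.environ.get("QDRANT_URL", "http://localhost:6333")
QDRANT_PREFER_GRPC = os.environ.get("QDRANT_PREFER_GRPC", "false").lower() == "true"
COLLECTION_NAME = os.environ.get("COLLECTION_NAME", "cognitive_archive")

print(QDRANT_URL, QDRANT_PREFER_GRPC, COLLECTION_NAME)
```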
Creating Custom Archetypes
FEGIS is fundamentally a framework for implementing cognitive architectures. The example archetype provided is just one possible configuration focusing on introspective thought processes.
You can create your own custom archetypes by:
- Creating a new YAML file in the `archetypes` directory
- Defining your own cognitive modes, fields, and facets
- Updating the `CONFIG_PATH` in the Claude Desktop configuration
For detailed guidance on designing effective archetypes, see Effective FEGIS Archetype Design.
For example, you could create archetypes for:
- Problem-solving processes
- Creative workflows
- Analytical thinking frameworks
- Domain-specific reasoning patterns
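As a rough sketch of what a problem-solving archetype file might look like, here is a hypothetical YAML definition. The top-level keys and field names are illustrative only; consult the bundled example archetype and the design guide for the schema FEGIS actually expects:

```yaml
# archetypes/problem_solving.yaml — hypothetical sketch; key and field
# names are illustrative, not the authoritative FEGIS schema.
archetype: problem_solving
modes:
  diagnose:
    description: Break a problem into observable symptoms and likely causes
    fields:
      symptom: What is going wrong, concretely
      hypothesis: The most plausible cause
  resolve:
    description: Propose and evaluate a fix
    fields:
      approach: The proposed fix
      tradeoffs: What it costs or risks
```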
Using FEGIS Tools
FEGIS tools are made available to the model at runtime, but they are not used automatically.
Tool Priming
To encourage a model to use the cognitive tools, you need to prime it with appropriate instructions. For example:
```
Throughout our conversation, use your tools naturally and fluidly.
Feel free to reflect, introspect, stay aware, have an innermonologue
or use memory to recall past insights as needed. You can search past
thoughts using `search_memories`, or revisit specific artifacts with
`retrieve_memory`.
```
Memory Usage
The memory system allows for:
- Semantic Search: Find cognitive artifacts based on content similarity
- Direct Retrieval: Look up specific artifacts by their UUID
- Persistent Storage: Artifacts remain available across sessions and models
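Conceptually, semantic search ranks stored artifacts by vector similarity between the query and each artifact's embedding. The toy sketch below does this with hand-made vectors in pure Python; real FEGIS uses FastEmbed embeddings and Qdrant, so this illustrates the idea, not the implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-made 3-d "embeddings" standing in for real model output,
# keyed by artifact UUID (hypothetical data).
memories = {
    "uuid-1": ([0.9, 0.1, 0.0], "Reflection on archetype design"),
    "uuid-2": ([0.0, 0.2, 0.9], "Note about Docker setup"),
}

def search_memories(query_vec, top_k=1):
    """Return the top_k artifacts most similar to the query vector."""
    ranked = sorted(memories.items(),
                    key=lambda kv: cosine(query_vec, kv[1][0]),
                    reverse=True)
    return [(uid, text) for uid, (_, text) in ranked[:top_k]]

print(search_memories([1.0, 0.0, 0.0]))
# → [('uuid-1', 'Reflection on archetype design')]
```

Direct retrieval by UUID is the degenerate case of the same store: a plain key lookup (`memories["uuid-1"]`) with no similarity ranking involved.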
License
Licensed under the PolyForm Noncommercial License 1.0.0.
- Free for personal and non-commercial use
- Commercial license required for resale, integrations, or hosted services
Contact goldenp@ptology.com for commercial licensing.