MCP Pinecone Vector Database Server
This project implements a Model Context Protocol (MCP) server that allows reading and writing vectorized information to a Pinecone vector database. It's designed to work with both RAG-processed PDF data and Confluence data.
Features
- Search for similar documents using text queries
- Add new vectors to the database with custom metadata
- Process and upload Confluence data in batch
- Delete vectors by ID
- Basic database statistics (temporarily disabled)
Prerequisites
- Bun runtime
- Pinecone API key
- OpenAI API key (for generating embeddings)
Installation
- Clone this repository
- Install dependencies:
  bun install
- Create a .env file with the following content:
  PINECONE_API_KEY=your-pinecone-api-key
  OPENAI_API_KEY=your-openai-api-key
  PINECONE_HOST=your-pinecone-host
  PINECONE_INDEX_NAME=your-index-name
  DEFAULT_NAMESPACE=your-namespace
Usage
Running the MCP Server
Start the server:
bun src/index.ts
The server will start and listen for MCP commands via stdio.
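For orientation, a tool on a stdio MCP server is typically registered with the MCP TypeScript SDK roughly as shown below. This is a minimal sketch, not the actual contents of src/index.ts: the server name, the zod schema, and the searchPinecone helper are placeholders.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder for the real Pinecone query logic (hypothetical helper).
async function searchPinecone(query: string, topK: number): Promise<unknown[]> {
  return [];
}

// Server name/version are illustrative, not necessarily what src/index.ts uses.
const server = new McpServer({ name: "pinecone-vector-db", version: "1.0.0" });

// Example registration of a tool shaped like search-vectors (see Available Tools).
server.tool(
  "search-vectors",
  { query: z.string(), topK: z.number().min(1).max(100).default(5) },
  async ({ query, topK }) => {
    const matches = await searchPinecone(query, topK);
    return { content: [{ type: "text", text: JSON.stringify(matches) }] };
  },
);

// Serve MCP requests over stdio.
await server.connect(new StdioServerTransport());
```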
Running the Example Client
Test the server with the example client:
bun examples/client.ts
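If you prefer to script your own client rather than use examples/client.ts, connecting over stdio with the MCP TypeScript SDK looks roughly like this. The client name is an arbitrary placeholder; this is a sketch, not the example client itself.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a child process and communicate over stdio.
const transport = new StdioClientTransport({ command: "bun", args: ["src/index.ts"] });
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });

await client.connect(transport);

// List the tools the server exposes.
console.log(JSON.stringify(await client.listTools(), null, 2));
```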
Processing Confluence Data
The Confluence processing script provides detailed logging and verification:
bun src/scripts/process-confluence.ts <file-path> [collection] [scope]
Parameters:
- file-path: Path to your Confluence JSON file (required)
- collection: Document collection name (defaults to "documentation")
- scope: Document scope (defaults to "documentation")
Example:
bun src/scripts/process-confluence.ts ./data/confluence-export.json "tech-docs" "engineering"
The script will:
- Validate input parameters
- Process and vectorize the content
- Upload vectors in batches
- Verify successful upload
- Provide detailed logs of the process
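Conceptually, batch upload means embedding each record and upserting the resulting vectors in chunks. The sketch below shows that flow with the OpenAI and Pinecone clients; the record shape, embedding model, and metadata fields are assumptions for illustration, not the script's exact logic.

```typescript
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index(process.env.PINECONE_INDEX_NAME!);

// Hypothetical record shape; the real Confluence export fields may differ.
type ConfluenceRecord = { id: string; title: string; content: string };

async function uploadBatch(records: ConfluenceRecord[], namespace: string) {
  // Embed the text content of each record in one request.
  const embeddings = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed model; check the script for the one actually used
    input: records.map((r) => r.content),
  });

  // One vector per record, carrying its metadata alongside the embedding.
  const vectors = records.map((r, i) => ({
    id: r.id,
    values: embeddings.data[i].embedding,
    metadata: { title: r.title, content: r.content, source: "confluence" },
  }));

  await index.namespace(namespace).upsert(vectors);
}
```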
Available Tools
The server provides the following tools:
- search-vectors - Search for similar documents with parameters:
  - query: string (search query text)
  - topK: number (1-100, default: 5)
  - filter: object (optional filter criteria)
- add-vector - Add a single document with parameters:
  - text: string (content to vectorize)
  - metadata: object (vector metadata)
  - id: string (optional custom ID)
- process-confluence - Process Confluence JSON data with parameters:
  - filePath: string (path to JSON file)
  - namespace: string (optional, defaults to "capella-document-search")
- delete-vectors - Delete vectors with parameters:
  - ids: string[] (list of vector IDs)
  - namespace: string (optional, defaults to "capella-document-search")
- get-stats - Get database statistics (temporarily disabled)
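As a concrete example, a connected client (see the client sketch above) can invoke search-vectors like this; the query and filter values are placeholders.

```typescript
// Assumes `client` is an already-connected MCP Client instance (see the client sketch above).
const result = await client.callTool({
  name: "search-vectors",
  arguments: {
    query: "How do I rotate API keys?",
    topK: 5,
    filter: { source: "confluence" },
  },
});

console.log(JSON.stringify(result, null, 2));
```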
Database Configuration
The server requires a Pinecone vector database. Configure the connection details in your .env file:
PINECONE_API_KEY=your-api-key
PINECONE_HOST=your-host
PINECONE_INDEX_NAME=your-index
DEFAULT_NAMESPACE=your-namespace
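One way these values might be read and validated at startup is sketched below; the variable names match the .env keys above, while the fail-fast helper and default namespace fallback are illustrative assumptions.

```typescript
import { Pinecone } from "@pinecone-database/pinecone";

// Fail fast if a required setting is missing (hypothetical helper).
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

const pinecone = new Pinecone({ apiKey: requireEnv("PINECONE_API_KEY") });
// The host override is optional; the index name alone is usually enough.
const index = pinecone.index(requireEnv("PINECONE_INDEX_NAME"), process.env.PINECONE_HOST);
const namespace = process.env.DEFAULT_NAMESPACE ?? "capella-document-search";
```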
Metadata Schema
Confluence Documents
ID: confluence-[page-id]-[item-id]
title: [title]
pageId: [page-id]
spaceKey: [space-key]
type: [type]
content: [text-content]
author: [author-name]
source: "confluence"
collection: "documentation"
scope: "documentation"
...
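For reference, the same schema expressed as a TypeScript type, derived from the fields listed above; the actual script may attach additional fields, as the trailing ellipsis suggests.

```typescript
// Metadata attached to each Confluence vector, per the schema above.
// Vector IDs follow the pattern: confluence-[page-id]-[item-id]
interface ConfluenceVectorMetadata {
  title: string;
  pageId: string;
  spaceKey: string;
  type: string;
  content: string;
  author: string;
  source: "confluence";
  collection: string; // e.g. "documentation"
  scope: string; // e.g. "documentation"
}
```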
Contributing
- Fork the repository
- Create your feature branch:
  git checkout -b feature/my-new-feature
- Commit your changes:
  git commit -am 'Add some feature'
- Push to the branch:
  git push origin feature/my-new-feature
- Submit a pull request
License
MIT