
File Context MCP (Model Context Processor)
Overview
File Context MCP is a TypeScript-based application that provides an API for querying Large Language Models (LLMs) with context from local files. It supports multiple LLM providers (Ollama and Together.ai) and can process various file types to generate context-aware responses.
Core Features
1. File System Navigation
- Dynamic file and directory traversal
- Support for multiple file types (.txt, .md, .ts, .json, etc.)
- Safe path handling with sanitization (see the sketch after the snippet below)
import path from 'path';

export const fileUtils = {
  isTextFile(filePath: string): boolean {
    const textExtensions = [
      '.txt', '.md', '.js', '.ts', '.json', '.yaml', '.yml',
      '.html', '.css', '.csv', '.xml', '.log', '.env',
      '.jsx', '.tsx', '.py', '.java', '.cpp', '.c', '.h'
    ];
    return textExtensions.includes(path.extname(filePath).toLowerCase());
  },
  // ... other helpers (path sanitization, size formatting) omitted
};
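The path sanitization itself is not shown in the snippet above. A minimal sketch of what such a helper could look like follows; the name sanitizePath, the default root directory, and the exact behavior are assumptions for illustration, not the project's confirmed implementation:

```typescript
import path from 'path';

// Hypothetical sketch: resolve the requested path against an allowed root and
// reject anything that escapes it (guards against "../" traversal).
export function sanitizePath(requestedPath: string, rootDir: string = './storage'): string {
  const resolvedRoot = path.resolve(rootDir);
  const resolvedPath = path.resolve(resolvedRoot, requestedPath);
  if (resolvedPath !== resolvedRoot && !resolvedPath.startsWith(resolvedRoot + path.sep)) {
    throw new Error(`Path "${requestedPath}" is outside the allowed directory`);
  }
  return resolvedPath;
}
```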
2. Context Processing
- Intelligent context formatting for LLM queries
- Context truncation to handle large files
- File content aggregation for directory queries
export const promptUtils = {
  formatContextPrompt(context: string, query: string): string {
    return `
You are an AI assistant analyzing the following content:
---BEGIN CONTEXT---
${context}
---END CONTEXT---
Please respond to the following query:
${query}
Base your response only on the information provided in the context above.
`;
  },

  truncateContext(context: string, maxLength: number = 4000): string {
    if (context.length <= maxLength) return context;

    // Try to truncate at a natural break point
    const truncated = context.slice(0, maxLength);
    const lastNewline = truncated.lastIndexOf('\n');
    if (lastNewline > maxLength * 0.8) {
      return truncated.slice(0, lastNewline) + '\n... (truncated)';
    }
    return truncated + '... (truncated)';
  }
};
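For illustration, here is a hedged example of how these helpers might be combined before querying a model. The import path and the sample file contents are assumptions based on the project structure shown later in this README:

```typescript
import { promptUtils } from './utils/promptUtils';

// Hypothetical aggregated file contents used as the context.
const fileContents = ['// example.ts\nexport const answer = 42;'];
const context = promptUtils.truncateContext(fileContents.join('\n\n'));
const prompt = promptUtils.formatContextPrompt(context, 'Summarize these files');
console.log(prompt);
```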
3. Multi-Model Support
- Ollama (local) integration
- Together.ai (cloud) integration
- Extensible model interface design
import axios from 'axios';
import { config } from '../config/config'; // assumed relative import based on the project structure

export interface LLMResponse {
  text: string;
  model: string;
  error?: string;
}

export class ModelInterface {
  async queryOllama(prompt: string, context: string): Promise<LLMResponse> {
    try {
      const response = await axios.post(`${config.ollamaBaseUrl}/api/generate`, {
        model: config.modelName,
        prompt: this.formatPrompt(prompt, context),
        stream: false
      });

      if (!response.data || !response.data.response) {
        throw new Error('Invalid response from Ollama');
      }

      return {
        text: response.data.response,
        model: 'ollama'
      };
    } catch (error) {
      console.error('Ollama error:', error);
      return {
        text: '',
        model: 'ollama',
        error: error instanceof Error ? error.message : 'Unknown error'
      };
    }
  }

  async queryTogether(prompt: string, context: string): Promise<LLMResponse> {
    try {
      const response = await axios.post(
        'https://api.together.xyz/inference',
        {
          model: config.modelName,
          prompt: this.formatPrompt(prompt, context),
          max_tokens: 512,
        },
        {
          headers: {
            Authorization: `Bearer ${config.togetherApiKey}`
          }
        }
      );

      return {
        text: response.data.output.text,
        model: 'together'
      };
    } catch (error) {
      return {
        text: '',
        model: 'together',
        error: error instanceof Error ? error.message : 'Unknown error'
      };
    }
  }

  private formatPrompt(prompt: string, context: string): string {
    return `Context: ${context}\n\nQuestion: ${prompt}`;
  }
}
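A brief, hedged usage sketch; it assumes the class is exported from core/modelInterface.ts as shown in the project structure, and the prompt and context values are purely illustrative:

```typescript
import { ModelInterface } from './core/modelInterface';

async function main() {
  const models = new ModelInterface();
  // Hypothetical prompt and context for illustration only.
  const result = await models.queryOllama(
    'What does this snippet do?',
    'export const answer = 42;'
  );
  console.log(result.error ?? result.text);
}

main();
```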
Architecture
Core Components
- Server (server.ts)
  - Express.js REST API implementation
  - File upload/delete handling with multer
  - Request validation and routing
  - OpenAPI/Swagger integration
- FileSystemTools (core/fileSystem.ts)
  - File and directory operations
  - Content reading and parsing
  - Directory traversal
  - Secure file deletion
  - Error handling for file operations
- ModelInterface (core/modelInterface.ts)
  - Multiple LLM provider support (Ollama, Together.ai)
  - Response formatting and error handling
  - Configurable model parameters
  - Unified query interface
- Utility Modules
  - fileUtils: File type detection, path sanitization, size formatting
  - promptUtils: Context formatting, intelligent truncation
  - validators: Path, query, and model validation
  - logger: Structured logging with levels
- Configuration (config/config.ts)
  - Environment variable management
  - API keys and endpoints
  - Model configuration
  - Server settings (a sketch of this module appears after this list)
- API Specification (resources/file-context-api.yml)
  - OpenAPI 3.0 documentation
  - Request/response schemas
  - Endpoint documentation
  - Error response definitions
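As referenced in the Configuration item above, here is a minimal sketch of what config/config.ts could look like, derived from the environment variables listed under Setup and Configuration. The use of dotenv and the exact field names are assumptions (the field names mirror those used in the ModelInterface snippet):

```typescript
import dotenv from 'dotenv';

dotenv.config();

// Assumed shape: fields mirror the environment variables documented
// in "Setup and Configuration".
export const config = {
  togetherApiKey: process.env.TOGETHER_API_KEY ?? '',
  ollamaBaseUrl: process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434',
  modelName: process.env.MODEL_NAME ?? 'llama2',
  port: Number(process.env.PORT ?? 3001),
};
```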
API Endpoints
1. List Files
GET /api/files
Query params:
- path: string (optional, defaults to './')
Response:
- Array of FileInfo objects with file/directory details
2. Upload File
POST /api/files/upload
Content-Type: multipart/form-data
Body:
- file: File (must be a text file, max 5MB)
Response:
{
"message": "File uploaded successfully",
"file": {
"name": string,
"size": string,
"path": string
}
}
3. Delete File
DELETE /api/files/{filename}
Params:
- filename: string (name of file to delete)
Response:
{
"message": "File deleted successfully"
}
4. Query with Context
POST /api/query
Body:
{
"path": string,
"query": string,
"model": "ollama" | "together"
}
Response:
{
"text": string,
"model": string,
"error?: string
}
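For example, a hedged sketch of calling the query endpoint from TypeScript; it assumes the server is running locally on the default port 3001, and the path and query values are illustrative:

```typescript
// Hypothetical client call against the /api/query endpoint.
const response = await fetch('http://localhost:3001/api/query', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    path: './storage/notes',
    query: 'Summarize these notes',
    model: 'ollama'
  })
});

const result = await response.json();
console.log(result.text ?? result.error);
```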
Setup and Configuration
- Environment Variables
TOGETHER_API_KEY=your_api_key_here
OLLAMA_BASE_URL=http://localhost:11434
MODEL_NAME=llama2
PORT=3001
- Installation
npm install
Installing via Smithery
To install File Context MCP for Claude Desktop automatically via Smithery:
npx @smithery/cli@latest install @compiledwithproblems/file-context-mcp --client claude
- Running the Application
# Development
npm run dev
# Production
npm run build
npm start
How It Works
- File Processing Flow (a sketch of this flow appears after this list)
  - Request received → Path validation → File reading → Content extraction
  - Directory handling includes recursive file reading
  - Content filtering based on file type
  - File uploads are validated for type and size
  - Secure file deletion with path validation
- Context Processing
  - File contents are aggregated
  - Context is formatted with clear boundaries
  - Large contexts are intelligently truncated
  - Prompt formatting adds structure for LLM understanding
- Model Integration
  - Unified interface for different LLM providers
  - Error handling and response normalization
  - Configurable model parameters
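A hedged sketch of how server.ts might wire this flow together. The route shape matches the documented /api/query endpoint, but the handler body, the readContents stand-in, and the error messages are assumptions, not the project's confirmed code:

```typescript
import express from 'express';
import fs from 'fs/promises';
import { ModelInterface } from './core/modelInterface';
import { promptUtils } from './utils/promptUtils';

const app = express();
app.use(express.json());

const models = new ModelInterface();

// Stand-in for the FileSystemTools read/aggregate step: reads a single file.
// The real project also handles directories and file-type filtering.
async function readContents(filePath: string): Promise<string> {
  return fs.readFile(filePath, 'utf-8');
}

// Assumed handler shape: validate input, build the context, then dispatch
// to the selected provider via the unified ModelInterface.
app.post('/api/query', async (req, res) => {
  const { path: targetPath, query, model } = req.body;
  if (!targetPath || !query || !['ollama', 'together'].includes(model)) {
    res.status(400).json({ error: 'Invalid request body' });
    return;
  }

  const context = promptUtils.truncateContext(await readContents(targetPath));
  const result =
    model === 'ollama'
      ? await models.queryOllama(query, context)
      : await models.queryTogether(query, context);

  res.json(result);
});

app.listen(3001);
```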
Security Features
- Path Sanitization
  - Prevention of directory traversal attacks
  - Path validation and normalization
  - Safe file type checking
- File Upload Security (see the upload-configuration sketch after this list)
  - File type validation
  - File size limits (5MB max)
  - Secure file storage
  - Safe file deletion
- Input Validation
  - Query content validation
  - Model type verification
  - Path structure verification
  - File content validation
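A minimal sketch of how the upload limits and type checks could be enforced with multer, as referenced in the File Upload Security item above. The storage destination, import path, and exact options are assumptions:

```typescript
import multer from 'multer';
import { fileUtils } from './utils/fileUtils';

// Assumed configuration: store uploads under ./storage, cap size at 5MB,
// and reject anything that is not a supported text file.
export const upload = multer({
  dest: './storage',
  limits: { fileSize: 5 * 1024 * 1024 },
  fileFilter: (_req, file, cb) => {
    if (fileUtils.isTextFile(file.originalname)) {
      cb(null, true);
    } else {
      cb(new Error('Unsupported file type'));
    }
  }
});
```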
Supported File Types
The application supports the following text-based file types:
- Documentation: .txt, .md
- Code files: .js, .ts, .jsx, .tsx, .py, .java, .cpp, .c, .h
- Configuration: .json, .yaml, .yml, .env
- Web files: .html, .css
- Data files: .csv, .xml, .log
File type validation is enforced during:
- File uploads
- Context processing
- File reading operations
Maximum file size: 5MB per file
Error Handling
The application implements comprehensive error handling:
- File system errors
- API response errors
- Invalid input errors
- Model-specific errors
- File upload/deletion errors
Development
Project Structure
file-context-mcp/
├── src/
│   ├── server.ts                  # Main application server
│   ├── core/                      # Core functionality
│   │   ├── fileSystem.ts          # File operations handling
│   │   └── modelInterface.ts      # LLM provider integrations
│   ├── utils/                     # Utility functions
│   │   ├── fileUtils.ts           # File type & path utilities
│   │   ├── promptUtils.ts         # Prompt formatting
│   │   ├── validators.ts          # Input validation
│   │   └── logger.ts              # Application logging
│   ├── config/                    # Configuration
│   │   └── config.ts              # Environment & app config
│   └── resources/                 # API specifications
│       └── file-context-api.yml   # OpenAPI spec
├── storage/                       # File storage directory
│   ├── code-samples/              # Example code files
│   └── notes/                     # Documentation & notes
├── postman/                       # API testing
│   └── File-Context-MCP.postman_collection.json  # Postman collection
├── dist/                          # Compiled output
└── node_modules/                  # Dependencies
Adding New Features
- New File Types
  - Add extensions to fileUtils.isTextFile()
  - Implement specific handlers if needed
- New Model Providers (see the sketch after this list)
  - Extend the ModelInterface class
  - Add the provider to validators.isValidModel()
  - Implement provider-specific error handling
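A hedged sketch of the extension pattern for a new provider. The provider name, endpoint, and response shape are purely illustrative and do not exist in the project:

```typescript
import axios from 'axios';
import { config } from './config/config';
import { LLMResponse, ModelInterface } from './core/modelInterface';

// Hypothetical provider "exampleai": illustrates the pattern of adding a new
// query method that returns the same LLMResponse shape as the built-in providers.
export class ExtendedModelInterface extends ModelInterface {
  async queryExampleAi(prompt: string, context: string): Promise<LLMResponse> {
    try {
      const response = await axios.post('https://api.example-ai.test/v1/generate', {
        model: config.modelName,
        prompt: `Context: ${context}\n\nQuestion: ${prompt}`
      });
      return { text: response.data.text, model: 'exampleai' };
    } catch (error) {
      return {
        text: '',
        model: 'exampleai',
        error: error instanceof Error ? error.message : 'Unknown error'
      };
    }
  }
}
```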
Testing
Postman Collection
The project includes a Postman collection (postman/File-Context-MCP.postman_collection.json) for testing all API endpoints. To use it:
- Import the Collection
  - Open Postman
  - Click the "Import" button
  - Select or drag the File-Context-MCP.postman_collection.json file
- Available Requests
  File-Context-MCP
  ├── List files
  │   └── GET http://localhost:3001/api/files?path=./storage
  ├── Query
  │   └── POST http://localhost:3001/api/query (single file analysis)
  ├── Analyze multiple files
  │   └── POST http://localhost:3001/api/query (directory analysis)
  └── File Upload
      └── POST http://localhost:3001/api/files/upload
- Testing File Operations
  - List Files: View contents of the storage directory
  - Upload File: Use form-data with key "file" and select a text file
  - Query File: Analyze single file contents with LLM
  - Analyze Directory: Process multiple files with LLM
- Example Queries
  // Single file analysis
  {
    "path": "./storage/code-samples/example.ts",
    "query": "Explain what this TypeScript code does",
    "model": "ollama"
  }

  // Directory analysis
  {
    "path": "./storage",
    "query": "What types of files are in this directory and summarize their contents?",
    "model": "ollama"
  }
- File Upload Guide
  - Use the "File Upload" request
  - Select "form-data" in the Body tab
  - Add key "file" with type "File"
  - Choose a supported text file (see Supported File Types)
  - Maximum file size: 5MB
Manual Testing
- Use the provided test files in /storage
- Test different file types and queries
- Verify model responses and error handling
- Test file size limits and type restrictions
Environment Setup
Make sure to:
- Have the server running (npm run dev)
- Configure environment variables
- Have Ollama running locally (for the Ollama model)
- Set the Together.ai API key (for the Together model)
Future Considerations
- How to handle large files efficiently
- Expanding supported file types
- Optimizing context processing
- Adding streaming support for responses
- Implementing rate limiting and caching
This project demonstrates modern TypeScript/Node.js practices with a focus on modularity, type safety, and error handling while providing a flexible interface for LLM interactions with file-based context.