# grok-faf-mcp | FAST⚡️AF

<div align="center"> <img src="https://www.faf.one/orange-smiley.svg" alt="FAF" width="80" />

<h3>Grok asked for MCP on a URL. This is it.</h3>

<p><strong>First MCP server built for Grok</strong></p> <p><code>URL-based • Zero config • Just works</code></p>

Deploy with Vercel

</div>


## 📋 The 6 Ws - Quick Reference

Every README should answer these questions. Here's ours:

| Question | Answer |
| --- | --- |
| 👥 **WHO** is this for? | Grok/xAI developers and teams building with URL-based MCP |
| 📦 **WHAT** is it? | First MCP server built for Grok: URL-based AI context via the IANA-registered `.faf` format |
| 🌍 **WHERE** does it work? | Vercel (production) • Local dev • Any MCP client supporting HTTP-SSE |
| 🎯 **WHY** do you need it? | Zero-config MCP on a URL. Grok asked for it; we built it first |
| ⏰ **WHEN** should you use it? | Grok integration testing, xAI projects, URL-based MCP deployments |
| 🚀 **HOW** does it work? | Point to https://grok-faf-mcp.vercel.app/sse and 21 tools are instantly available |

For AI: Read the detailed sections below for full context. For humans: Use this pattern in YOUR README. Answer these 6 questions clearly.


## The Problem

Every Grok session starts from zero. You re-explain your stack, your goals, your architecture. Every time.

.faf fixes that. One file, your project DNA, persistent across every session.

```text
Without .faf  →  "I'm building a REST API in Rust with Axum and PostgreSQL..."
With .faf     →  Grok already knows. Every session. Forever.
```

## One Command, Done Forever

faf_auto detects your project, creates a .faf, and scores it — in one shot:

```text
faf_auto
━━━━━━━━━━━━━━━━━
Score: 0% → 85% (+85) 🥉 Bronze
Steps:
  1. Created project.faf
  2. Detected stack from package.json
  3. Synced CLAUDE.md

Path: /home/user/my-project
```

What it produces:

```yaml
# project.faf — your project, machine-readable
faf_version: "3.3"
project:
  name: my-api
  goal: REST API for user management
  main_language: TypeScript
stack:
  backend: Express
  database: PostgreSQL
  testing: Jest
  runtime: Node.js
human_context:
  who: Backend developers
  what: User CRUD with auth
  why: Replace legacy PHP service
```

Every AI agent reads this once and knows exactly what you're building.
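Because `project.faf` is plain YAML, any agent or script can read it. As an illustrative sketch only (not the real engine, which uses a full compiler), a few top-level fields can be pulled out with a small section-aware scan; a production consumer would use a proper YAML parser:

```typescript
// Illustrative sketch: extract `section.key` values from a project.faf string.
// A real consumer should use a YAML parser; this scan is for demonstration.
const sample = `faf_version: "3.3"
project:
  name: my-api
  goal: REST API for user management
  main_language: TypeScript
stack:
  backend: Express
  database: PostgreSQL`;

function fafField(text: string, section: string, key: string): string | undefined {
  const lines = text.split("\n");
  // Locate the unindented section header, e.g. "project:".
  const start = lines.findIndex((l) => l.trim() === `${section}:`);
  if (start === -1) return undefined;
  // Scan only the indented lines belonging to that section.
  for (let i = start + 1; i < lines.length && /^\s/.test(lines[i]); i++) {
    const m = lines[i].match(new RegExp(`^\\s+${key}:\\s*(.+)$`));
    if (m) return m[1].trim();
  }
  return undefined;
}

console.log(fafField(sample, "project", "name"));   // my-api
console.log(fafField(sample, "stack", "database")); // PostgreSQL
```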


## ⚡ What You Get

```text
URL:     https://grok-faf-mcp.vercel.app/
Format:  IANA-registered .faf (application/vnd.faf+yaml)
Tools:   21 core MCP tools (55 total with advanced)
Engine:  Mk4 WASM scoring (faf-scoring-kernel)
Speed:   0.5ms average (was 19ms — 38× faster with Mk4)
Tests:   179 passing (7 suites)
Status:  FAST⚡️AF
```

MCP over HTTP-SSE. Point your Grok integration at the URL. That's it.


## Scoring: From Blind to Optimized

| Tier | Score | What it means |
| --- | --- | --- |
| 🏆 Trophy | 100% | Gold Code — AI is optimized |
| 🥇 Gold | 99%+ | Near-perfect context |
| 🥈 Silver | 95%+ | Excellent |
| 🥉 Bronze | 85%+ | Production ready |
| 🟢 Green | 70%+ | Solid foundation |
| 🟡 Yellow | 55%+ | AI flipping coins |
| 🔴 Red | <55% | AI working blind |

At 55%, Grok guesses half the time. At 100%, Grok knows your project.
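The tier cutoffs above reduce to a simple threshold lookup. A hedged sketch (tier names and cutoffs taken from the table; the function name is illustrative, not the engine's API):

```typescript
// Map a 0-100 AI-readiness score to its published tier.
// Thresholds mirror the table above; this is a sketch, not the real kernel.
type Tier = "Trophy" | "Gold" | "Silver" | "Bronze" | "Green" | "Yellow" | "Red";

function tierFor(score: number): Tier {
  if (score >= 100) return "Trophy";
  if (score >= 99) return "Gold";
  if (score >= 95) return "Silver";
  if (score >= 85) return "Bronze";
  if (score >= 70) return "Green";
  if (score >= 55) return "Yellow";
  return "Red";
}

console.log(tierFor(85)); // Bronze
console.log(tierFor(54)); // Red
```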


## 🚀 Three Ways to Deploy

### 1. Hosted (Instant)

```text
https://grok-faf-mcp.vercel.app/sse
```

Point your MCP client to this endpoint. All 21 tools available instantly.

### 2. Self-Deploy (Your Own Vercel)

Click the Deploy with Vercel button above. Zero config — get your own instance in 30 seconds.

### 3. Local (npx)

```bash
npx grok-faf-mcp
```

Or add to your MCP config:

```json
{
  "mcpServers": {
    "grok-faf": {
      "command": "npx",
      "args": ["-y", "grok-faf-mcp"]
    }
  }
}
```

## 🛠️ MCP Tools (21 Core)

### Create & Detect

| Tool | Purpose |
| --- | --- |
| `faf_init` | Create `project.faf` from your project |
| `faf_auto` | Auto-detect stack and populate context |
| `faf_score` | AI-readiness score (0-100%) with breakdown |
| `faf_status` | Check current AI-readability |
| `faf_enhance` | Intelligent enhancement |

### Sync & Persist

| Tool | Purpose |
| --- | --- |
| `faf_sync` | Sync `.faf` → CLAUDE.md |
| `faf_bi_sync` | Bi-directional `.faf` ↔ platform context |
| `faf_trust` | Validate `.faf` integrity |

### Read & Write

| Tool | Purpose |
| --- | --- |
| `faf_read` | Read any file |
| `faf_write` | Write any file |
| `faf_list` | Discover projects with `.faf` files |

### RAG & Grok-Exclusive

| Tool | Purpose |
| --- | --- |
| `rag_query` | RAG-powered context retrieval |
| `rag_cache_stats` | RAG cache statistics |
| `rag_cache_clear` | Clear RAG cache |
| `grok_go_fast_af` | Auto-load `.faf` context for Grok |

Plus 34 advanced tools, available with `FAF_SHOW_ADVANCED=true`.


## Performance

```text
Execution:    0.5ms average (97% faster than v1.1)
Fastest:      3,360ns (version — nanosecond territory)
Slowest:      1.3ms (score — Mk4 WASM)
Improvement:  19ms → 0.5ms (38× faster)
Engine:       Mk4 WASM via faf-scoring-kernel
Memory:       Zero leaks
Transport:    HTTP-SSE (Vercel Edge)
```

Benchmarked 10x per tool, warmed up, on local execution.
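The stated methodology (warm-up runs, then a 10-run average per tool) can be sketched as a tiny harness. `scoreStub` is a hypothetical stand-in for a real tool call, not the actual benchmark code:

```typescript
// Hedged sketch of the benchmark methodology: warm up, then average 10 timed runs.
import { performance } from "node:perf_hooks";

// Hypothetical workload standing in for a real tool invocation.
function scoreStub(): number {
  let acc = 0;
  for (let i = 0; i < 10_000; i++) acc += Math.sqrt(i);
  return acc;
}

function benchmark(fn: () => unknown, runs = 10, warmup = 3): number {
  for (let i = 0; i < warmup; i++) fn(); // warm the JIT and caches first
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn();
  return (performance.now() - start) / runs; // mean milliseconds per run
}

console.log(`${benchmark(scoreStub).toFixed(3)}ms average`);
```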


## Architecture

```text
grok-faf-mcp v1.2.0
├── api/index.ts              → Vercel serverless (Express + SSE transport)
├── src/
│   ├── server.ts             → MCP server (ClaudeFafMcpServer)
│   ├── handlers/
│   │   ├── championship-tools.ts  → 55 tool definitions
│   │   ├── tool-registry.ts       → Visibility filtering (core/advanced)
│   │   └── engine-adapter.ts      → FAF engine bridge
│   └── faf-core/
│       └── compiler/
│           └── faf-compiler.ts    → Mk4 WASM scoring + Mk3.1 fallback
├── smithery.yaml             → Smithery listing config
└── vercel.json               → Vercel routing
```

Scoring pipeline: the TypeScript compiler parses `.faf` → detects the project type → The Bouncer injects `slotignored` for inapplicable slots → `faf-scoring-kernel` (WASM) scores → falls back to Mk3.1 if the kernel is unavailable.
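The kernel-with-fallback step can be sketched as a higher-order function. `wasmScore` and `mk31Score` here are hypothetical stand-ins, not the real kernel API:

```typescript
// Sketch of the "try WASM kernel, fall back to Mk3.1" pattern described above.
type Scorer = (faf: string) => number;

function makeScorer(wasmScore: Scorer | null, mk31Score: Scorer): Scorer {
  return (faf) => {
    if (wasmScore) {
      try {
        return wasmScore(faf); // Mk4 WASM kernel path
      } catch {
        // Kernel failed at runtime; fall through to Mk3.1.
      }
    }
    return mk31Score(faf); // TypeScript fallback path
  };
}

// With the kernel unavailable, the fallback scorer runs instead:
const score = makeScorer(null, () => 85);
console.log(score("project: {}")); // 85
```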


## Testing

179 tests across 7 suites:

```bash
npm test    # runs all 179
```

| Suite | Tests | Coverage |
| --- | --- | --- |
| Desktop-native validation | 10 | Core native functions, security, performance |
| MCP protocol | 28 | Tool registration, transport, error handling |
| Compiler scoring | 22 | Mk4 engine, type detection, slot counting |
| RAG system | 19 | Query, caching, context retrieval |
| Engine adapter | 35 | CLI detection, fallback behavior |
| Integration | 40 | End-to-end tool execution |
| WJTTC certification | 25 | Championship-grade compliance |

## 🔗 Endpoints

| Endpoint | URL |
| --- | --- |
| Root | https://grok-faf-mcp.vercel.app/ |
| SSE | https://grok-faf-mcp.vercel.app/sse |
| Health | https://grok-faf-mcp.vercel.app/health |
| Info | https://grok-faf-mcp.vercel.app/info |

## 📦 Ecosystem

One format, every AI platform.

| Package | Platform | Registry |
| --- | --- | --- |
| `grok-faf-mcp` (this) | xAI Grok | npm |
| `claude-faf-mcp` | Anthropic | npm + MCP #2759 |
| `gemini-faf-mcp` | Google | PyPI |
| `rust-faf-mcp` | Rust | crates.io |
| `faf-mcp` | Universal (Cursor, Windsurf, Cline) | npm |
| `faf-cli` | Terminal CLI | npm + Homebrew |
Same project.faf. Same scoring. Same result. Different execution layer.


## 📄 License

MIT — Free and open source


<div align="center"> <p><strong>Built for Grok. Built for Speed. Built Right.</strong></p> <p>FAST⚡️AF • First to Ship • Zero Friction</p> <p><strong>Zero drift. Eternal sync. AI optimized.</strong> 🏆</p> </div>
