E2B Code Interpreter
Python & JS/TS SDK for running AI-generated code/code interpreting in your AI app
<p align="center"> <img width="100" src="https://raw.githubusercontent.com/e2b-dev/E2B/refs/heads/main/readme-assets/logo-circle.png" alt="e2b logo"> </p>
<h4 align="center"> <a href="https://pypi.org/project/e2b/"> <img alt="Last 1 month downloads for the Python SDK" loading="lazy" width="200" height="20" decoding="async" data-nimg="1" style="color:transparent;width:auto;height:100%" src="https://img.shields.io/pypi/dm/e2b?label=PyPI%20Downloads"> </a> <a href="https://www.npmjs.com/package/e2b"> <img alt="Last 1 month downloads for the JavaScript SDK" loading="lazy" width="200" height="20" decoding="async" data-nimg="1" style="color:transparent;width:auto;height:100%" src="https://img.shields.io/npm/dm/e2b?label=NPM%20Downloads"> </a> </h4>
<!--- <img width="100%" src="/readme-assets/preview.png" alt="Cover image"> --->
What is E2B?
E2B is open-source infrastructure that lets you run AI-generated code in secure, isolated sandboxes in the cloud. To start and control sandboxes, use our JavaScript SDK or Python SDK.
Run your first Sandbox
1. Install SDK
JavaScript / TypeScript
npm i @e2b/code-interpreter
Python
pip install e2b-code-interpreter
2. Get your E2B API key
E2B_API_KEY=e2b_***
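Both SDKs read the key from the `E2B_API_KEY` environment variable by default. As a minimal sketch (the explicit `api_key` parameter is assumed from the current Python SDK), you can also pass it yourself:

```python
import os
from e2b_code_interpreter import Sandbox

# The SDK picks up E2B_API_KEY from the environment automatically;
# passing api_key explicitly is assumed to be supported by the current SDK.
sandbox = Sandbox(api_key=os.environ["E2B_API_KEY"])
sandbox.kill()  # shut the sandbox down when you are done
```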
3. Execute code with the code interpreter inside a Sandbox
JavaScript / TypeScript
import { Sandbox } from '@e2b/code-interpreter'
const sbx = await Sandbox.create()
await sbx.runCode('x = 1')
const execution = await sbx.runCode('x+=1; x')
console.log(execution.text) // outputs 2
Python
from e2b_code_interpreter import Sandbox
with Sandbox() as sandbox:
sandbox.run_code("x = 1")
execution = sandbox.run_code("x+=1; x")
print(execution.text) # outputs 2
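Beyond `execution.text`, each run returns structured results. A minimal sketch of checking for errors and captured stdout, assuming the `error` and `logs` attributes of the current `e2b-code-interpreter` Python SDK:

```python
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    execution = sandbox.run_code("import math; print(math.sqrt(2))")
    if execution.error:               # an exception raised inside the sandbox
        print(execution.error.name, execution.error.value)
    else:
        print(execution.logs.stdout)  # list of captured stdout lines
```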
4. Check docs
Visit E2B documentation.
5. E2B cookbook
Visit our Cookbook to get inspired by examples with different LLMs and AI frameworks.
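The common pattern in those examples is to generate code with an LLM and then execute it in a sandbox. A hypothetical end-to-end sketch using the OpenAI Python client (the model name and prompt are placeholders, not part of the E2B SDK; any code-generating LLM works):

```python
import os
from openai import OpenAI
from e2b_code_interpreter import Sandbox

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write Python code that prints the first 10 primes. Return only code.",
    }],
)
code = response.choices[0].message.content  # in practice, strip markdown fences if present

with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    print(execution.logs.stdout)  # captured stdout from the sandboxed run
```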