Note: Linux is required for local development (bare metal or a VM with nested virtualization). This is a work in progress; not everything will function as expected.
| Requirement | Minimum | Notes |
|---|---|---|
| OS | Linux (bare metal or VM with nested virtualization) | KVM support required for Firecracker |
| CPU | 4+ cores recommended | Firecracker spawns microVMs |
| RAM | 8 GB minimum, 16 GB recommended | Huge pages alone reserve ~4 GB |
| Go | - | See `go.work` for the exact version |
| Docker / Docker Compose | Docker Engine 24+ with Compose v2 | `docker compose version` to verify |
| Node.js / npm | Required for building the base template | `node --version` |
| gsutil | Google Cloud SDK CLI | Used to download prebuilt kernels and Firecracker binaries |
KVM check — the orchestrator runs Firecracker microVMs, which require `/dev/kvm`:

```bash
ls -l /dev/kvm
# If missing: sudo modprobe kvm_intel (or kvm_amd)
# For cloud VMs, ensure nested virtualization is enabled.
```
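The checks above can be scripted. This is a minimal sketch (a hypothetical helper, not part of the repo) that tests both `/dev/kvm` accessibility and the CPU virtualization flags reported in `/proc/cpuinfo`:

```python
import os

def has_virtualization_flags(cpuinfo_text: str) -> bool:
    """Return True if cpuinfo advertises Intel VT-x (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

def kvm_available() -> bool:
    """KVM is usable when /dev/kvm exists and is read/write accessible."""
    return os.access("/dev/kvm", os.R_OK | os.W_OK)

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("virtualization flags:", has_virtualization_flags(f.read()))
    print("/dev/kvm accessible:", kvm_available())
```

If the flags are present but `/dev/kvm` is missing, loading the `kvm_intel`/`kvm_amd` module (as above) is usually enough.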
Load the NBD (Network Block Device) kernel module:

```bash
sudo modprobe nbd nbds_max=64
```

Verify: `lsmod | grep nbd` should show the module loaded.
Enable huge pages:

```bash
sudo sysctl -w vm.nr_hugepages=2048
```

Verify: `grep HugePages_Total /proc/meminfo` should show `2048`.
To persist these across reboots, add `nbd` to `/etc/modules-load.d/nbd.conf`, `options nbd nbds_max=64` to `/etc/modprobe.d/nbd.conf` (modules-load.d files take module names only; parameters belong in modprobe.d), and `vm.nr_hugepages=2048` to `/etc/sysctl.d/99-hugepages.conf`.
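As a sanity check on the numbers: the default huge page size on x86_64 is 2 MiB, so 2048 pages reserve 4 GiB — the "~4 GB" figure in the requirements table. A quick sketch of the arithmetic:

```python
def hugepages_reserved_bytes(nr_hugepages: int,
                             page_size_bytes: int = 2 * 1024 * 1024) -> int:
    """Memory reserved by huge pages (assumes the default 2 MiB page size)."""
    return nr_hugepages * page_size_bytes

gib = hugepages_reserved_bytes(2048) / 1024**3
print(f"vm.nr_hugepages=2048 reserves {gib:.0f} GiB")  # 4 GiB
```

This memory is taken out of the general pool at allocation time, which is why the RAM requirement is well above 4 GB.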
Requires `gsutil` from the Google Cloud SDK.

```bash
make download-public-kernels       # download Linux kernels
make download-public-firecrackers  # download Firecracker versions
```

Verify:

```bash
ls packages/fc-kernels/          # should contain kernel binaries
ls packages/fc-versions/builds/  # should contain firecracker binaries (executable)
```

```bash
make local-infra
```

This runs the following Docker Compose services (defined in `packages/local-dev/docker-compose.yaml`):
| Service | Purpose | Port(s) |
|---|---|---|
| postgres | Primary database (users, teams, templates) | 5432 |
| clickhouse | Analytics and sandbox metrics | 8123 (HTTP), 9000 (native) |
| redis | Caching, pub/sub, orchestrator coordination | 6379 |
These services provide logging, tracing, and metrics. The application runs without them, but you will see connection errors in the logs. For a minimal setup you can comment them out in `packages/local-dev/docker-compose.yaml`.
| Service | Purpose | Port(s) |
|---|---|---|
| grafana | Dashboards and log/trace viewer | 53000 |
| loki | Log aggregation | 3100 |
| tempo | Distributed tracing | 3200 |
| mimir | Metrics storage (Prometheus-compatible) | — |
| memcached | Cache for Tempo | 11211 |
| otel-collector | OpenTelemetry collector | 4317 (gRPC), 4318 (HTTP) |
| vector | Log forwarder (ships to Loki) | 30006 |
Verify: `docker compose -f packages/local-dev/docker-compose.yaml ps` — all services should show running (except `tempo-init`, which exits after setup).
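If you script the verification, a small TCP probe (a hypothetical helper, not part of the repo) can confirm each core service port is accepting connections before moving on:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False

if __name__ == "__main__":
    # Ports from the core services table above.
    for name, port in [("postgres", 5432), ("clickhouse", 8123), ("redis", 6379)]:
        ok = wait_for_port("127.0.0.1", port, timeout=2.0)
        print(f"{name:10s} port {port}: {'up' if ok else 'down'}")
```

A TCP connect only proves the port is open, not that the service is ready to serve queries, so migrations may still need a short extra wait.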
- Initialize the Postgres database:

  ```bash
  make -C packages/db migrate-local
  ```

- Initialize the ClickHouse database:

  ```bash
  make -C packages/clickhouse migrate-local
  ```

- Build `envd` (embedded in sandbox templates):

  ```bash
  make -C packages/envd build
  ```

  Verify: `ls packages/envd/bin/envd` — the binary should exist.

- Seed the database with a local dev user, team, and API tokens:

  ```bash
  make -C packages/local-dev seed-database
  ```

  This creates a user (`user@e2b-dev.local`), a team (`local-dev team`), and the API tokens shown in Client configuration below.
These commands launch each service in the foreground. You need multiple terminal windows (or use tmux/screen).
- API server (terminal 1):

  ```bash
  make -C packages/api run-local
  ```

  Verify: `curl -s http://localhost:3000/health` returns a response.

- Orchestrator + template manager (terminal 2):

  ```bash
  make -C packages/orchestrator build-debug && sudo make -C packages/orchestrator run-local
  ```

  `sudo` is required because the orchestrator manages Firecracker microVMs, which need root access for KVM, networking (TAP devices), and cgroup management.

  Verify: `curl -s http://localhost:5008/health` returns a response.

- Client proxy (terminal 3):

  ```bash
  make -C packages/client-proxy run-local
  ```

  Verify: `curl -s http://localhost:3003/health` returns a response.
Once all services are running:
```bash
make -C packages/shared/scripts local-build-base-template
```

This instructs the orchestrator to create the base sandbox template, which is required before you can create sandboxes.
After starting all services and building the base template, verify end-to-end operation:
```bash
# Create a sandbox using the local API
curl -s -X POST http://localhost:3000/sandboxes \
  -H "X-API-Key: e2b_53ae1fed82754c17ad8077fbc8bcdd90" \
  -H "Content-Type: application/json" \
  -d '{"templateID": "base"}'
```

A successful response returns a JSON object with a `sandboxID`. You can then interact with the sandbox through the client proxy at `localhost:3002`.
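In a script, the response can be checked with a few lines of Python. This sketch assumes only the `sandboxID` field mentioned above; other response fields are not covered here:

```python
import json

def extract_sandbox_id(response_body: str) -> str:
    """Pull sandboxID out of the create-sandbox JSON response; raise if absent."""
    data = json.loads(response_body)
    if "sandboxID" not in data:
        raise ValueError(f"no sandboxID in response: {data}")
    return data["sandboxID"]

# Example with a made-up response body:
print(extract_sandbox_id('{"sandboxID": "abc123"}'))  # abc123
```

Failing loudly on a missing field catches the common case where the API returns an error object instead of a sandbox.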
Using the Python or JS SDK:

```python
import e2b

# Configure for local development
sandbox = e2b.Sandbox(
    api_key="e2b_53ae1fed82754c17ad8077fbc8bcdd90",
    api_url="http://localhost:3000",
    sandbox_url="http://localhost:3002",
    template="base",
)
print(sandbox.id)  # sandbox is running locally
```

Alternatively, configure via environment variables:

```
E2B_API_KEY=e2b_53ae1fed82754c17ad8077fbc8bcdd90
E2B_ACCESS_TOKEN=sk_e2b_89215020937a4c989cde33d7bc647715
E2B_API_URL=http://localhost:3000
E2B_SANDBOX_URL=http://localhost:3002
```

These tokens are generated by the seed-database step and are hardcoded for local development.
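If other tooling needs these values, a minimal KEY=VALUE parser is enough for this simple format (a sketch — it ignores quoting, `export` prefixes, and other features real dotenv loaders handle):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

env = parse_env("""\
# local dev credentials
E2B_API_KEY=e2b_53ae1fed82754c17ad8077fbc8bcdd90
E2B_API_URL=http://localhost:3000
""")
print(env["E2B_API_URL"])  # http://localhost:3000
```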
| Service | URL / Connection string |
|---|---|
| Grafana | http://localhost:53000 |
| Postgres | postgres:postgres@127.0.0.1:5432 |
| ClickHouse (HTTP) | http://localhost:8123 |
| ClickHouse (native) | clickhouse:clickhouse@localhost:9000 |
| Redis | localhost:6379 |
| OTel Collector (gRPC) | localhost:4317 |
| OTel Collector (HTTP) | localhost:4318 |
| Vector | localhost:30006 |
| e2b API | http://localhost:3000 |
| e2b Client Proxy | http://localhost:3002 |
| e2b Orchestrator | http://localhost:5008 |
Each service reads its configuration from a `.env.local` file in its package directory. These are checked into the repo with working defaults for local development — you should not need to modify them.
| File | Purpose |
|---|---|
| `packages/api/.env.local` | API server config: database connections, Redis, OTEL, volume tokens |
| `packages/orchestrator/.env.local` | Orchestrator config: storage paths, firecracker/kernel dirs, Redis |
| `packages/client-proxy/.env.local` | Client proxy config: edge URL, service discovery, Redis |
| `packages/shared/scripts/.env.local` | Build scripts: API keys and URL for template builds |
| `tests/integration/.env.local` | Integration test config: API URL, test credentials |
The orchestrator requires KVM to run Firecracker microVMs.
- Bare metal: load the module with `sudo modprobe kvm_intel` (Intel) or `sudo modprobe kvm_amd` (AMD).
- Cloud VM: enable nested virtualization. On GCP: stop the VM, set `--enable-nested-virtualization`, restart. On AWS: use a `.metal` instance type.
- WSL2: KVM is not supported. Use a Linux VM with nested virtualization instead.
If `make local-infra` fails with "address already in use", check which service is conflicting:

```bash
sudo ss -tlnp | grep <port>
```

Common conflicts: PostgreSQL on 5432, Redis on 6379, or another Grafana on 3000 (the local Grafana uses 53000 to avoid this).
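A quick bind test from Python (a hypothetical helper, not part of the repo) reports the same conflicts without parsing `ss` output:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if binding host:port fails, i.e. something is already listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

# Ports the local stack expects to be free (from the tables above).
for port in (5432, 6379, 8123, 53000):
    if port_in_use(port):
        print(f"port {port} is already in use")
```

Run this before `make local-infra` to find conflicts up front instead of from Compose error output.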
The orchestrator manages Firecracker microVMs, which require root privileges for:
- `/dev/kvm` access
- Creating TAP network interfaces
- Managing cgroups for microVM resource limits
- Mounting NBD devices

This is expected. Always run the orchestrator with `sudo`.
Install the Google Cloud SDK: https://cloud.google.com/sdk/docs/install
You do not need to authenticate — the public builds bucket allows anonymous access.
Ensure ClickHouse is fully started before running migrations. After `make local-infra`, wait a few seconds for ClickHouse to initialize, then retry:

```bash
make -C packages/clickhouse migrate-local
```

If you see memory allocation errors from Firecracker, verify huge pages are allocated:

```bash
grep HugePages /proc/meminfo
```

If `HugePages_Free` is 0, the system may not have enough free memory. Stop other applications or increase RAM.
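To automate this check, here is a sketch that parses the HugePages counters out of `/proc/meminfo` text (counters are page counts, not bytes):

```python
def hugepages_stats(meminfo_text: str) -> dict[str, int]:
    """Extract HugePages_* counters from /proc/meminfo content."""
    stats = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            key, _, value = line.partition(":")
            stats[key.strip()] = int(value.split()[0])
    return stats

sample = """HugePages_Total:    2048
HugePages_Free:     2048
HugePages_Rsvd:        0
Hugepagesize:       2048 kB
"""
stats = hugepages_stats(sample)
if stats.get("HugePages_Free", 0) == 0:
    print("warning: no free huge pages")
```

Reading the real file is just `hugepages_stats(open("/proc/meminfo").read())`; a healthy setup shows `HugePages_Total` of 2048 with a nonzero `HugePages_Free`.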