Lightspeed AI-native sandboxes.
Stateful compute for durable agentic loops + isolated tool/code execution. Pause mid-run, resume hours later in the exact state you left.
Free tier included. No credit card required.
import { SandboxClient } from "tensorlake";

async function main() {
  const client = new SandboxClient();
  const sbx = await client.createAndConnect({
    image: "tensorlake/ubuntu-minimal",
    cpus: 1,
    memoryMb: 1024,
  });
  const result = await sbx.run("/bin/sh", { args: ["-c", "npm install && npm run build"] });
  console.log(result.stdout);
}

main();
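The pause/resume behavior described above can be sketched as follows. This is a minimal illustration only: `pause()`, `resume()`, and the in-memory store are stand-in stubs written for this sketch, not the published Tensorlake API.

```typescript
// Stand-in stubs so the sketch runs anywhere; pause()/resume() and the
// in-memory store below are assumptions, NOT the published Tensorlake API.
const store = new Map<string, Map<string, string>>();

class FakeSandbox {
  constructor(public id: string, public files: Map<string, string>) {}

  static async create(): Promise<FakeSandbox> {
    return new FakeSandbox("sbx-123", new Map());
  }

  async writeFile(path: string, data: string): Promise<void> {
    this.files.set(path, data);
  }

  // Persist the full machine state and return a handle you keep yourself.
  async pause(): Promise<string> {
    store.set(this.id, new Map(this.files));
    return this.id;
  }

  // Hours later: rehydrate the sandbox exactly where it left off.
  static async resume(id: string): Promise<FakeSandbox> {
    return new FakeSandbox(id, new Map(store.get(id) ?? []));
  }
}

async function main() {
  const sbx = await FakeSandbox.create();
  await sbx.writeFile("/tmp/progress.json", '{"step": 42}');
  const handle = await sbx.pause(); // agent loop yields here
  const later = await FakeSandbox.resume(handle); // ...hours later
  console.log(later.files.get("/tmp/progress.json")); // prints {"step": 42}
}

main();
```

The point of the pattern: the handle is durable, so an agent loop can stop paying for compute between turns and pick up mid-task with its filesystem and working state intact.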
Other sandboxes force a tradeoff. We refuse to make one.
Sandboxes for running agent harnesses.
python · typescript
Run the agent itself inside an isolated, stateful computer.
The pattern for browsing agents, research harnesses, and long-running sessions that need files, bash, packages, and working state — instead of running on the app server.
Give the agent its own filesystem, shell, packages, and processes instead of sharing the app server runtime.
Sandbox sleeps on inactivity, wakes instantly when invoked.
Near-SSD file-system speed inside a VM: 2× faster than Vercel, 5× faster than E2B in our SQLite benchmark.
Compile code, run databases, process 5GB files. Bring any Linux stack.
// Run the Claude Code agent inside an isolated sandbox
import { Sandbox } from "tensorlake";

const sbx = await Sandbox.create({ image: "ubuntu-minimal" });
await sbx.exec("npm i -g @anthropic-ai/claude-code");
await sbx.exec("claude -p 'Refactor src/**/*.ts for stricter types'");
The fastest sandbox file system.
fio · sqlite · p50
SQLite benchmark — 2 vCPU, 4 GB RAM, 100k inserts
View benchmark on GitHub →
In our published SQLite benchmark across Tensorlake, Vercel, E2B, Daytona, and Modal, Tensorlake was the fastest across the default, fsync, and large-dataset runs. Benchmark setup: 2 vCPU / ~4 GB sandboxes, 3 runs.
Isolated execution environments for running tools.
ephemeral · per-call
Create isolated sandboxes only when a tool needs risky or heavy execution.
The pattern for code interpreters, browser helpers, and tool-calling agents that should not run untrusted code inside the harness itself. Keep the harness outside; spin up sandboxes per call.
Execute code, browsers, or system tasks in a separate sandbox so the main agent never shares its runtime.
Predictable throughput means fresh sessions spin up immediately, even when a thousand others are mid-task.
Every session gets its own sandbox so untrusted code can't touch system integrity or leak data across sessions.
Hardware virtualization boundary per call. LLMs can't escape the sandbox to touch your data.
// Claude agent with a Tensorlake sandbox as its code-exec tool
import Anthropic from "@anthropic-ai/sdk";
import { Sandbox } from "tensorlake";

const claude = new Anthropic();
const sbx = await Sandbox.ephemeral({ image: "ubuntu-minimal" });

const msg = await claude.messages.create({
  model: "claude-sonnet-4-5",
  max_tokens: 1024,
  tools: [{
    name: "run_in_sandbox",
    description: "Run code in an isolated Tensorlake sandbox.",
    input_schema: {
      type: "object",
      properties: { code: { type: "string" } },
      required: ["code"],
    },
  }],
  messages: [{ role: "user", content: "Plot fib(20) as a line chart." }],
});

// Dispatch Claude's tool call into the sandbox
const call = msg.content.find((b) => b.type === "tool_use");
if (call) {
  const out = await sbx.exec(`python -c ${JSON.stringify(call.input.code)}`);
  console.log(out.stdout);
}
Infrastructure for RL rollouts and evals.
10k+ envs / fan-out
Prepare once, clone many
Snapshot a warmed environment — deps, weights, data — and clone it thousands of times in parallel. Pay once for setup.
Known starting state
Files, packages, processes, and seeds are reproducible across every rollout. No flaky drift across workers.
Scale rollouts & evals
Fan out to 10k+ concurrent environments. Checkpoint at any step, resume at any step, write to object storage.
Dynamic resource allocation
Choose CPU, memory, GPU and image per environment. Rightsize rollouts to minutes of wall-clock.
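The prepare-once/clone-many fan-out above can be sketched like this. The snapshot and rollout functions are stubbed stand-ins written so the sketch runs as-is; the names and shapes are assumptions, not the real SDK.

```typescript
// Prepare-once/clone-many, with stubbed stand-ins for a snapshot API so the
// fan-out pattern is runnable; the names are assumptions, not the real SDK.
type Snapshot = { id: string; state: string };

async function prepareWarmEnv(): Promise<Snapshot> {
  // In practice: boot a sandbox, install deps/weights/data, then snapshot it.
  return { id: "snap-1", state: "deps+weights+data" };
}

async function rolloutFromSnapshot(snap: Snapshot, seed: number): Promise<number> {
  // Each rollout clones the identical starting state, then diverges by seed.
  return seed % 7; // stand-in for the rollout's reward/score
}

async function main() {
  const snap = await prepareWarmEnv(); // pay the setup cost once
  const seeds = Array.from({ length: 1000 }, (_, i) => i);
  const scores = await Promise.all(
    seeds.map((s) => rolloutFromSnapshot(snap, s)) // fan out in parallel
  );
  console.log(scores.length); // 1000 rollouts from one warmed snapshot
}

main();
```

Because every clone starts from the same snapshot, the only cross-rollout variation is the seed you inject, which is what makes results reproducible across workers.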
Sandbox-native orchestration for agents.
endpoints · durability
Once sandbox usage turns into a real application, Orchestrate coordinates it.
The layer that adds application endpoints, durability, fan-out, retries, and application-level observability on top of sandbox execution.
Expose sandbox-backed workflows as callable applications instead of stitching together raw VM APIs.
Dormant sandboxes resume on incoming traffic, and fresh sessions spin up immediately even when a thousand others are mid-task. Every session gets its own sandbox so nothing leaks across runs.
Durable primitives for long-running agentic flows. Application observability baked in.
# PDF → Markdown with Claude
from anthropic import Anthropic
from tensorlake.applications import application, function

claude = Anthropic()

@application()
@function()
def to_markdown(pdf_url: str) -> str:
    msg = claude.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=8192,
        messages=[{"role": "user", "content": [
            {"type": "document", "source": {"type": "url", "url": pdf_url}},
            {"type": "text", "text": "Convert to clean Markdown."},
        ]}],
    )
    return msg.content[0].text
Used by engineering teams shipping agents in production.
n = 14 interviews
Tensorlake let us ship faster and stay reliable from day one. Complex stateful AI workloads that used to require serious infra engineering are now just long-running functions. As we scale, that means we can stay lean — building product, not managing infrastructure.
At SIXT, we're building AI-powered experiences for millions of customers while managing the complexity of enterprise-scale data. Tensorlake gives us the foundation we need — reliable document ingestion that runs securely in our VPC to power our generative AI initiatives.
Tensorlake enabled us to avoid building and operating an in-house OCR pipeline by providing a robust, scalable OCR and document ingestion layer with excellent accuracy and feature coverage.
With Tensorlake, we've been able to handle complex document parsing and data formats that many other providers don't support natively, at a throughput that significantly improves our application's UX. The team's responsiveness stands out.
Run it in our cloud — or yours.
SOC 2 · HIPAA
Bring Tensorlake into your cloud.
Run sandboxes and applications inside your own AWS / GCP / Azure account when you need lower egress, stricter network boundaries, dedicated capacity, or more predictable performance.
Security built for agentic workflows.
LLM-generated code runs in isolated VMs, not shared processes. Full audit trails, per-project data boundaries, and compliance for regulated workloads.