What is TensorLake?
TensorLake is specialized compute infrastructure for AI agents. It provides stateful, fast-starting sandboxes for deploying agents and creating RL environments:
- MicroVM Isolation: Firecracker VMs with sub-200 millisecond startup time
- Stateful Suspend and Resume: Sandboxes suspend automatically when finished and resume for debugging or task reuse
- Clone: Running sandboxes can be cloned across the cluster to replicate environments after setup
Key Integration Features
1. Drop-in Scalability
Scale from 1 to 1,000 concurrent agents instantly. Switching to TensorLake in Harbor requires only a CLI flag change:
```
harbor run --task-name [my-benchmark] --dataset [my-dataset] --env tensorlake
```
2. MicroVM Security
TensorLake uses MicroVMs to ensure complete isolation of agent-executed code from host infrastructure. This is critical when evaluating agents on untrusted code or complex benchmarks where potentially dangerous actions might be valid test cases.
3. Resource Control & GPU Support
The integration supports fine-grained resource control directly from Harbor config:
- Compute: Configurable vCPUs and RAM
- Storage: Ephemeral disk sizing
- GPUs: Native support for GPU-accelerated workloads, essential for agents performing local inference or data science tasks
4. State Management with Snapshots
Harbor leverages TensorLake's snapshot capabilities. Evaluations can start from pre-warmed states, reducing setup time for complex environments requiring heavy dependency installation.
TensorLake vs. Other Environments
Why choose TensorLake?
- vs. Daytona: While Daytona excels at persistent developer environments, TensorLake is optimized for the high-churn, ephemeral nature of agent loops where environments are created and destroyed rapidly
- vs. E2B: Both offer excellent MicroVM sandboxing. TensorLake is distinct in broader ecosystem integration (Indexify) for extraction and workflow orchestration, making it strong for agents within larger data processing pipelines
- vs. Modal: Modal excels at serverless GPU compute and batch ML jobs. TensorLake is optimized for stateful, long-running agent loops with native suspend/resume, live migration, and cloning that Modal doesn't support
Comparison Table
| Feature | E2B | Daytona | Modal | TensorLake |
|---------|-----|---------|-------|------------|
| Primary Use Case | Code Execution | Dev Environments | Serverless Compute | Agent Infrastructure |
| Cold Start Time | ~2s | ~150ms | ~500ms | <200ms |
| Filesystem | 1x Baseline | ~3.3x Baseline | 2x Baseline | 5x Baseline |
| Auto Suspend/Resume | Yes | No | No | Yes |
| Clone Sandboxes | No | No | No | Yes |
| Point-in-Time Snapshots | No | Filesystem only | Alpha (7d TTL) | Yes |
| Stateful Execution | Partial | Partial | Partial | Native |
| Live Migration | No | No | No | Automatic |
| GPU Support | No | No | Yes | Yes |
| Scale Limit | Hundreds | Not Published | Thousands | Millions |
| Bring Your Own Cloud | No | No | No | Yes |
| Persistence | Ephemeral | Persistent Workspaces | Ephemeral | Snapshots & Ephemeral |
Getting Started
1. Install the SDK
```
pip install tensorlake
```
2. Set your API Key
```
export TENSORLAKE_API_KEY="tl_..."
```
3. Run your first task
```
harbor run --env tensorlake --task-name adaptive-rejection-sampler --dataset terminal-bench@2.0 --agent claude-code --model anthropic/claude-sonnet-4-6
```
Debugging
Access TensorLake's native debugging tools through Harbor:
```
harbor env attach <session_id>
```
This drops you directly into the running sandbox shell to observe agent behavior.