| title | Compute private beta |
|---|---|
| description | Early access to the new microVM-based compute runtime. |
| sidebarTitle | Compute private beta |
| hidden | true |
MicroVM compute is a new runtime for your tasks, designed for fast cold starts via boot snapshots. For most tasks the migration is transparent - the same task code runs unchanged.
You can't opt in yourself. Once we've enabled the private beta for your org, the us-east-1-next region becomes available on the Regions page in the dashboard. If you'd like access, ping us on Slack.
MicroVM compute is exposed as a region: us-east-1-next. The region is tagged with a microVM badge on the Regions page.
The `region` option is part of `TriggerOptions`, accepted by most triggering functions.
```ts
import { yourTask } from "./trigger/your-task";

await yourTask.trigger(payload, { region: "us-east-1-next" });
```

This is the safest way to opt in during the beta - you control exactly which runs land on microVM compute.
To gate it on an environment variable so prod isn't affected, define one constant and reuse it at every call site:
```ts
const defaultRegion = process.env.USE_COMPUTE_BETA === "1" ? "us-east-1-next" : undefined;

await yourTask.trigger(payload, { region: defaultRegion });
```

Set `USE_COMPUTE_BETA=1` in the staging environment of the app that calls `trigger()` (typically Vercel, or wherever you deploy your app).
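If several call sites need the same gate, the ternary above can be wrapped in a small helper so the flag is checked in one place. A minimal sketch; `resolveRegion` is a hypothetical name, not part of the SDK:

```ts
// Hypothetical helper (not part of the SDK): resolve the region for a
// trigger call from the environment. Returns undefined when the flag is
// off, so the run falls back to the project's default region.
function resolveRegion(env: Record<string, string | undefined>): string | undefined {
  return env.USE_COMPUTE_BETA === "1" ? "us-east-1-next" : undefined;
}
```

Then every call site becomes `await yourTask.trigger(payload, { region: resolveRegion(process.env) })`.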
Open the Regions page in the dashboard and set us-east-1-next as the project-wide default. This applies to all environments in the project, so only do this on a project that serves staging traffic.
A few things to be aware of during the beta:
- `small-1x` is the default and what we've optimized for. Boot snapshots are precreated for `small-1x` only; other sizes generate the snapshot lazily on the first run after a deploy.
- `small-2x`, `medium-1x`, and `medium-2x` work, but the first run after each deploy is slower while the boot snapshot is generated. Subsequent runs use it.
- `large-1x` and `large-2x` aren't supported yet. Stick to `small-*` or `medium-*` for now.
- Avoid `micro` during the beta. Cold starts on `micro` are noticeably slower than on other sizes.
- All sizes are capped at 1GB of disk during the beta. The machine size table lists 10GB as the target, but every preset is currently provisioned with 1GB regardless.
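If you'd rather fail fast in your own code than discover an unsupported preset at run time, the limits above can be encoded in a small check. A minimal sketch under the beta limits described here; `isBetaPreset` is a hypothetical helper, not an SDK function:

```ts
// Machine presets usable on us-east-1-next during the beta.
// micro works but has slow cold starts; large-* isn't supported yet.
const BETA_PRESETS = ["micro", "small-1x", "small-2x", "medium-1x", "medium-2x"] as const;
type BetaPreset = (typeof BETA_PRESETS)[number];

function isBetaPreset(preset: string): preset is BetaPreset {
  return (BETA_PRESETS as readonly string[]).includes(preset);
}
```

Adjust the list as we lift restrictions during the beta.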
We'll be shipping CLI and SDK changes during the beta to make cold start times more consistent across machine sizes, lift the disk cap, and unlock the larger presets. Compute-specific prereleases will be announced on this page as we go, and we'll also reach out on Slack.
Send anything weird (errors, slow runs, restore failures, anything that surprises you) over Slack with the run ID. The more reproductions we get during the beta, the faster we harden it for public beta.