Dynamic Workflows: Custom Durable Execution for Every Tenant

Cloudflare's platform has evolved from a direct developer tool into a multi-tenant ecosystem where AI agents, SaaS platforms, and CI/CD pipelines need to run custom code per tenant. After introducing Dynamic Workers for compute, Durable Object Facets for storage, and Artifacts for version control, the next frontier is durable execution that adapts to each tenant's unique logic. This Q&A explores Dynamic Workflows, the new primitive that bridges the gap between pre-deployed workflows and dynamic, tenant-specific orchestration.

1. What is Dynamic Workflows?

Dynamic Workflows is Cloudflare's extension of its durable execution engine, Workflows, to support tenant-specific workflow code at runtime. Unlike traditional Workflows where the step function is compiled as part of your deployment, Dynamic Workflows lets you hand the engine a piece of code that can be different for every tenant, agent, or session. That code defines the orchestration logic — each step survives failures, can sleep for hours, wait for external events, and resume exactly where it left off. The system spins up an isolated Worker for each workflow instance, running the tenant's custom TypeScript in a sandboxed environment. This means platforms like AI app generators or CI/CD services can now offer durable, long-running pipelines without pre-declaring every possible workflow class.
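To make this concrete, here is a minimal sketch of what a tenant-supplied run function and a toy executor might look like. The `step.do`/`step.sleep` shape mirrors Cloudflare's Workflows step API, but the executor, the step names, and the millisecond-based sleep are illustrative stand-ins, not the real engine.

```typescript
// Illustrative shape of a tenant-supplied workflow function. In Dynamic
// Workflows this code arrives at runtime as data, not as deployed code.
type Step = {
  do<T>(name: string, fn: () => Promise<T>): Promise<T>;
  sleep(name: string, durationMs: number): Promise<void>; // simplification: the real engine can suspend for hours
};

// A tenant's run function: every `step.do` result is durable, so a crash
// between "charge" and return resumes without charging twice.
const tenantRun = async (step: Step, input: { orderId: string }) => {
  const order = await step.do("fetch-order", async () => ({ id: input.orderId, total: 42 }));
  await step.sleep("settle", 10);
  return step.do("charge", async () => order.total);
};

// Minimal in-memory executor: memoizes step results so a replay skips
// completed steps instead of re-running them.
function makeStep(cache: Map<string, unknown>): Step {
  return {
    async do<T>(name: string, fn: () => Promise<T>): Promise<T> {
      if (cache.has(name)) return cache.get(name) as T;
      const result = await fn();
      cache.set(name, result);
      return result;
    },
    sleep: (_name, ms) => new Promise<void>((resolve) => setTimeout(() => resolve(), ms)),
  };
}
```

Running `tenantRun` twice with the same cache demonstrates the durability contract: the second run returns the cached results without re-executing any step body.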

Dynamic Workflows: Custom Durable Execution for Every Tenant
Source: blog.cloudflare.com

2. How does Dynamic Workflows bridge durable execution and dynamic deployment?

Durable execution ensures that workflows survive crashes, scale across isolates, and maintain state across timeouts. Dynamic deployment allows code to be injected at runtime without a full deploy. Dynamic Workflows combines both: the engine manages the lifecycle of each workflow instance, including state persistence and retries, while the workflow code is loaded dynamically per tenant. When a tenant triggers a workflow, the platform compiles and caches their unique step function, attaches it to a Worker runtime in milliseconds, and executes it with full durability. The result is a system where you don't need to know all workflows upfront — they emerge from tenant behavior. This solves a core limitation of traditional Workflows, which assumed one fixed class per deployment.
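The compile-and-cache flow described above can be sketched in a few lines. The `getRunFn` helper, the `Map` cache keyed by tenant, and the use of `new Function` for compilation are all illustrative assumptions; a real engine would compile inside a sandboxed isolate rather than evaluating tenant code in-process.

```typescript
// Sketch of per-tenant compile-and-cache. Names are illustrative; do not
// use `new Function` on untrusted code outside a sandbox.
type RunFn = (input: unknown) => Promise<unknown>;

const compiled = new Map<string, RunFn>();

function getRunFn(tenantId: string, source: string): RunFn {
  let fn = compiled.get(tenantId);
  if (!fn) {
    // First trigger for this tenant: compile their unique step function
    // once, then serve all later triggers from the cache.
    fn = new Function("input", `return (async () => { ${source} })();`) as unknown as RunFn;
    compiled.set(tenantId, fn);
  }
  return fn;
}
```

The cache is what makes "attach in milliseconds" plausible: compilation cost is paid on the first trigger per tenant, and subsequent instances reuse the compiled function.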

3. What problem does Dynamic Workflows solve for multi-tenant platforms?

Multi-tenant platforms — like AI-powered app builders, CI/CD services, or agent SDKs — need each tenant to define their own business logic. With standard Workflows, the platform owner had to pre-bundle every possible workflow class. That's impossible when tenants generate code at runtime (e.g., AI writes TypeScript per request) or when each tenant needs a unique pipeline (e.g., a CI/CD repo defines its own steps). Dynamic Workflows lets the platform accept arbitrary workflow code from tenants, execute it with guaranteed durability, and still manage capacity and isolation. The platform sits as a supervisor, while each tenant gets their own sandboxed, durable execution environment. This enables truly customizable SaaS without sacrificing reliability.

4. How does Dynamic Workflows work under the hood?

When a tenant submits a workflow definition (a run function), the Dynamic Workflows engine validates, compiles, and caches it. On the first run, a Worker is dynamically created on the same machine as the caller, in single-digit milliseconds. The engine passes the tenant's code into the Worker's runtime, which then executes steps sequentially. Each step yields state to a Durable Object for persistence. If the isolate crashes, the engine restarts the Worker and replays from the last saved step. The system supports up to 50,000 concurrent instances per account (in Workflows V2) and 300 new instances per second. Dynamic Workflows also integrates with Durable Object Facets for per-tenant SQLite storage and Artifacts for version-controlled filesystems, giving each workflow a full stack of dynamic primitives.
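The replay-from-last-saved-step behavior can be modeled in plain TypeScript. Here a plain object stands in for the Durable Object, and `runWithReplay` is a hypothetical name; the point is that a restarted isolate re-runs the whole function from the top but skips steps whose results were already persisted.

```typescript
// Crash-and-replay sketch: step results persist in `state` (a stand-in
// for a Durable Object), so a retry re-enters the run function but only
// executes steps that never completed.
async function runWithReplay(
  state: Record<string, unknown>,
  run: (doStep: <T>(name: string, fn: () => Promise<T>) => Promise<T>) => Promise<unknown>,
): Promise<unknown> {
  const doStep = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
    if (name in state) return state[name] as T; // replay: skip completed step
    const result = await fn();
    state[name] = result; // persist before advancing
    return result;
  };
  return run(doStep);
}
```

If step "b" throws on the first attempt, a second call with the same `state` returns "a"'s saved result immediately and only re-executes "b", which is exactly the resume semantics described above.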


5. What are practical use cases for Dynamic Workflows?

Consider an AI platform where an agent writes and runs its own tools. The agent generates a step function to process data, fetch APIs, and retry on failure. Dynamic Workflows executes that as a durable workflow without human pre-configuration. Another example: a CI/CD product where each repository defines its pipeline as code. The platform can treat each commit's pipeline as a new workflow instance, with steps that compile, test, and deploy — all surviving build failures. Multi-stage billing systems where each tenant has custom logic (e.g., discounts, tiers) can also benefit. Because the workflow is dynamic, tenants can update their logic without redeploying the platform. Even onboarding flows that vary per user become easy — each user gets a durable plan defined at registration time.
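The CI/CD example above can be sketched as a pipeline-as-data translation: each stage in a repository's config becomes one durable step. The `PipelineStage` type and `pipelineToRun` helper are illustrative names, not a Cloudflare API.

```typescript
// A repo's pipeline, defined as data, translated into durable steps.
type PipelineStage = { name: string; cmd: string };

function pipelineToRun(stages: PipelineStage[]) {
  return async (
    doStep: <T>(name: string, fn: () => Promise<T>) => Promise<T>,
  ): Promise<string[]> => {
    const log: string[] = [];
    for (const stage of stages) {
      // Each stage is one durable step: if "deploy" fails, a retry
      // resumes here without re-running "build" or "test".
      const ran = await doStep(stage.name, async () => stage.cmd);
      log.push(ran);
    }
    return log;
  };
}
```

Because the pipeline arrives as data per commit, the platform never needs a deploy of its own to pick up a tenant's new stages.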

6. How does Dynamic Workflows relate to Dynamic Workers and Durable Object Facets?

Dynamic Workers solved the compute side: you hand the Workers runtime code at runtime and get an isolated Worker. Durable Object Facets extended the same idea to storage — each tenant gets an on-demand SQLite database. Artifacts added source control, a versioned filesystem per tenant. Dynamic Workflows completes the picture by bringing durable execution to this dynamic stack. Now a tenant can have their own code (Dynamic Worker), their own database (Facet), their own filesystem (Artifact), and their own orchestration (Dynamic Workflow). All four primitives work together: a workflow step can spin up a Dynamic Worker, read from a Facet, and write to an Artifact. The platform sees each tenant as a cohesive, isolated unit across compute, storage, versioning, and execution.
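How the four primitives might compose inside one step can be sketched with stub interfaces. Every name below is an illustrative stand-in for Dynamic Workers, Facets, and Artifacts, not their actual APIs.

```typescript
// Stub interfaces standing in for the per-tenant primitives.
interface DynamicWorker { run(input: string): Promise<string>; }
interface Facet { get(key: string): Promise<string | undefined>; put(key: string, value: string): Promise<void>; }
interface Artifact { write(path: string, content: string): Promise<void>; }

// One workflow step composing compute, storage, and versioning for a
// single tenant: build a report unless the Facet has a cached copy, then
// persist it to both the Facet and a versioned Artifact.
async function tenantStep(worker: DynamicWorker, facet: Facet, artifact: Artifact): Promise<string> {
  const cached = await facet.get("report");
  const report = cached ?? (await worker.run("build report"));
  await facet.put("report", report);
  await artifact.write("/reports/latest.txt", report);
  return report;
}
```

The design point is that all three handles belong to the same tenant, so the platform can reason about them, and bill, limit, or delete them, as one isolated unit.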

7. What are the key benefits of Dynamic Workflows for developers?

Developers gain three major benefits: Flexibility — they no longer restrict tenants to predefined workflows; any code that forms a valid step function works. Isolation — each tenant's workflow runs in a separate sandbox, so misbehaving logic cannot affect others. Durability — the engine automatically persists state, handles retries, and resumes after crashes, even with dynamically loaded code. Performance remains high because worker creation is near-instant and runs on the same machine. Additionally, the integration with existing Cloudflare primitives means developers can combine compute, storage, and workflows without complex glue code. For platforms, this means lower operational overhead — they don't need to manage per-tenant deployments of workflow classes. The result is a scalable, secure, and customizable durable execution platform.
