Cloudflare Containers Technical Analysis: Architecture, Applications, and Ecosystem


This section introduces the core concepts of Cloudflare Containers. It is not a conventional container hosting service but a new compute model, one that inherits the serverless, programmable character of the Cloudflare developer platform.

Cloudflare Containers has a simple goal: let developers run code written in any programming language, integrated into a Workers application, without managing the underlying infrastructure. This addresses long-standing limitations of Cloudflare Workers: the inability to run resource-intensive applications, the lack of a complete Linux environment, and the difficulty of migrating existing containerized applications.

The platform’s core marketing message is “simple, global, and programmable.” The “global” aspect shows in its “Region: Earth” deployment model, which abstracts away all regional configuration: developers deploy once and cover the globe. This positions the product as a paradigm shift rather than an iterative improvement, sidestepping the cost and complexity that traditional virtualization and container stacks face when scaling globally.

The most important architectural feature of Cloudflare Containers is that each container instance is tightly bound to a Durable Object (DO) that manages it. This is the biggest difference between this platform and other container platforms.

The request flow is clear: User → Worker → Durable Object → Container. This is different from platforms like AWS Fargate or Google Cloud Run, where containers are usually accessed directly through load balancers or service endpoints.

In this architecture, the Durable Object acts as a “programmable sidecar.” It lets developers use JavaScript/TypeScript in the Worker/DO environment to exercise fine-grained control over the container’s entire lifecycle (start, stop, sleep), its state management, and its routing logic. To simplify this, Cloudflare provides the @cloudflare/containers NPM package, whose Container class extends the base DurableObject class and wraps container-specific APIs and helper functions, abstracting the underlying complexity.

Coupling containers to Durable Objects is a deliberate strategic choice by Cloudflare that builds a powerful but relatively closed ecosystem. The logic chain runs as follows:

  1. The platform’s architecture requires that all interactions with containers must go through a Worker and its associated Durable Object. This means containers cannot be directly exposed to the public internet or accessed directly through traditional load balancers.
  2. This design forces developers to adopt the Workers/Durable Objects programming model (JavaScript/WebAssembly) to orchestrate and manage their containers.
  3. Unlike competitors (such as Fly.io providing language-agnostic HTTP APIs), Cloudflare’s approach requires developers to deeply integrate into its specific development paradigm.
  4. Therefore, this is not just a technical design, but a business strategy. It creates user “stickiness” to the Cloudflare developer platform through deep integration and creates high switching costs. If a team decides to migrate away from Cloudflare Containers, they need to redesign not just how containers are deployed, but the entire orchestration and routing layer. This contrasts sharply with the portability that standard Docker/Kubernetes deployments aim for.

By binding a uniquely addressable stateful entity (Durable Object) with a general computing environment (container), Cloudflare has created a completely new design pattern for stateful serverless applications.

  1. Traditional serverless computing (FaaS) is inherently stateless, with state usually offloaded to external databases or caches.
  2. Durable Objects were introduced to solve this problem. They provide a globally unique, single-threaded Actor with consistent storage co-located with it, perfect for coordination tasks like chat rooms or real-time gaming sessions.
  3. Cloudflare Containers leverages this mechanism perfectly. The idFromName(pathname) pattern in Worker code lets the Worker deterministically route a request to a unique Durable Object for a specific entity (a user session or document ID, say); see the routing sketch after this list.
  4. This Durable Object can then start a dedicated container instance for that entity. The Durable Object holds “session state” or coordination logic, while the container executes heavy, general computing tasks, such as running AI models or code sandboxes.
  5. This architecture elegantly solves the common “per-tenant state” or “session affinity” problems in serverless models, which are usually very complex to implement on other platforms. It effectively combines the coordination capabilities of the Actor model (Durable Objects) with the workload flexibility of containers.
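To make the pattern concrete, here is a minimal Worker routing sketch. It assumes a Durable Object namespace binding named MY_CONTAINER whose class extends the Container helper from @cloudflare/containers; the binding name and path scheme are illustrative, not prescribed.

```ts
interface Env {
  MY_CONTAINER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // e.g. /session/abc123: every request carrying "abc123" reaches
    // the same Durable Object, and therefore the same container.
    const sessionId = new URL(request.url).pathname.split("/")[2] ?? "default";
    const id = env.MY_CONTAINER.idFromName(sessionId);
    const stub = env.MY_CONTAINER.get(id);
    // The Container class inside the DO proxies this to the
    // container's defaultPort.
    return stub.fetch(request);
  },
};
```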

This section analyzes the technical details of how containers run on the Cloudflare network, including image distribution, instance management, and security isolation models.

Image Distribution: When developers run the wrangler deploy command, container images (which must target the linux/amd64 architecture) are pushed to Cloudflare’s private image registry and then automatically distributed to multiple nodes across the global network.

Warming and Deployment: To ensure fast startup, Cloudflare pre-schedules instances across the network and pre-fetches images. When a new container instance is requested through env.YOUR_CONTAINER.get(id), the system selects the nearest location with a warmed image to start the container.

Request Routing: After a container instance starts, all requests for the same instance ID are routed to this specific location, regardless of where new requests originate. This is important for stateful applications that need session persistence.

Idle and Shutdown: Developers can set a sleepAfter timeout for containers. After being idle for that period, a container automatically sleeps to save resources (users pay only for active container time) and wakes on the next request. When a container stops, the system first sends a SIGTERM signal for graceful shutdown; if the container has not exited within 15 minutes, a SIGKILL forces it down. Note that container disks are temporary: the filesystem resets after each sleep or restart.
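Because an instance can be put to sleep or stopped at any time, the process inside the container should treat SIGTERM as its cue to flush anything worth keeping. A minimal Node.js sketch; the flush function is hypothetical:

```ts
// Inside the container: exit cleanly on SIGTERM so the platform
// never has to escalate to SIGKILL.
async function flushScratchState(): Promise<void> {
  // Hypothetical: upload anything worth keeping (e.g., to R2),
  // since the local disk resets on sleep or restart.
}

process.on("SIGTERM", async () => {
  await flushScratchState();
  process.exit(0);
});
```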

The main difference from Cloudflare Workers is that each Cloudflare Containers instance runs in its own dedicated virtual machine (VM). This design provides strong hardware-level isolation from other workloads on the Cloudflare network, a model that is more traditional and easier to reason about than V8 Isolates.

In contrast, Workers use the V8 Isolate model, designed for extremely low overhead and high density: thousands of customer scripts run within a single operating system process, secured by memory-level separation. Side-channel attacks are mitigated by prohibiting native code and by disabling precise timers and multithreading.

Cloudflare’s own Remote Browser Isolation (RBI) product also uses container technology, further highlighting the importance of strong isolation. In RBI, each remote browser runs in a disposable, containerized instance, physically isolating users’ devices from potential network threats. Cloudflare Containers uses the VM model, following the same robust security boundary principles.

Cloudflare offers two different choices in the serverless computing space, each with different balances in performance, security, and functionality.

  1. Workers focus on speed and cost-effectiveness. They use lightweight V8 Isolates and achieve near-zero cold starts. The cost of that performance is functional limitation: they support only JS/Wasm, cannot run arbitrary binaries, and their security model demands careful design to prevent side-channel attacks in a multi-tenant environment.
  2. Containers focus on compatibility and security. They use a one-VM-per-instance model, can run any linux/amd64 binary, and provide a familiar model for developers with strong, understandable isolation boundaries.
  3. The cost of this compatibility and security is performance: cold start times are in seconds rather than milliseconds, and resource overhead is higher.
  4. This is not a platform flaw, but a strategic product decision. Cloudflare is actually sending a clear message to developers: “For super-fast, lightweight tasks, use Workers. For heavier, more complex workloads that need a complete Linux environment and can accept a few seconds of cold start time, use Containers.” This creates a clear decision framework within the Cloudflare platform.

The underlying technology supporting the “Region: Earth” model is a complex, two-tier global scheduling system built on Cloudflare’s own products.

  1. Cloudflare’s internal container platform (powering Workers AI and now public Containers) is custom-built because existing off-the-shelf solutions couldn’t meet their global scaling needs.
  2. The architecture consists of a global scheduler (built on Workers, Durable Objects, and KV) and multiple location-deployed local schedulers.
  3. The global scheduler makes high-level placement decisions (e.g., “this container needs GPU, send it to a location with available GPU capacity”), while local schedulers handle placing containers on specific physical servers (“metals”) within data centers.
  4. The whole system is tied together by Cloudflare’s Anycast network and an L4 packet-forwarding layer called the “Global State Router” (built on eBPF). This router dynamically maps virtual IPs to the most suitable containers based on container health, latency, and readiness.
  5. This tech stack represents a massive engineering investment that competitors can’t easily replicate. It allows Cloudflare to abstract away the concept of “regions” for end users, which is fundamentally different from cloud providers like AWS/GCP/Azure that still require users to think about and manage regional deployments. This extreme operational simplicity is their key competitive advantage.

This section provides an end-to-end developer workflow walkthrough, from local setup to global deployment, focusing on key configurations and code patterns.

Docker Dependency: Developers must run Docker locally because Wrangler uses it to build container images during wrangler deploy.

Wrangler CLI: Wrangler is the main tool for managing the entire lifecycle. Projects can be quickly initialized with the command npm create cloudflare@latest -- --template=cloudflare/templates/containers-template.

Local Development (wrangler dev): The wrangler dev command starts a local Worker that can route requests to a container built and running locally. It also supports hot reloading of Worker code, greatly improving development efficiency.

To use Containers in a project, you need to add a [[containers]] block to the Wrangler configuration file.

Key Fields:

  • binding: The binding name available in the Worker’s env object (e.g., MY_CONTAINER).
  • image: Path to a Dockerfile or a pre-built image in an image registry.
  • class_name: The name of the Durable Object class that will manage this container. This is the explicit link between the container definition and its DO orchestrator.
  • instance_type: Specifies the container’s resource specifications (dev, basic, or standard).

Additionally, the Durable Object class associated with the container must be declared in the configuration via new_sqlite_classes (a migration setting), as shown in the sketch below.
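A configuration sketch combining the fields above (TOML shown; all names are illustrative, and the exact schema may differ between Wrangler versions):

```toml
[[containers]]
binding = "MY_CONTAINER"       # name exposed on the Worker's env object
image = "./Dockerfile"         # or a pre-built image in a registry
class_name = "MyContainer"     # the Durable Object class that orchestrates it
instance_type = "basic"        # dev | basic | standard

[[migrations]]
tag = "v1"
new_sqlite_classes = ["MyContainer"]
```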

The @cloudflare/containers NPM package is the high-level abstraction layer for interacting with containers.

Container Class: Developers should extend this class rather than inheriting from DurableObject directly.

Lifecycle Hooks: The onStart(), onStop(), and onError() methods can be overridden to execute custom logic when container state changes.

Configuration Properties:

  • defaultPort: The port the container listens on internally; fetch() requests are proxied to it.
  • sleepAfter: Idle timeout (e.g., "30s", "5m").
  • envVars: A record of environment variables to pass to the container.
  • entrypoint: Override the container’s default entrypoint.

Core Methods:

  • container.fetch(request): Forward an HTTP request to the container’s defaultPort. It can also automatically handle WebSocket upgrade requests.
  • container.start() / container.stop(): Manually control the container lifecycle.
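Putting the configuration properties, lifecycle hooks, and methods together, a Container subclass might look like the following sketch (all values are illustrative):

```ts
import { Container } from "@cloudflare/containers";

export class MyContainer extends Container {
  // Port the containerized app listens on; fetch() proxies here.
  defaultPort = 8080;
  // Sleep after 5 idle minutes to stop accruing compute charges.
  sleepAfter = "5m";
  // Environment variables handed to the container at startup.
  envVars = { LOG_LEVEL: "info" };

  onStart() {
    console.log("container started");
  }

  onStop() {
    console.log("container stopped");
  }

  onError(error: unknown) {
    console.error("container error:", error);
  }
}
```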

Deployment Command: npx wrangler deploy runs the complete process: building the image, pushing it to the Cloudflare Registry, and deploying the Worker code. The first deployment may take several minutes while resources are provisioned.

Atomic Updates (Caveat): During a deployment, Worker code updates immediately while container images roll out gradually. New Worker code must therefore stay backward compatible with old container code to avoid failures during the transition period.

Verification Commands:

  • npx wrangler containers list: Shows the status of container services in the account.
  • npx wrangler containers images list: Lists images pushed to the image registry.

Observability: Container logs and metrics can be viewed in the “Containers” section of the Cloudflare dashboard.

The entire workflow revolves around Wrangler and the Workers programming model, revealing the product’s target audience and design philosophy.

  1. The main operational interface is not docker-compose.yml or Kubernetes YAML files, but wrangler.toml and a JavaScript/TypeScript class.
  2. Orchestration concerns such as routing, custom scaling, and lifecycle hooks are written in JavaScript, rather than in the declarative Infrastructure as Code (IaC) tooling common in the container world.
  3. This design makes the platform very easy to pick up and intuitive for existing Cloudflare Workers developers, as a natural extension of their current workflow.
  4. However, for a team coming from a pure Kubernetes or Docker Swarm background, this is a completely new paradigm. They must learn the Workers/Durable Objects model to effectively use Cloudflare Containers. This reinforces the point that Cloudflare Containers is a feature of the Workers platform, not an independent competitor to Kubernetes.

This section explores how to build applications on Cloudflare Containers, focusing on state management and proposing specific design patterns.

Stateless, Load-Balanced Services: Suitable for workloads like typical API backends where any instance can handle any request.

  • Pattern: After receiving a request, the Worker uses simple routing logic (e.g., the getRandom helper from the official examples) to pick one of a fixed pool of container IDs and forward the request; see the sketch after this list.
  • Example: A service that uses FFmpeg to convert video files to GIFs. Any container instance can perform this conversion task.
  • Limitations: True auto-scaling and latency-based load balancing are not yet available but are on the roadmap; implementing these patterns today requires manual over-provisioning.
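A sketch of this stateless pattern using the getRandom helper mentioned above (the pool size of 5 is arbitrary):

```ts
import { getRandom } from "@cloudflare/containers";

interface Env {
  MY_CONTAINER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Spread requests over a fixed pool of 5 instances; any one of
    // them can run the conversion, so placement does not matter.
    const container = await getRandom(env.MY_CONTAINER, 5);
    return container.fetch(request);
  },
};
```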

Stateful, Individually Addressable Services: Suitable for workloads where requests for specific entities must be routed to the same dedicated container instance.

  • Pattern: The Worker uses env.MY_CONTAINER.idFromName(uniqueId) to derive a persistent ID from a request parameter (a session ID or document ID, say). This guarantees that all requests for that uniqueId are routed to the same Durable Object and its associated container.
  • Example: Secure code sandboxes for executing user-generated or AI-generated code, where each user gets their own isolated environment. Another example is a WebSocket-based collaborative application in which users in the same “room” connect to the same container instance, as sketched below.
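For the collaborative “room” case, the same idFromName routing applies, and because container.fetch handles WebSocket upgrade requests, the Worker can simply proxy the upgrade through (the ?room= parameter is illustrative):

```ts
interface Env {
  MY_CONTAINER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Everyone joining the same room reaches the same container,
    // which then owns all WebSocket connections for that room.
    const room = new URL(request.url).searchParams.get("room") ?? "lobby";
    const stub = env.MY_CONTAINER.get(env.MY_CONTAINER.idFromName(room));
    return stub.fetch(request); // WebSocket upgrades are proxied too
  },
};
```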

Temporary Disk Caveat: The container’s local filesystem is temporary and is cleared on every restart or sleep, so it is suitable only for scratch data.

Pattern 1: Using Durable Objects for Session State Management: The Durable Object orchestrating the container is the ideal place to store small amounts of critical, strongly consistent session state.

  • The Durable Object storage API provides a transactional, strongly consistent key-value store co-located with the DO’s execution environment, giving extremely low latency.
  • Use Case: A collaborative whiteboard application. The Durable Object can store the whiteboard’s current state, user list, and cursor positions, while the container handles complex rendering or physics calculations. The DO serves as the single source of truth and coordination point.
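A sketch of this split, assuming the Container subclass keeps whiteboard metadata in its own DO storage while the container does the heavy rendering (the method names and key scheme are hypothetical):

```ts
import { Container } from "@cloudflare/containers";

export class WhiteboardContainer extends Container {
  defaultPort = 8080;
  sleepAfter = "10m";

  // Small, strongly consistent session state lives in the DO's
  // transactional storage; heavy work runs in the container.
  async updateCursor(userId: string, pos: { x: number; y: number }) {
    await this.ctx.storage.put(`cursor:${userId}`, pos);
  }

  async listUsers(): Promise<string[]> {
    const cursors = await this.ctx.storage.list({ prefix: "cursor:" });
    return [...cursors.keys()].map((k) => k.slice("cursor:".length));
  }
}
```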

Pattern 2: Using R2 for Persistent Object Storage: For large unstructured data (like user uploads, generated artifacts, logs), containers should read and write to Cloudflare R2.

  • R2 is an S3-compatible object storage service whose biggest advantage is zero egress fees, making it extremely cost-effective.
  • Use Case: The aforementioned FFmpeg container can read source videos from an R2 bucket and write generated GIFs back to R2. This way, the container itself remains stateless.
  • Integration: Containers can access R2 through its S3-compatible API, or be accessed by the orchestrating Worker through R2 bindings. The roadmap plans to provide first-party APIs for directly mounting R2 buckets from containers, which will further simplify this pattern.
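A sketch of the container-side R2 access path using the S3-compatible API. The bucket names, keys, and the credential plumbing via envVars are all assumptions:

```ts
// Runs inside the container (Node.js). Credentials and the account ID
// are assumed to be injected via envVars by the orchestrating DO.
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

export async function transcode(key: string): Promise<void> {
  // Read the source video, convert it, write the GIF back; the
  // container itself stays stateless.
  const src = await r2.send(new GetObjectCommand({ Bucket: "videos", Key: key }));
  const gif = await runFfmpeg(await src.Body!.transformToByteArray());
  await r2.send(new PutObjectCommand({ Bucket: "gifs", Key: `${key}.gif`, Body: gif }));
}

// Hypothetical stand-in for the actual FFmpeg invocation.
async function runFfmpeg(input: Uint8Array): Promise<Uint8Array> {
  return input;
}
```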

Pattern 3: Using D1 or Hyperdrive for Relational Data: For structured relational data, containers can connect to databases.

  • D1: Cloudflare’s serverless SQLite database, suitable for lightweight relational data needs.
  • Hyperdrive: A connection pooling service that can accelerate connections to existing PostgreSQL or MySQL databases, enabling containers to efficiently interact with traditional databases hosted elsewhere.

The platform’s design strongly discourages storing state inside containers; this is deliberate architectural guidance.

  1. Container disks are temporary, and any data written will be lost after restart.
  2. The sleepAfter feature, the core cost-saving mechanism, requires that containers can be shut down and restarted at any time without losing data.
  3. All recommended design patterns involve externalizing state to other Cloudflare products: using Durable Objects for coordination and session state management, using R2 for object storage, and using D1 for relational data.
  4. This architecture actually forces developers to adopt a “state decoupling” mindset. Containers become pure computing engines, while state management is handled by specialized, persistent services on the platform. This is itself a best practice in modern cloud-native design, and Cloudflare’s architecture enforces this through its design.

This section provides multi-dimensional comparisons to help decision-makers understand Cloudflare Containers’ positioning in the competitive landscape.

This is the most fundamental choice facing Cloudflare platform developers.

  • Choose Workers when: You need sub-millisecond cold start times, logic can be expressed in JS/TS/Wasm, resource requirements are low (e.g., less than 128MB memory), and you’re building event-driven features like API middleware or request transformation.
  • Choose Containers when: You need to run legacy applications, use languages other than JS/TS/Wasm (like Python, Go, Rust, Java), need a complete Linux filesystem or specific binaries (like FFmpeg, Pandoc), or need larger memory/CPU allocations (up to 4GiB memory during Beta). The prerequisite is that a few seconds of cold start time is acceptable for the use case.

Table 1: Feature and Capability Comparison: Workers vs. Containers

| Feature/Capability | Cloudflare Workers | Cloudflare Containers | Analysis and Recommendations |
| --- | --- | --- | --- |
| Runtime Environment | V8 Isolate (sandboxed) | Dedicated virtual machine (VM) | Workers offer the highest performance and density but limit languages; Containers offer a fully compatible Linux environment at higher overhead. |
| Language Support | JavaScript, TypeScript, WebAssembly | Any language packaged as a linux/amd64 Docker image | Containers greatly expand the platform's language ecosystem, crucial for migrating existing applications. |
| Cold Start Time | Sub-millisecond | About 2-3 seconds (Beta) | Workers suit latency-critical paths; Containers suit asynchronous or long-running tasks that tolerate second-level startup delays. |
| Max CPU/Memory | 128 MB memory, 10-50 ms CPU/request | Up to 4 GiB memory, 1/2 vCPU (Beta) | Containers address Workers' shortcomings in resource-intensive computing. |
| Filesystem Access | No persistent filesystem | Temporary, writable Linux filesystem | Containers can run traditional tools and libraries that read/write temporary files. |
| State Management Model | Stateless (relies on KV, R2, DOs) | Stateless by default (orchestrated by a DO; relies on R2, D1) | The container model explicitly treats containers as compute units, with state managed by external services. |
| Security Model | Isolate memory isolation, no native code | Hardware-level VM isolation | Containers provide a more traditional, stronger security boundary, suitable for running untrusted code. |
| Ideal Use Cases | API gateways, edge logic, authentication, A/B testing | Code sandboxes, batch processing, media processing, legacy app backends | Complementary, not competing; choose by task characteristics. |

Cloudflare Containers: Provides ultimate operational simplicity. No need to manage clusters, configure nodes, or maintain control planes. Deployment requires only a wrangler deploy command. It’s a fully managed, opinionated platform.

Kubernetes/Docker: Provides maximum control and flexibility. Developers need to manage the entire tech stack, from networking (CNI) and storage (CSI) to service discovery and orchestration logic. It has a huge and mature tool ecosystem (like Helm, Prometheus), but also comes with enormous operational complexity and a steep learning curve.

Core Difference: Cloudflare Containers abstracts “how to do it” (orchestration), letting developers focus on “what to do” (application code). Kubernetes exposes “how to do it” as configurable APIs to developers. You could say Cloudflare Containers is a Platform as a Service (PaaS), while Kubernetes is an Infrastructure as a Service (IaaS)/Container as a Service (CaaS) framework.

Architectural Philosophy:

  • Cloudflare: Durable Object-centric orchestration, enforcing a specific, programmable interaction model. Global by default.
  • Fargate/Cloud Run: More traditional service/endpoint model. Developers need to configure load balancers and VPCs (for Fargate). Scaling usually based on requests. They are inherently regional, requiring manual configuration for multi-region deployment.

Networking:

  • Cloudflare: Only supports HTTP/WebSocket ingress through Worker proxy. No direct public TCP/UDP access.
  • Fargate/Cloud Run: Provide more flexible networking options, with Fargate deeply integrated with AWS VPC.

Vendor Integration:

  • Cloudflare: Deep and tight integration with its own platform (Workers, R2, DOs). This is both an advantage (seamless experience) and disadvantage (vendor lock-in).
  • Fargate/Cloud Run: Deep integration with their respective cloud ecosystems (IAM, CloudWatch, S3, etc.).

Table 2: Serverless Container Platform Showdown: Cloudflare vs. Fargate vs. Cloud Run

| Attribute | Cloudflare Containers | AWS Fargate | Google Cloud Run |
| --- | --- | --- | --- |
| Orchestration Model | Durable Object as programmable sidecar | Task Definition | Service / Revision |
| Deployment Unit | Worker + container image | Task / Pod (EKS) | Container image |
| Global Deployment | Global by default ("Region: Earth") | Regional; manual cross-region deployment and configuration | Regional; manual cross-region deployment and configuration |
| Network Ingress | HTTP/WebSocket only (through a Worker) | Flexible (ALB/NLB, VPC) | Flexible (internal/external HTTP(S) load balancer) |
| Scaling Model | Manual (Beta); auto-scaling planned | Based on CPU/memory/requests (ECS/EKS) | Request-based, configurable concurrency |
| State Management | Relies on DOs, R2, D1 (externalized) | Relies on EFS, S3, RDS (externalized) | Relies on Filestore, GCS, Cloud SQL (externalized) |
| Key Integrations | Workers, Durable Objects, R2 | ECS, EKS, IAM, VPC, CloudWatch | GCS, Pub/Sub, Cloud Build, IAM |
| Primary Control Interface | Wrangler CLI, JS/TS Container class | AWS CLI/SDK, CloudFormation | gcloud CLI/SDK, YAML configuration |

This section provides quantitative data needed for evaluation, including resource limits and detailed pricing models.

Instance Types:

  • dev: 256 MiB memory, 1/16 vCPU
  • basic: 1 GiB memory, 1/4 vCPU
  • standard: 4 GiB memory, 1/2 vCPU
  • Larger instance types are planned.

Account-Level Concurrency Limits:

  • Total memory: 40 GB
  • Total vCPU: 20
  • Total disk: 100 GB
  • These are temporary limits during Beta and will be increased in the future.

Image Limits:

  • Maximum image size: 2 GB
  • Total image storage per account: 50 GB.

Networking:

  • No support for inbound TCP/UDP requests from end users.

Durable Objects Limitations:

  • DOs orchestrating containers are also subject to their own limits, such as a soft limit of 1,000 requests per second per object, and storage limits that vary by backend type.

Table 3: Technical Specifications and Resource Limits (Beta)

| Feature/Limit | Value (Workers Paid Plan) | Notes/Roadmap |
| --- | --- | --- |
| Instance Types (Memory/vCPU) | dev (256 MiB, 1/16), basic (1 GiB, 1/4), standard (4 GiB, 1/2) | Larger instances planned. |
| Account Concurrent Memory | 40 GB | Beta limit; will be raised. |
| Account Concurrent vCPU | 20 vCPU | Beta limit; will be raised. |
| Account Concurrent Disk | 100 GB | Beta limit; will be raised. |
| Maximum Image Size | 2 GB | - |
| Total Image Storage | 50 GB | - |
| Network Ingress | HTTP/WebSocket only (through a Worker) | No TCP/UDP plans currently. |
| Scaling Method | Manual (get(id)) | Auto-scaling and latency-aware routing are top roadmap priorities. |

Core Requirement: Using Containers requires subscribing to the Workers Paid plan ($5 per month).

Compute Billing: Billed per 10 milliseconds of active runtime.

  • Memory: $0.0000025 per GiB-second
  • CPU: $0.000020 per vCPU-second
  • Disk: $0.00000007 per GB-second
  • The Workers Paid plan includes monthly free allowances for each of the above.

Network Egress: Priced per GB, varies by region, and includes monthly free allowances.

Additional Costs: This is a critical and often overlooked component.

  • Workers Requests/Duration: The ingress Worker routing to containers will be billed according to standard Workers pricing.
  • Durable Objects Requests/Duration/Storage: The DO orchestrating containers will be billed according to standard DO pricing.
  • R2/D1/KV Storage and Operations: Any backend storage services used will incur their own charges.

High Utilization Workloads: For continuously running services, Cloudflare Containers may cost more than always-on PaaS or VPS options. A cost comparison for a 2 vCPU / 4 GB memory instance running 24/7 puts Cloudflare Containers (~$130/month) above AWS Fargate (~$71/month) and DigitalOcean (~$50/month).
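The ~$130 figure follows directly from the unit prices listed above; a back-of-envelope sketch, assuming a 30-day month and ignoring disk, egress, and the plan's free allowances:

```ts
const SECONDS_PER_MONTH = 30 * 24 * 3600;         // 2,592,000 s
const cpu = 2 * SECONDS_PER_MONTH * 0.000020;     // 2 vCPU      -> ~$103.68
const mem = 4 * SECONDS_PER_MONTH * 0.0000025;    // 4 GB memory -> ~$25.92
console.log(`~$${(cpu + mem).toFixed(2)}/month`); // ~$129.60
```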

Bursty/Idle Workloads: The “scale-to-zero” capability achieved through sleepAfter timeout is its key cost advantage. For workloads that are idle most of the time (like cron jobs, on-demand user sandboxes), Containers may be much cheaper than paying for always-on instances.

Egress Cost Advantage: R2’s zero egress fees can save huge costs for applications serving large amounts of data, potentially offsetting higher compute costs.

Table 4: Detailed Pricing Model and Cost Estimation Scenarios

Part A: Pricing Components

| Component | Unit | Price (USD) | Workers Paid Plan Included Allowance |
| --- | --- | --- | --- |
| Container Memory | GiB-second | $0.0000025 | 25 GiB-hours/month |
| Container CPU | vCPU-second | $0.000020 | 375 vCPU-minutes/month |
| Container Disk | GB-second | $0.00000007 | 200 GB-hours/month |
| Network Egress (NA/EU) | GB | $0.025 | 1 TB/month |
| Worker Requests | million requests | (standard Workers pricing) | 10 million/month |
| DO Requests | million requests | (standard DO pricing) | (standard DO pricing) |
| DO Duration | GB-second | (standard DO pricing) | 400,000 GB-seconds/month |
| DO Storage | GB-month | (standard DO pricing) | 1 GB/month |

Part B: Cost Scenario Analysis

| Scenario | Cloudflare Containers Est. Cost | AWS Fargate Est. Cost | Analysis |
| --- | --- | --- | --- |
| Scenario 1: Bursty cron job (runs 10 minutes daily, standard instance) | Very low (~$1-2/month) | Higher (pay for an always-on minimum instance or on-demand startup overhead) | Cloudflare's scale-to-zero and 10 ms billing give it an extreme cost advantage here. |
| Scenario 2: High-utilization API (24/7, standard instance) | Higher (~$77/month) | Medium (~$35/month, 0.5 vCPU / 4 GB) | For continuous high load, always-on models billed by the hour/minute, like Fargate, may be more economical. |
| Scenario 3: Stateful user sandboxes (1,000 users, each active 30 minutes daily, basic instance) | Medium (depends on concurrency) | Very high and complex (session affinity and tenant isolation must be built by hand) | The DO+Container model natively supports this use case at far lower operational cost than hand-building it on Fargate. |

Here’s a summary of planned key features designed to address many current limitations:

  • Global Auto-scaling and Latency-aware Routing: This is the most anticipated feature, which will allow true serverless scaling of stateless container pools through a simple configuration flag.
  • Higher Limits and Larger Instances: Increase account-level concurrency limits and individual instance resource caps to support more demanding workloads.
  • Deeper Platform Integration: Provide first-party APIs for easier interaction with other Cloudflare services (like directly mounting R2 buckets).
  • Enhanced Communication Capabilities: Provide an exec command allowing shell command execution in containers from within Workers, and support for containers to initiate requests to Workers.
  • Co-location of Durable Objects and Containers: Implement running DOs and their managed containers on the same physical server to reduce latency in the request path.

Suitable Scenarios:

  • On-demand Isolated Environments: Suitable for running user code, AI model inference, or temporary development environments. DO’s addressing pattern is naturally suited for these scenarios.
  • Cron-based Scheduled Batch Jobs: Running resource-intensive tasks at specified times (like daily reports, data processing), where the “scale-to-zero” model can bring significant cost savings.
  • Migrating Lightweight Applications: Migrating existing container applications with moderate performance requirements, especially those already using the Cloudflare ecosystem.

Exercise Caution For:

  • High-performance Low-latency APIs: Cold start times and additional hops through Durable Objects may cause unacceptable latency. Workers are recommended for these scenarios.
  • Services Requiring TCP/UDP Ingress: Currently not supported.
  • Applications Requiring Persistent Filesystems: Temporary disks are not suitable for traditional databases or CMS; state must be externalized.

Architecture Recommendations:

  • Treat Cloudflare Containers as a specialized computing tool, not a general container orchestrator. It should be part of event-driven architecture on the Cloudflare developer platform.
  • Design “platform-native” applications that fully leverage the unique Worker → DO → Container pattern.
  • Build a hybrid architecture: use Workers for high-traffic, low-latency edge logic, delegate heavy, complex, or non-JS tasks to Containers. Meanwhile, use R2, D1, and KV as unified state and storage layers.

Cloudflare Containers is a powerful and innovative addition to the serverless space, providing a unique and highly programmable model for running containerized workloads. Its main strengths lie in deep integration with the Workers platform and “global by default” architecture, which greatly simplifies operations for global applications.

However, in its current Beta version, it’s not a universal replacement for Kubernetes or other serverless container platforms. Its unique, Durable Object-centric architecture and current limitations make it best suited for specific use cases that align with its design philosophy.

For organizations already invested in the Cloudflare ecosystem, it opens up a new world of possibilities. For external evaluators, the key decision is whether accepting a proprietary but powerful ecosystem is worth the operational simplicity and unique stateful serverless patterns. The platform’s future success will largely depend on delivering its ambitious roadmap, particularly in auto-scaling and performance optimization.
