Tao

The Complete Cloudflare Wrangler Guide: From Local Development to Global Deployment

Cloudflare Wrangler is the official command-line interface (CLI) tool designed for building, managing, and deploying applications on Cloudflare’s developer platform. Think of it as the main bridge between your local development environment and Cloudflare’s global network, letting you seamlessly interact with all of Cloudflare’s serverless products. As part of the broader Cloudflare Workers SDK, Wrangler works hand-in-hand with the create-cloudflare (C3) project scaffolding tool and the Miniflare local simulator to create a complete and efficient development toolchain.

Wrangler isn’t just a deployment tool anymore – it’s evolved into a comprehensive platform CLI. Initially, it was all about managing Cloudflare Workers, but as Cloudflare’s developer platform has grown, so have Wrangler’s capabilities. Today, it can handle the entire suite of serverless products including Cloudflare Pages, D1 databases, R2 object storage, Workers KV key-value storage, Queues messaging, the Hyperdrive database accelerator, and Vectorize vector databases.

This evolution reflects Cloudflare’s strategic direction – delivering an integrated, full-stack serverless development experience. When developers master Wrangler, they’re essentially learning how to harness the entire Cloudflare ecosystem, making Wrangler the central entry point for building any application on the Cloudflare platform.

Wrangler plays a crucial role in Cloudflare’s serverless ecosystem, supporting everything from simple Worker scripts and static websites to complex full-stack applications and APIs. The tool’s multi-language support – including JavaScript, TypeScript, Python, and Rust – really highlights the flexibility and broad appeal of the Cloudflare platform.

Wrangler’s design philosophy is all about providing a unified command-line experience for managing your application’s entire lifecycle. You can use it to initialize projects, develop and test locally, configure and bind cloud resources, and finally deploy your application to Cloudflare’s global network. This unified toolchain simplifies the development process, reduces the mental overhead of switching between different tools, and ultimately makes you more productive. Whether you’re deploying a static frontend app or building a complex backend API that connects to multiple data stores, Wrangler has the commands and configuration options you need to get the job done.

To use Wrangler effectively, you need to understand several key concepts behind it. These concepts form the foundation of Cloudflare’s serverless platform and deeply influence how Wrangler is designed and what it can do.

Edge computing is the core value proposition of Cloudflare Workers. Unlike traditional centralized cloud models, edge computing deploys your code across Cloudflare’s global network of data centers, putting it physically closer to end users. When a user makes a request, it gets routed to the nearest edge node for processing instead of traveling to some distant central server. This architecture dramatically reduces network latency, improving your application’s response times and overall user experience. Wrangler’s deploy command is what actually distributes your code to this global edge network.

Bindings are the key mechanism for connecting your Worker to other Cloudflare resources. Technically speaking, a binding is a variable available in your Worker’s runtime environment, typically accessed through the env object. This variable provides a programming interface that lets your Worker interact with KV namespaces, D1 databases, R2 buckets, or even other Worker services. For example, by binding a D1 database to a variable called DB, you can use env.DB.prepare(...).run() in your code to execute SQL queries. The concept of bindings is fundamental to building feature-rich, complex applications – it decouples compute (your Worker) from state (storage, databases, etc.) and manages everything declaratively through Wrangler’s configuration files.
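
For illustration, here’s a sketch of what binding access looks like in practice. The `DB` binding name is the hypothetical one from the paragraph above, and the mock only imitates the shape of the real D1 client so the example can run outside the Workers runtime:

```javascript
// Sketch: a Worker reading a D1 binding from the `env` object.
// `DB` is a hypothetical binding name declared in wrangler.jsonc.
const worker = {
  async fetch(request, env) {
    // env.DB is injected by the runtime based on the binding configuration
    const { results } = await env.DB.prepare("SELECT 1 AS ok").all();
    return new Response(JSON.stringify(results), {
      headers: { "content-type": "application/json" },
    });
  },
};

// A minimal mock of the D1 binding surface used above, purely for
// local illustration – the real object is provided by the runtime.
const mockEnv = {
  DB: {
    prepare: (sql) => ({
      all: async () => ({ results: [{ ok: 1 }] }),
    }),
  },
};

// Invoke the handler the way the runtime would
const res = await worker.fetch(new Request("https://example.com/"), mockEnv);
console.log(await res.text()); // → [{"ok":1}]
```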

Cloudflare Workers run on a high-performance JavaScript/WebAssembly runtime called workerd. Unlike containers used by many other serverless platforms, workerd is based on V8 Isolates technology. An Isolate is a lightweight execution context that provides memory isolation while avoiding the overhead of spinning up a complete operating system. This architecture enables Workers to achieve near-zero cold start times because starting a new Isolate is much faster than starting a container. workerd is open source, and it powers not only production environments but also Wrangler’s local development environment, ensuring high consistency between development and production.

Before you can start using Wrangler, you need to make sure your development environment meets some basic requirements. First up, you’ll need a Cloudflare account – this is essential for deploying and managing any Cloudflare services. Next, your local system needs to have Node.js and its package manager npm installed. Wrangler supports Node.js Current, Active, and Maintenance versions. Many tutorials and features specifically require Node.js version 16.17.0 or higher.

To avoid potential permission issues when installing npm packages globally and to easily switch between different Node.js versions, Cloudflare strongly recommends using a Node version manager like nvm or Volta. Finally, Wrangler itself has operating system requirements that align with workerd runtime support: macOS 13.5+, Windows 11, and Linux distributions with glibc 2.35 or later.

There are several ways to install Wrangler, and which method you choose depends on your project needs and personal preferences. In recent years, best practices have shifted from global installation to project-local installation.

Cloudflare’s official documentation recommends installing Wrangler as a local development dependency for each project. The benefits are pretty obvious: it ensures that all team members use exactly the same version of Wrangler, avoiding potential issues caused by version mismatches. This also makes your project’s build process more predictable and reproducible, and allows you to roll back specific projects to earlier Wrangler versions when needed.

  • Using npm: npm i -D wrangler@latest
  • Using yarn: yarn add -D wrangler@latest
  • Using pnpm: pnpm add -D wrangler@latest

This shift from global to local installation reflects the maturity of the JavaScript ecosystem and Wrangler’s evolution as a critical development tool. It’s no longer seen as a simple, standalone system-level tool, but as an essential, version-controlled part of your project’s build and deployment toolchain. This approach aligns with best practices for other modern development tools like ESLint and Prettier, aiming to fundamentally solve the classic “works on my machine” problem.

Many older tutorials and third-party guides commonly show the global installation method: npm install -g wrangler. The convenience of global installation is that you can run wrangler commands directly from any directory on your system. However, this approach can lead to conflicts between different projects that depend on different Wrangler versions. While global installation is still available, local installation has become the preferred choice for modern web development.

For macOS users, you can install via Homebrew: brew install cloudflare-wrangler. For Rust developers or on certain ARM architecture systems where the npm installer might not work properly, you can also install using cargo: cargo install wrangler.

After installation, the next step is authorizing Wrangler to access your Cloudflare account. wrangler login is the main command for this task. This command initiates a standard OAuth 2.0 authorization flow. When you run it, it automatically opens your default web browser and navigates to Cloudflare’s login and authorization page. After you log in and approve Wrangler’s access request, Cloudflare generates an OAuth token that Wrangler securely stores for future use in performing management operations on your behalf.

For headless environments like SSH-connected remote servers, wrangler login will display a URL in the terminal. You can copy this URL and open it on any device with a browser; once you complete the authorization flow there, the CLI picks up the resulting credentials automatically.

To verify that authentication was successful, you can run the wrangler whoami command. This command displays information about the currently logged-in Cloudflare user and the associated account ID – it’s a good verification step.

In automated environments like continuous integration/continuous deployment (CI/CD), interactive login isn’t practical. In these cases, the preferred authentication method is using a Cloudflare API token provided to Wrangler through the CLOUDFLARE_API_TOKEN environment variable. While the wrangler config command also supports configuration with API tokens, passing them through environment variables is more flexible and secure, making it the standard practice for automated workflows.

The most standard and recommended way to start a new project is using the create-cloudflare (C3) command-line tool. Just run this command in your terminal:

```bash
npm create cloudflare@latest
```

C3 guides you through an interactive setup process. It’ll ask for your project name, which template to use (like a “Hello World” Worker, static website, or full-stack app), development language (JavaScript or TypeScript), and whether you want to initialize a Git repository. After completing these steps, C3 automatically creates a new project directory with all the necessary files and configuration, and installs Wrangler as a local development dependency by default.

As an alternative to C3, the wrangler init command can create a skeleton wrangler.toml configuration file in an existing directory. This command is useful if you prefer to manually clone a Git repository or start from an existing project.

A standard Wrangler project generated by C3 contains a well-structured set of files and directories that together define your project’s behavior and dependencies.

  • wrangler.jsonc (or wrangler.toml): This is your project’s core configuration file and Wrangler’s “source of truth.” It defines your Worker’s name, main entry file, compatibility date, routing rules, and bindings to other Cloudflare resources like D1, R2, and KV.
  • src/index.ts (or .js): This is your Worker’s source code entry file. All your business logic starts here. This file must export a default object containing a fetch handler to respond to incoming HTTP requests.
  • package.json: This is the standard Node.js project manifest file. It records your project’s metadata (like name and version), dependencies (like Hono framework or database clients), and a set of executable script commands (like dev for starting the local server and deploy for publishing to Cloudflare).
  • node_modules/ and package-lock.json: These are standard parts of the Node.js ecosystem, used for storing installed dependency packages and locking dependency versions to ensure environment consistency.
  • public/ or dist/ directory: For projects that include static assets (like full-stack apps or static websites), there’s usually a directory for storing HTML, CSS, JavaScript, and image files. Wrangler uploads this directory’s contents during deployment.

The default template generated by C3 provides a minimal but fully functional Worker script – this is the best starting point for understanding how Workers operate.

```javascript
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

This code’s structure and behavior deliberately mimic the standard Service Worker API from the web platform. This design choice dramatically lowers the barrier to entry for frontend and web developers getting into backend development, because it lets them leverage their existing web standards knowledge to build server-side logic.

  • export default { … }: This is the standard ES module syntax for defining a Worker. Your Worker’s entry point must be a default-exported object.
  • async fetch(request, env, ctx): This is the core event handler for processing HTTP requests. Whenever a request hits a route assigned to this Worker, the Cloudflare runtime calls this function.
    • request: A standard Request object containing all information about the inbound HTTP request, like URL, method, headers, and body.
    • env: A crucial object that contains all of your Worker’s bindings. Environment variables, secrets, and connection handles to services like KV, D1, and R2 are all provided to your code through the env object.
    • ctx: The execution context object that provides some advanced features. The most commonly used is ctx.waitUntil(), which lets you continue executing some async tasks (like writing logs or analytics data) after you’ve already sent the response back to the client, without blocking the response.
  • return new Response("Hello World!"): The fetch handler must return a standard Response object. This object represents the HTTP response that will be sent back to the client, including status code, headers, and body.
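
The three handler parameters can be sketched with mocked runtime arguments. The mock `ctx` and the deferred “log” task below are illustrative, standing in for objects the Workers runtime normally provides:

```javascript
// Sketch of the fetch handler's three parameters, runnable outside
// the Workers runtime thanks to the mocks further down.
const worker = {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    // Defer non-blocking work (e.g. writing analytics) past the response;
    // ctx.waitUntil keeps the task alive after the response is returned.
    ctx.waitUntil(
      Promise.resolve().then(() => {
        // pretend to write a log entry here
      })
    );
    return new Response(`Hello from ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};

// Mocks standing in for the runtime-provided env and ctx arguments
const pending = [];
const mockCtx = { waitUntil: (promise) => pending.push(promise) };
const res = await worker.fetch(
  new Request("https://example.com/demo"),
  {},
  mockCtx
);
console.log(await res.text()); // → Hello from /demo
```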

This design based on web standard APIs isn’t accidental – it’s a strategic decision by Cloudflare. Instead of creating a proprietary, platform-specific API like some other serverless platforms, they chose to embrace open web standards. This means any developer familiar with modern web development, especially those who’ve used Service Workers for PWAs or offline functionality, can transition almost seamlessly to Cloudflare Workers development, greatly shortening the learning curve and making the platform more appealing.

wrangler dev is one of the most essential commands in the Wrangler CLI – it starts up a local development server that lets you rapidly iterate and test your Worker locally before deploying to production.

In earlier versions, wrangler dev worked by proxying local requests to Cloudflare’s edge network. While this approach guaranteed high consistency with production environments, the network latency led to a slower development experience. To solve this problem, the community developed Miniflare, a purely local Worker simulator that became popular for its lightning-fast feedback loops.

Eventually, Cloudflare officially adopted Miniflare and deeply integrated it into Wrangler. Starting with Wrangler v3, “local-first” became the default mode for wrangler dev. More importantly, this local server is now powered by workerd – the exact same open-source C++ runtime that Cloudflare uses to run Workers in production globally. This change brought revolutionary improvements: developers now have a local simulation environment that’s nearly identical to production, dramatically reducing “works locally, breaks in production” issues.

When running wrangler dev, all bindings defined in your wrangler.jsonc (like KV, R2, D1, and Queues) automatically connect to local, memory-based or file-based simulation implementations. This lets developers develop and test quickly and at no cost, without consuming any production resources. Local data persists between wrangler dev sessions by default, and you can specify a persistence directory using the --persist-to flag.

Modern wrangler dev provides a rich set of tools designed to optimize your development “inner loop” (the code-run-debug cycle).

  • Live Reloading: When you save any changes to your code files, the local development server automatically reloads your Worker without needing a manual restart. This instant feedback dramatically improves development efficiency.
  • DevTools Integration: Press the d key in the terminal running wrangler dev, and Wrangler will open a Chrome DevTools instance connected to your local Worker. Through this familiar interface, you can inspect network requests, view console.log output, analyze CPU and memory usage, and perform interactive debugging.
  • Breakpoint Debugging: Wrangler supports breakpoint debugging in major IDEs like VS Code and WebStorm. This typically requires creating a simple configuration in your project’s .vscode/launch.json file to attach your IDE’s debugger to the inspector port exposed by the Wrangler development server (port 9229 by default). Once configured, you can set breakpoints in your code and pause execution when requests hit them, inspect variables, and examine the call stack.
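
A minimal `.vscode/launch.json` along these lines is typical. The configuration name is arbitrary, and port 9229 assumes Wrangler’s default inspector port mentioned above:

```jsonc
// .vscode/launch.json — a sketch of an attach configuration
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to wrangler dev",
      "type": "node",
      "request": "attach",
      "port": 9229 // Wrangler's default inspector port
    }
  ]
}
```

With wrangler dev running, selecting this configuration in VS Code attaches the debugger so breakpoints in your Worker code take effect.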

While local simulation is incredibly powerful, there are times when developers still need to connect their locally running Worker code to real resources deployed on Cloudflare (like a D1 database containing production data or a new service not yet supported in the local simulator). Wrangler provides two modes to support this hybrid development approach.

  • New Method (experimental_remote): This is the recommended way for remote development. You can set "experimental_remote": true for individual bindings in your wrangler.jsonc file. This creates a hybrid development environment: your Worker code still runs on your local machine, enjoying the benefits of fast reloading, but API calls to that specific binding are transparently proxied to the real resource in the cloud.
  • Legacy Method (wrangler dev --remote): The old --remote flag is still available. Rather than running workerd locally, it deploys your code to a temporary preview URL and then tunnels local requests to that remote preview instance. This mode isn’t as performant as local-first mode, but it’s useful for testing edge features that aren’t yet supported by the local simulator.

The architectural evolution of wrangler dev reflects Cloudflare’s deep understanding of and significant investment in developer experience. By open-sourcing their core runtime workerd and making it the foundation of the local simulator, Cloudflare successfully solved a core pain point in serverless development: balancing development speed with production fidelity. This ability to provide a local development environment that’s nearly identical to production not only builds developer confidence but also significantly reduces environment-related bugs, forming a powerful competitive advantage for the Cloudflare platform.

Wrangler uses a configuration file to customize your Worker’s development and deployment settings. Starting with Wrangler v3, it supports both JSON (wrangler.jsonc) and TOML (wrangler.toml) formats, with wrangler.jsonc being the recommended format for new projects.

A crucial best practice is treating this configuration file as the single “source of truth” for your Worker configuration. This means all configuration – including routes, environment variables, and resource bindings – should be declaratively managed in the wrangler.jsonc file. While you can make changes in the Cloudflare dashboard, the next deployment via wrangler deploy will overwrite those changes with what’s in the file, unless you’ve explicitly configured special options like keep_vars = true. This approach ensures your configuration is version-controlled, reviewable, and prevents configuration drift between your local code and cloud state.

A deployable Worker’s minimal configuration requires three core parameters:

  • name (string): Your Worker script’s name.
  • main (string): Path pointing to your Worker’s entry file, like "src/index.ts".
  • compatibility_date (string): A date in YYYY-MM-DD format. This parameter is crucial – it “pins” your Worker to a specific version of the workerd runtime. This prevents future Cloudflare runtime updates (which might include breaking changes) from automatically applying to your Worker, ensuring long-term application stability.
  • compatibility_flags (array): An array of strings that lets you selectively enable new, potentially backward-incompatible runtime features without changing your compatibility_date.
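
Putting these parameters together, a minimal wrangler.jsonc might look like this (the `nodejs_compat` flag is just an illustrative example of a compatibility flag):

```jsonc
// wrangler.jsonc — a minimal sketch of the core parameters
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",
  "compatibility_flags": ["nodejs_compat"] // optional
}
```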

Some other commonly used top-level parameters include:

  • workers_dev (boolean): Controls whether to deploy your Worker to your *.workers.dev subdomain.
  • routes or route: Defines URL patterns where your Worker takes effect on your custom domains.

Wrangler’s powerful environment management feature lets you define multiple configuration sets for the same application within a single wrangler.jsonc file – for example, for development, staging, and production environments. Environments are defined under the env key. When you deploy using the --env flag (like wrangler deploy --env staging), Wrangler creates a new Worker named <project-name>-<environment-name> (like my-worker-staging). This allows you to configure different routes, environment variables, and resource bindings for each environment, achieving complete isolation between environments.

Most top-level configuration keys (like main, compatibility_date) are inherited by environments. However, a key design decision is that bindings (like kv_namespaces, d1_databases, r2_buckets) and variables (vars) are not inheritable. This means even if you define database bindings in your top-level configuration, you still must explicitly redefine them in each environment (like env.staging).

This non-inheritance rule is a deliberate safety design. It prevents one of the most common configuration mistakes – accidentally reading from or writing to production databases in staging environments – by forcing developers to explicitly specify resources for each environment. If bindings were inheritable, developers could easily forget to override production database bindings when adding a new staging environment, potentially leading to catastrophic data contamination. Wrangler’s configuration pattern acts as an important safety guardrail by making the “safe approach” (configuring independent resources for each environment) the required path, embodying a key principle of good developer tool design: making it harder to do the wrong thing.
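
As a sketch of the non-inheritance rule, note how the staging environment must re-declare its own D1 binding; the names and IDs below are placeholders:

```jsonc
// Sketch: bindings do not inherit and must be re-declared per environment.
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",
  "d1_databases": [
    { "binding": "DB", "database_name": "prod-db", "database_id": "<prod-id>" }
  ],
  "env": {
    "staging": {
      // Without this block, env.DB would simply be undefined in staging —
      // it would NOT silently point at prod-db.
      "d1_databases": [
        { "binding": "DB", "database_name": "staging-db", "database_id": "<staging-id>" }
      ]
    }
  }
}
```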

Bindings are the glue that connects your Worker to other parts of the Cloudflare platform. They’re defined in wrangler.jsonc and accessed in your Worker code through the env object. Here are some binding configuration examples for core services:

  • Workers KV: {"kv_namespaces": [...]}
  • D1 Databases: {"d1_databases": [...]}
  • R2 Object Storage: {"r2_buckets": [...]}
  • Environment Variables: {"vars": { "API_URL": "https://api.example.com" }}
  • Secrets: For security reasons, sensitive information (like API keys) shouldn’t be written directly to vars, but should be managed through the wrangler secret command (see Section 8 for details).

Beyond these, Wrangler also supports bindings for many other services like Queues, Durable Objects, Workers AI, Hyperdrive, Vectorize, and more, further cementing Wrangler’s position as a platform-level orchestration tool.

| Configuration Block | Purpose | wrangler.jsonc Example | wrangler.toml Example |
| --- | --- | --- | --- |
| Top-level keys | Define basic Worker properties | `{"name": "my-worker", "main": "src/index.ts", "compatibility_date": "2024-01-01"}` | `name = "my-worker"`, `main = "src/index.ts"`, `compatibility_date = "2024-01-01"` |
| KV namespace binding | Connect a KV namespace to a Worker | `{"kv_namespaces": [...]}` | `[[kv_namespaces]]` with `binding = "MY_KV"`, `id = "..."` |
| D1 database binding | Connect a D1 database to a Worker | `{"d1_databases": [...]}` | `[[d1_databases]]` with `binding = "DB"`, `database_name = "prod-db"`, `database_id = "..."` |
| R2 bucket binding | Connect an R2 bucket to a Worker | `{"r2_buckets": [...]}` | `[[r2_buckets]]` with `binding = "ASSETS"`, `bucket_name = "prod-assets"` |
| Environment definition | Define specific config for different deployment stages (like staging) | `{"env": {"staging": {"vars": {"ENVIRONMENT": "staging"}}}}` | `[env.staging]` with `vars = { ENVIRONMENT = "staging" }` |

Wrangler isn’t just a development and deployment tool – it’s also a powerful cloud resource manager. It provides a series of subcommands that let developers perform CRUD (Create, Read, Update, Delete) operations on Cloudflare’s various storage and data services directly from the terminal.

| Service | Command | Description | Example |
| --- | --- | --- | --- |
| KV | `kv:namespace create` | Create a new KV namespace | `wrangler kv:namespace create MY_KV` |
| KV | `kv:key put` | Write a key-value pair | `wrangler kv:key put --namespace-id=... "my-key" "my-value"` |
| KV | `kv:key get` | Read a key’s value | `wrangler kv:key get --namespace-id=... "my-key"` |
| KV | `kv:key list` | List all keys in a namespace | `wrangler kv:key list --namespace-id=...` |
| D1 | `d1 create` | Create a new D1 database | `wrangler d1 create my-database` |
| D1 | `d1 execute` | Execute a SQL command or SQL file | `wrangler d1 execute my-database --command "SELECT * FROM users"` |
| D1 | `d1 migrations apply` | Apply all pending database migrations | `wrangler d1 migrations apply my-database` |
| D1 | `d1 list` | List all D1 databases in the account | `wrangler d1 list` |
| R2 | `r2 bucket create` | Create a new R2 bucket | `wrangler r2 bucket create my-bucket` |
| R2 | `r2 object put` | Upload a file to an R2 bucket | `wrangler r2 object put my-bucket/image.png --file=./image.png` |
| R2 | `r2 object get` | Download a file from an R2 bucket | `wrangler r2 object get my-bucket/image.png --file=./download.png` |
| R2 | `r2 bucket list` | List all R2 buckets in the account | `wrangler r2 bucket list` |
| Queues | `queues create` | Create a new message queue | `wrangler queues create my-queue` |
| Queues | `queues list` | List all queues in the account | `wrangler queues list` |

Workers KV is a globally distributed key-value store designed for high-read, low-latency scenarios. Wrangler provides a complete set of kv commands to manage it.

Use wrangler kv:namespace create <BINDING_NAME> to create a new KV namespace. This command not only creates the resource but also directly outputs the configuration code snippet you need to add to your wrangler.jsonc file – super convenient! wrangler kv:namespace list and wrangler kv:namespace delete are used for listing and deleting namespaces respectively.

wrangler kv:key put is used for writing data – you can provide values directly or read from a file using the --path flag, which is useful for writing larger values or binary content. The get, list, and delete commands are used for reading, listing, and deleting key-value pairs respectively. All these operations require specifying the namespace to operate on through the --namespace-id flag.

For scenarios that need to handle large amounts of data, the wrangler kv:bulk command provides efficient bulk write and delete functionality by reading from a specifically formatted JSON file to execute operations.
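
The bulk file is a JSON array of key/value objects. A small sketch – the keys, values, and the optional `expiration_ttl` shown here are illustrative:

```json
[
  { "key": "user:1", "value": "{\"name\":\"Ada\"}" },
  { "key": "feature-flag", "value": "on", "expiration_ttl": 3600 }
]
```

You would then run something like wrangler kv:bulk put --namespace-id=... ./bulk.json to write all entries in one operation.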

D1 is Cloudflare’s native serverless SQL database built on SQLite. Wrangler’s d1 subcommands are the primary tool for interacting with D1.

wrangler d1 create <DATABASE_NAME> is used for creating new databases. The list, delete, and info commands are used for basic database lifecycle management.

wrangler d1 execute is a very powerful command that can directly execute SQL queries (via --command) or execute from SQL files (via --file) – the latter is perfect for initializing database schemas. Through the --local and --remote flags, you can precisely control whether the command acts on your local development database or deployed remote database.

D1 has a powerful, built-in migration system that’s completely managed through Wrangler. wrangler d1 migrations create <DB_NAME> <MIGRATION_MESSAGE> creates a new, versioned SQL file in your project’s migrations directory where you can write DDL statements (like CREATE TABLE, ALTER TABLE). wrangler d1 migrations apply <DB_NAME> checks all unapplied migration files and applies them to the database in order, allowing you to evolve your database schema in a controlled, versioned way.
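
A migration file is plain SQL. A sketch of what a generated file might contain – the filename and schema here are illustrative:

```sql
-- migrations/0001_create_users.sql (filename is illustrative)
CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  created_at TEXT DEFAULT (datetime('now'))
);
```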

R2 is Cloudflare’s S3-compatible object storage service, with its main feature being zero egress fees. Wrangler’s r2 command set provides complete management capabilities.

Similar to KV and D1, wrangler r2 bucket create, list, and delete are used for basic bucket lifecycle management.

wrangler r2 object put is used for uploading files. It requires a target path in bucket/key format and specifies the local file path through the --file flag. wrangler r2 object get and delete are used for downloading and deleting objects in R2 respectively.

Cloudflare Queues is a message queue service for decoupling application components and handling async tasks. wrangler queues create <QUEUE_NAME> is used to create a new queue. wrangler queues consumer add <QUEUE_NAME> <SCRIPT_NAME> is a key command that designates a Worker script as a consumer for a specific queue. This means whenever a message is sent to that queue, the Cloudflare platform will invoke this Worker to process the message.
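
A consumer Worker exports a queue handler alongside (or instead of) fetch. The sketch below mocks the message batch so it can run outside the Workers runtime; the message shape is simplified for illustration:

```javascript
// Sketch of a queue consumer Worker. In production, the platform calls
// `queue` with a batch of messages whenever the queue has work.
const worker = {
  async queue(batch, env) {
    for (const message of batch.messages) {
      // process message.body here...
      message.ack(); // explicitly acknowledge successful processing
    }
  },
};

// Mock batch for local illustration — the runtime normally builds this
const acked = [];
const mockBatch = {
  queue: "my-queue",
  messages: [
    { id: "1", body: { task: "resize" }, ack: () => acked.push("1"), retry: () => {} },
  ],
};
await worker.queue(mockBatch, {});
console.log(acked); // → [ '1' ]
```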

wrangler deploy is the core command for publishing your Worker application from your local development environment to Cloudflare’s global edge network. When you run this command, Wrangler performs a series of operations: it builds and bundles your code and its dependencies according to the configuration in wrangler.jsonc, then uploads the generated script to Cloudflare. At the same time, it applies all the settings declared in the configuration file, such as routing rules, environment variables, and resource bindings, completing the entire deployment process.

Wrangler’s environment functionality integrates tightly with the deploy command, enabling fine-grained control over different deployment stages (like development, staging, and production). To deploy to a specific environment, just append the --env (or -e) flag to the command. For example, running wrangler deploy --env staging tells Wrangler to read the configuration from the env.staging section in your wrangler.jsonc file. This makes it possible to deploy a completely independent configuration for staging environments, including using different routes, connecting to staging databases, and setting specific environment variables. If you don’t provide the --env flag, Wrangler defaults to using the top-level configuration in the config file, which is typically designated as the production environment configuration.

Wrangler is designed with automation in mind, making it easy to integrate into any continuous integration/continuous deployment (CI/CD) pipeline. Cloudflare officially provides cloudflare/wrangler-action, a tool specifically designed for GitHub Actions and the recommended way to achieve automated deployment.

A typical CI/CD workflow looks like this:

  1. Trigger: Configure the workflow in a .github/workflows/deploy.yml file to automatically trigger when code is pushed to specific branches (like main or staging).
  2. Authentication: The workflow uses a CLOUDFLARE_API_TOKEN stored in GitHub Secrets to securely authenticate wrangler-action.
  3. Deploy: wrangler-action checks out the code and executes deployment. You can specify the deployment target by passing an environment parameter to the action, which is equivalent to using the --env flag on the command line.
  4. Secrets Management: You can securely pass GitHub Secrets to your Worker in GitHub Actions workflows, and wrangler-action will handle uploading them using the wrangler secret put command.
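
The steps above might be sketched as a workflow file along these lines (the secret name, branch, and environment are illustrative; cloudflare/wrangler-action handles installing Wrangler and authenticating):

```yaml
# .github/workflows/deploy.yml — a sketch of the workflow described above
name: Deploy Worker
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          command: deploy --env production
```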

Wrangler’s environment-aware configuration combined with first-class CI/CD integration creates a powerful GitOps workflow. In this pattern, the wrangler.jsonc configuration file and Git branches become the single source of truth for infrastructure. Developers trigger a predictable, auditable deployment process by committing code and configuration changes to Git. All infrastructure changes are managed through Pull Requests, which not only provides clear change history but also dramatically reduces the risk of deployment errors from manual operations.

In any application, securely managing sensitive information (like API keys, database passwords, and authentication tokens) is crucial. Wrangler provides a powerful set of tools and workflows to tackle this challenge, with the core principle of balancing local development convenience with production environment security.

  • Core Principle: Never store any sensitive information in plain text in the vars section of wrangler.jsonc, and never commit it to version control systems (like Git).
  • Production Secrets: For Workers deployed to Cloudflare, use the wrangler secret put <SECRET_NAME> command to upload sensitive data. Wrangler encrypts and stores these values; they are accessible in your Worker code through the env object, but once created they can no longer be read back through the dashboard or CLI, which keeps them secure. To set secrets for a specific environment, use the --env flag, for example: wrangler secret put DB_PASSWORD --env production.
  • Local Development Secrets: For convenient local development and testing, create a file named .dev.vars in your project root directory. In this file, define the secrets needed for local development in KEY="VALUE" format. Crucially, you must add the .dev.vars file to .gitignore to prevent accidental commits.
  • Environment-Specific Local Secrets: If your local development needs to simulate different environments (like connecting to different local databases), you can create environment-specific secrets files like .dev.vars.staging. When you run wrangler dev --env staging, Wrangler will prioritize loading variables from this file.
  • Cloudflare Secrets Store: For secrets that need to be shared across multiple Workers, Cloudflare provides a Secrets Store service. It supports creating secrets at the account level and provides more fine-grained role-based access control (RBAC), suitable for more complex enterprise application scenarios.
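To make the KEY="VALUE" format concrete, here is a standalone sketch of a parser for .dev.vars-style content. It is purely illustrative — Wrangler reads the file itself and injects the values into your Worker's env object, so you would never need this in a real project:

```javascript
// Illustrative sketch: a minimal parser for the KEY="VALUE" lines
// used in .dev.vars files (Wrangler handles this internally).
function parseDevVars(text) {
  const vars = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // ignore malformed lines
    const key = trimmed.slice(0, eq).trim();
    let value = trimmed.slice(eq + 1).trim();
    // Strip surrounding double quotes if present
    if (value.startsWith('"') && value.endsWith('"')) {
      value = value.slice(1, -1);
    }
    vars[key] = value;
  }
  return vars;
}

const sample = 'API_KEY="local-test-key"\n# a comment\nDB_PASSWORD="hunter2"';
console.log(parseDevVars(sample));
// → { API_KEY: 'local-test-key', DB_PASSWORD: 'hunter2' }
```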

This layered, environment-aware secrets management approach embodies a mature security model. It provides tailored solutions for different stages of the development lifecycle, effectively preventing the most common secrets exposure pathways.

As application complexity increases, splitting it into multiple independent, collaborating Workers is a common architectural pattern (for example, one handling frontend rendering and another providing backend APIs). Wrangler fully supports this multi-Worker development approach. By passing multiple configuration file paths to the wrangler dev command, you can run multiple Workers simultaneously in a single local development session.

bash

npx wrangler dev -c ./frontend/wrangler.jsonc -c ./api/wrangler.jsonc

In this mode, Wrangler treats the first specified configuration file as the primary Worker (typically listening on localhost:8787), while running other Workers as auxiliary services. This is crucial for testing interactions between Workers in your local environment, such as direct RPC calls through Service Bindings, or testing scenarios where one Worker acts as a producer sending messages to a queue while another Worker acts as a consumer processing them.
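For the frontend-plus-API example above, the frontend Worker's wrangler.jsonc would declare a Service Binding so its code can call the API Worker directly. A minimal sketch — the names frontend, api, and the API binding are illustrative:

```jsonc
// frontend/wrangler.jsonc — illustrative sketch
{
  "name": "frontend",
  "main": "src/index.js",
  "compatibility_date": "2024-01-01",
  // Exposes the "api" Worker to this Worker's code as env.API
  "services": [
    { "binding": "API", "service": "api" }
  ]
}
```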

Wrangler’s capabilities aren’t limited to Workers – it can also be used to manage backend logic in Cloudflare Pages projects, specifically Pages Functions. The wrangler pages dev <ASSET_DIRECTORY> command can start a local development server that not only serves static assets from Pages projects but also runs all functions in the functions directory, enabling local simulation of full-stack Pages applications.

Pages Functions can also be configured through wrangler.jsonc files, but there are some key differences from pure Worker project configurations. For example, Pages configuration files must include a pages_build_output_dir key to specify the build output directory for static assets, while the Worker-specific main key doesn’t apply. Additionally, the environment inheritance model differs from Workers.
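A minimal Pages-style wrangler.jsonc might look like the following sketch (the project and directory names are illustrative); note the pages_build_output_dir key in place of main:

```jsonc
// wrangler.jsonc for a Pages project — illustrative sketch
{
  "name": "my-pages-app",
  // Required for Pages: where the static build output lives
  "pages_build_output_dir": "./dist",
  "compatibility_date": "2024-01-01"
  // No "main" key — Pages Functions are discovered in the /functions directory
}
```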

For deployed applications, quickly diagnosing production environment issues is crucial. The wrangler tail command provides a powerful solution for this. It can stream logs from deployed Workers in real-time in your terminal. This includes all console.log output from your Worker code as well as any uncaught exceptions. This is invaluable for debugging sudden issues in production environments because it provides direct insight into your Worker’s real-time behavior. The tail command also supports various filtering options, allowing you to filter the log stream based on conditions like request status, IP address, HTTP method, etc., helping you quickly pinpoint issues in massive log volumes.

This section provides some of the most commonly used Wrangler commands and their usage examples to help you quickly get started with daily development and management tasks.

wrangler init <NAME>: Create a new Worker project; under the hood this delegates to the create-cloudflare (C3) scaffolding tool.

bash

# Create a new project named "my-worker"
npx wrangler init my-worker

wrangler dev: Start a local development server for testing and debugging your Worker. It supports live reloading and DevTools integration.

bash

# Start local server in project directory
npx wrangler dev

wrangler deploy: Deploy your Worker to Cloudflare’s global network.

bash

# Deploy Worker to production environment
npx wrangler deploy

# Deploy to specific environment named "staging"
npx wrangler deploy --env staging

wrangler kv: Manage Workers KV namespaces and key-value pairs.

bash

# Create a new KV namespace named "MY_KV"
# (recent Wrangler versions use space-separated subcommands;
# the older kv:namespace / kv:key colon syntax is deprecated)
npx wrangler kv namespace create MY_KV

# Write a key-value pair to the bound KV namespace
npx wrangler kv key put --binding=MY_KV "my-key" "my-value"

# Read a key's value
npx wrangler kv key get --binding=MY_KV "my-key"

# List all keys in the namespace
npx wrangler kv key list --binding=MY_KV

wrangler d1: Manage D1 databases.

bash

# Create a new D1 database named "my-database"
npx wrangler d1 create my-database

# Execute a SQL command against the local database (the default)
npx wrangler d1 execute my-database --command "SELECT * FROM users"

# Run the same command against the deployed database
npx wrangler d1 execute my-database --remote --command "SELECT * FROM users"

# Create a new database migration file
npx wrangler d1 migrations create my-database "add_users_table"

# Apply all pending migrations
npx wrangler d1 migrations apply my-database

wrangler r2: Manage R2 object storage buckets and objects.

bash

# Create a new R2 bucket named "my-bucket"
npx wrangler r2 bucket create my-bucket

# Upload a file from local to R2
npx wrangler r2 object put my-bucket/image.png --file=./image.png

# Download a file from R2 to local
npx wrangler r2 object get my-bucket/image.png --file=./download.png

wrangler secret: Securely manage environment variables like API keys.

bash

# Create or update a secret named "API_KEY"
npx wrangler secret put API_KEY

# List all configured secrets
npx wrangler secret list

# Delete a secret
npx wrangler secret delete API_KEY

wrangler tail: Stream logs from deployed Workers in real-time for live debugging and monitoring.

bash

# Start listening to logs from Worker named "my-worker"
npx wrangler tail my-worker

# Listen to logs and only show requests with "error" status
npx wrangler tail my-worker --status error

Cloudflare Wrangler has evolved from a simple tool for deploying Cloudflare Workers into the core orchestration engine for the entire Cloudflare developer platform. This guide comprehensively explores Wrangler’s capabilities, from basic installation and project initialization to advanced configuration, resource management, and automated deployment, aiming to provide developers with an authoritative reference.

Analysis shows that Wrangler’s design philosophy and feature evolution demonstrate a deep understanding of modern cloud-native development workflows. Its key advantages include:

Wrangler provides a single command-line interface for managing the complete suite of serverless products, from compute (Workers, Pages) to storage (KV, R2, D1) to messaging (Queues). This unification dramatically simplifies the process of building complex full-stack applications.

By integrating the open-source workerd runtime into wrangler dev, Wrangler provides a local development experience that’s nearly identical to production environments. This not only significantly improves development efficiency through features like live reloading and debugger integration, but more importantly, it increases developer confidence before deployment and reduces errors caused by environment differences.

The wrangler.jsonc configuration file is at the heart of the Wrangler workflow. By version-controlling infrastructure configuration like routes, environment variables, and resource bindings alongside application code, Wrangler encourages and enables GitOps practices. This “configuration as code” approach makes the deployment process predictable, repeatable, and auditable.

Wrangler’s native support for multiple environments (development, staging, production), combined with its carefully designed secrets management workflow, gives developers a framework that is both flexible and secure. In particular, the non-inheritance rules for bindings and variables, and the separation between local .dev.vars files and remote encrypted secrets, are deliberate designs aimed at preventing common configuration mistakes.

In summary, mastering Wrangler isn’t just about learning a CLI tool’s commands – it’s about understanding and leveraging Cloudflare’s edge computing paradigm. For developers looking to build high-performance, globally distributed serverless applications on the Cloudflare platform, Wrangler is an indispensable and powerful assistant.
