
Mastering podman run: Your Complete Guide to Modern Container Management


At the heart of modern container technology is one simple thing: turning a static container image into a live, running process. For the Podman container engine, this magic happens with the podman run command. Podman is a daemon-less, open-source tool built specifically for Linux that uses Open Container Initiative (OCI) standards to find, run, build, and deploy applications.

The podman run command is basically the Swiss Army knife of container operations - it's the foundation for pretty much everything you'll do with containers. If you've used Docker before, you'll find podman run works very similarly, but the design under the hood is fundamentally different. What makes Podman special? It's more secure and integrates more cleanly with your system. For example, it doesn't need a background daemon running all the time (which eliminates a single point of failure), and regular users can run containers without needing admin privileges - that's a huge security win.

This guide goes way beyond just listing command flags and options. We’re going to dive deep into how podman run works, the architecture behind it, and how to use it in real-world scenarios. Whether you’re a system admin, DevOps engineer, or developer, you’ll learn how to harness the full power of podman run for secure, efficient, and scalable container deployments. We’ll cover everything from basic daily operations to advanced security hardening with SELinux and AppArmor, practical application deployment, and integrating containers as first-class services in modern Linux systems.

Before we get into the advanced stuff, let’s make sure we understand what the podman run command is all about - its purpose, syntax, and basic mechanics. This command is the most feature-rich tool in the Podman toolkit because it handles the entire setup and startup process for new containers, giving you complete control over the runtime environment.

The main job of podman run is to create a new, writable container layer on top of a specified image and run a command inside that container. This creates a process with its own isolated environment - including its own filesystem, network stack, and process tree. While the source image might define default behaviors (like what command to run or which network ports to expose), podman run lets you override these defaults at runtime, giving you fine-grained control over how the container is configured.

Podman and its podman run command work within the OCI-compatible ecosystem. This means it relies on standard container runtimes like runc or crun to talk to the Linux kernel and create running container processes. This commitment to open standards ensures that containers created by Podman are virtually identical to those created by other container engines like Docker or CRI-O, promoting interoperability and preventing vendor lock-in.

The command structure is designed to be flexible, accommodating various configurations through a consistent syntax. Here’s the basic structure:

```bash
podman run [options] image [command [args...]]
```

Each part has its own purpose:

  • podman run: The base command that kicks off the container creation and execution process
  • [options]: A bunch of optional flags that modify how the container is configured and runs. These flags control networking, storage, security, resource limits, and execution modes. Because this command does so much, it supports more options than any other Podman command
  • image: The name of the container image that serves as the blueprint for the new container. This can be a local image or one from a remote registry. Images can be specified using the transport:path format, where docker:// (for remote registries) is the default transport
  • [command [args…]]: An optional command and its arguments to execute inside the container. If provided, this overrides the default CMD instruction specified in the image’s Containerfile or Dockerfile
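
Putting the pieces together, a single invocation can exercise each part of the syntax. This sketch uses the public alpine image, whose default CMD is /bin/sh; the trailing command overrides it:

```bash
# [options]: --rm cleans up the container after it exits
# image: a fully qualified registry path (docker:// transport implied)
# [command [args...]]: overrides the image's default CMD
podman run --rm docker.io/library/alpine echo "CMD overridden"
```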

The podman run command wraps up a series of actions that make up the initial stages of a container’s lifecycle. This automated workflow simplifies deployment by handling multiple steps in a single operation.

First, Podman checks if the specified image exists in local storage. If it’s not found, podman run automatically contacts the container registries configured in the system’s registries.conf file to locate and pull that image and all of its layers. This behavior ensures that necessary components are available without requiring a separate podman pull command.

Once the image is available locally, Podman creates a new container based on it. During this creation process, it sets up the container’s isolated environment. This includes automatically generating several key files inside the container, such as /etc/hosts, /etc/hostname, and /etc/resolv.conf, which are used for network management and are typically based on the host’s configuration. Additionally, a file is created at /run/.containerenv that provides a standard way for processes inside the container to detect whether they’re running in a containerized environment.
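
You can inspect these generated files yourself with a throwaway container (assuming the public alpine image is reachable):

```bash
# List the network files and the container-detection marker Podman created
podman run --rm docker.io/library/alpine \
  ls -l /etc/hosts /etc/hostname /etc/resolv.conf /run/.containerenv
```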

After the environment is configured, podman run executes the specified command inside the new container. The simplest demonstration of the entire workflow is using the hello-world image:

```bash
$ podman run hello-world

Hello from Podman!
This message shows that your installation appears to be working correctly.
```

In this example, Podman performs these steps:

  1. Checks for the hello-world image locally. If not found, pulls it from a public registry (like Docker Hub)
  2. Creates a new container from that image
  3. Runs the executable inside the container, which prints the message to the console
  4. The container then exits since it’s a simple, short-lived task

This sequence shows that the podman run command isn’t just an execution trigger - it’s like a mini orchestrator. It manages dependency resolution (image pulling), environment configuration (network files, environment variables), and process startup in a single atomic operation. This perspective helps explain why the command has such a broad set of options; it’s not just “running” a process, but carefully constructing the isolated world that process will live in. This comprehensive control capability is exactly what makes podman run the foundational tool for all container-based workloads.

While podman run offers tons of options, there’s a core subset that forms the foundation of day-to-day container management. Mastering these basic flags is crucial for performing essential operations like running interactive sessions, managing background services, ensuring data persistence, and configuring container networking and environments.

How you run your container is one of the most basic choices you’ll make as an operator.

For tasks that need direct user interaction, like accessing a shell or debugging applications, interactive mode is essential. This is achieved by combining two flags: -i (or --interactive), which keeps the container’s standard input (STDIN) open; and -t (or --tty), which allocates a pseudo-terminal. This combination allows your terminal to connect directly to the container’s process.

A common use case is starting a bash shell inside an Ubuntu container:

```bash
$ podman run -it ubuntu bash
root@f8d05968b4a2:/#
```

For long-running services like web servers, databases, or APIs, you need to run containers in the background. The -d (or --detach) flag does exactly that, telling Podman to start the container and then detach from the console, freeing up your terminal for other commands. When running in detached mode, podman run prints out the new container’s unique ID. You can check the status of these background containers using the podman ps command.

When a container is running, you can attach to its standard streams using podman attach. For interactive containers started with -it, users can detach from the session using the ctrl-p,ctrl-q key combination without stopping the container. This key combination can be configured through the --detach-keys option.
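
A quick sketch of the attach/detach cycle (the container name session is arbitrary):

```bash
# Start an interactive shell, but detached, so it keeps running in the background
podman run -dit --name session docker.io/library/alpine sh

# Attach to it; press ctrl-p then ctrl-q to detach without stopping the shell
podman attach session

# Alternatively, pick your own detach sequence at attach time
podman attach --detach-keys "ctrl-x" session
```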

Properly managing container identity and lifecycle is key to maintaining a clean, organized host system.

By default, Podman assigns each new container a randomly generated name (like laughing_bob). While functional, these names aren’t memorable. The --name flag lets you assign a human-readable name, which simplifies subsequent management commands like podman stop, podman logs, or podman rm.

```bash
podman run -d --name web_server nginx
```

Many containers are created for ephemeral, temporary tasks, such as running test suites or one-off scripts. To prevent these containers from taking up space on the host after they exit, you should use the --rm flag. This option automatically removes the container’s filesystem immediately after the container’s main process terminates, ensuring no leftovers remain. This is a key best practice for maintaining system hygiene.
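
For example, a one-off task with --rm leaves no trace behind:

```bash
# The container prints the date, exits, and is removed automatically
podman run --rm --name one_off docker.io/library/alpine date

# No exited container remains, so this listing comes back empty
podman ps -a --filter name=one_off
```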

By default, containers are created with their own isolated network stack, meaning they can’t be accessed from the host or external networks. Port mapping is the mechanism used to expose containerized applications.

The -p (or --publish) flag maps a port on the host to a port inside the container, with the format -p hostPort:containerPort. This tells Podman to forward any network traffic reaching the specified hostPort to the container’s containerPort. For example, to run an Nginx web server and make it accessible on the host’s port 8080:

```bash
podman run -d --name web -p 8080:80 nginx
```

Now, you can access the application inside the container by navigating to http://localhost:8080 in a web browser or using tools like curl.

The -P (or --publish-all) flag provides a convenient shortcut that publishes all exposed ports in the container image to random high ports on the host. This is useful for dynamically allocating ports without worrying about conflicts.
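
After publishing with -P, the podman port command reveals which host ports were assigned (the port number shown below is illustrative):

```bash
# Publish every EXPOSEd port in the image to random high ports on the host
podman run -d --name auto_ports -P docker.io/library/nginx

# Show the host-to-container mappings that were chosen
podman port auto_ports
# e.g. 80/tcp -> 0.0.0.0:42815
```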

Container filesystems are ephemeral by nature; any data written inside a container is lost when the container is removed. To make data persist beyond the lifecycle of a single container, you must use volumes or bind mounts.

Bind mounts directly map files or directories from the host filesystem into the container’s filesystem. This is achieved through the -v (or --volume) flag with the syntax -v /path/on/host:/path/in/container. Bind mounts are great for providing source code, configuration files, or other host-side assets to containers.
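
A minimal sketch of a bind mount, serving a file created on the host (the ./site path and container name are arbitrary; on SELinux-enabled hosts you may also need the relabeling options covered later):

```bash
# Create some content on the host
mkdir -p ./site
echo '<h1>hello from a bind mount</h1>' > ./site/index.html

# Mount it read-only at nginx's document root
podman run -d --name static_site -p 8080:80 \
  -v "$PWD/site":/usr/share/nginx/html:ro docker.io/library/nginx
```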

While bind mounts are tied to specific paths on the host, named volumes are storage entities managed directly by Podman. They’re the preferred method for persisting application data (like database files) because they decouple the data from the host’s filesystem structure. To use a named volume, just provide a name instead of a host path:

```bash
podman run -d --name my_database -v db_data:/var/lib/mysql/data mysql
```

If a volume named db_data doesn’t exist, Podman will create it. This volume can then be reused by other containers, facilitating upgrades and data migration.

The -v flag’s mount specification can be suffixed with options that control mount behavior (the --mount flag expresses the same settings as comma-separated key=value pairs). The most common is :ro, indicating read-only access, which prevents the container from modifying the mounted content. Other key options include the SELinux relabeling flags :z and :Z, which are crucial for allowing containers to access host files on SELinux-enabled systems.

You can customize the runtime environment for applications inside containers using several key options.

The -e (or --env) flag sets an environment variable inside the container. This is commonly used to pass configuration parameters, such as database credentials or application modes. For example, configuring a MySQL container:

```bash
podman run -d --name db -e MYSQL_ROOT_PASSWORD=secretpassword -e MYSQL_DATABASE=myapp mysql:5.7
```

For managing lots of variables, you can use the --env-file flag to load them from a simple line-delimited text file.
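
Here is a sketch with a hypothetical app.env file; the variable names mirror the MySQL example above:

```bash
# app.env -- one KEY=value pair per line, no quoting needed
cat > app.env <<'EOF'
MYSQL_ROOT_PASSWORD=secretpassword
MYSQL_DATABASE=myapp
EOF

# Load every variable in the file into the container's environment
podman run -d --name db --env-file app.env mysql:5.7
```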

Entry Point and Commands (--entrypoint, [command])

It’s important to understand the difference between an image’s ENTRYPOINT and CMD. ENTRYPOINT is the main executable that runs when the container starts, while CMD provides default arguments to that executable.

The podman run --entrypoint flag allows operators to override the image’s default ENTRYPOINT. Any arguments provided after the image name in the podman run command will override the image’s default CMD.

The -w (or --workdir) flag sets the working directory where commands will be executed inside the container. This is useful for applications that expect to run from a specific location in the filesystem.
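
Two quick illustrations using the public nginx image (which is Debian-based, so /bin/ls exists inside it):

```bash
# Override ENTRYPOINT: run ls instead of starting nginx; everything after
# the image name becomes arguments to the new entrypoint
podman run --rm --entrypoint /bin/ls docker.io/library/nginx -l /etc/nginx

# Set the working directory, then run a command relative to it
podman run --rm -w /usr/share/nginx/html docker.io/library/nginx ls
```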

To solidify these fundamentals, here’s a handy reference table for these basic flags:

| Flag | Purpose | Example |
|------|---------|---------|
| -d, --detach | Run container in background (detached mode) | podman run -d nginx |
| -it | Create an interactive session with a pseudo-TTY | podman run -it ubuntu bash |
| --name | Assign a custom name to the container | podman run --name my-db postgres |
| --rm | Automatically remove container when it exits | podman run --rm hello-world |
| -p, --publish | Map host port to container port (host:container) | podman run -p 8080:80 httpd |
| -v, --volume | Mount host path or named volume into container | podman run -v ./config:/etc/app:ro my-app |
| -e, --env | Set environment variable inside container | podman run -e APP_MODE=production my-app |

Understanding the podman run command isn’t just about memorizing its options; it requires a deep understanding of the architectural philosophy that sets Podman apart from other container engines, especially Docker. Podman’s daemon-less design isn’t just a technical implementation detail - it’s the foundational principle behind its major advantages in security, auditing, and system integration.

The most significant architectural difference between Podman and Docker lies in the presence or absence of a central daemon.

Docker uses a client-server architecture. When users execute commands like docker run, the docker CLI client doesn’t actually run the containers itself. Instead, it sends REST API requests over a Unix socket to a long-running background process called the Docker daemon (dockerd). This daemon typically runs with root privileges and is responsible for managing the entire container lifecycle: pulling images, creating and starting containers, managing networks, and handling storage. The daemon is a single, monolithic control point and a potential single point of failure.

Podman fundamentally rejects this model. It operates without a persistent, privileged daemon. When users execute podman run, the command directly interacts with kernel APIs and OCI runtimes (runc) to create containers. Container processes become direct children of the podman command that started them, following the traditional Unix process fork/exec model. This eliminates the intermediate daemon, simplifies the architecture, and establishes a more direct relationship between users, commands, and containers.

This architectural divergence has profound implications for how containers are managed and secured.

The absence of a root-owned daemon is Podman’s biggest security advantage. In the Docker model, the daemon’s root privileges represent a massive attack surface; if an attacker can compromise the daemon, they effectively gain root control over the host system. Because Podman doesn’t have such a daemon, this entire class of vulnerabilities is eliminated. In rootless Podman environments, container escapes are limited to the restricted privileges of the user who executed the podman run command, dramatically reducing the potential for system-wide damage.

The daemon-less model provides clearer audit trails. On Linux systems using audit daemons (auditd), operations performed by Docker containers are logged as originating from the dockerd process rather than the user who started the container. This makes it extremely difficult to trace potentially malicious activity back to specific user accounts. With Podman, since containers are direct children of user commands, auditd correctly attributes all operations to the user who invoked podman run, ensuring accountability and simplifying forensic analysis.

By abandoning proprietary daemons for lifecycle management, Podman can integrate directly with standard, robust Linux system components. Most importantly, this includes systemd, the universal service manager for modern Linux distributions. Rather than relying on internal mechanisms to handle container restarts and background services, Podman can generate systemd service units. This allows containers to be managed like any other native system service, using familiar commands like systemctl start, systemctl enable, and systemctl status. This approach treats containers as first-class citizens of the operating system rather than objects managed by a separate, monolithic application.
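
As a sketch, podman generate systemd can emit a ready-made unit for an existing container (newer Podman releases recommend Quadlet files for the same purpose, but the generated-unit workflow looks like this):

```bash
# Generate a user-level unit that recreates the container on each start
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name web_server \
  > ~/.config/systemd/user/container-web_server.service

# Manage the container like any other systemd service
systemctl --user daemon-reload
systemctl --user enable --now container-web_server.service
systemctl --user status container-web_server.service
```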

While not always a primary consideration, Podman’s direct management model can result in faster container startup times. By eliminating the API call overhead between client and daemon, containers can sometimes be instantiated more quickly.

A common question about the daemon-less model is how detached containers (-d) persist after the initial podman run command exits. The answer lies in a small, lightweight utility called conmon (container monitor).

When podman run is executed, it doesn’t directly start the OCI runtime (runc). Instead, it starts a dedicated conmon process for the container. This conmon process then calls runc to create the container and subsequently acts as the container’s parent process.

conmon’s responsibilities include:

  • Monitoring the container’s main process and capturing its exit code
  • Handling the container’s logging by relaying its standard output and error streams
  • Managing pseudo-terminals (TTY) for interactive sessions, enabling podman attach to work
  • Responding to commands like podman kill by sending appropriate signals to the container process

By delegating these monitoring tasks to minimal, per-container conmon processes, the main podman CLI can exit, enabling detached operation without requiring a single, heavy, centralized daemon. This approach clearly reflects Unix philosophy: using small, specialized tools that do one thing well, in contrast to Docker’s monolithic daemon that handles all these tasks for all containers. This modularity is a key driver of Podman’s design, shifting the paradigm from a single, all-encompassing tool to a suite of interoperable components (like Podman, Buildah, and Skopeo) that integrate natively with the host operating system.
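
You can verify this parentage yourself on the host. The sketch below assumes a freshly started container and a standard procps-style ps:

```bash
# Start a detached container that just sleeps
podman run -d --name ptree docker.io/library/alpine sleep 300

# Find the container's main process on the host...
pid=$(podman inspect --format '{{.State.Pid}}' ptree)

# ...and show that its parent is conmon, not a central daemon
ps -o pid,ppid,comm -p "$pid"
ps -o pid,comm -p "$(ps -o ppid= -p "$pid")"
```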

The following table summarizes the key differences between podman run and docker run workflows driven by their core architectural differences:

| Feature | podman run | docker run |
|---------|------------|------------|
| Architecture | Daemon-less: the podman CLI directly forks container processes using conmon and an OCI runtime. | Client-server: the docker CLI communicates with the central dockerd daemon via a REST API. |
| Default user | Rootless by default; containers run with the current non-privileged user's permissions. | Rooted by default; the dockerd daemon and its container processes run as root. |
| Security model | Smaller attack surface; container escapes are limited to the host user's privileges, and audit trails clearly identify users. | The privileged daemon is a massive attack surface and single point of failure; audit trails point to the daemon rather than to users. |
| Service management | Native systemd integration, managing background containers as system services. | All service management handled internally by the daemon (e.g., restart policies). |
| Pods | Native pod concept for grouping containers that share resources, mimicking the Kubernetes model. | No native pod concept; requires external tools like Docker Compose or orchestrators like Swarm. |

Podman’s most acclaimed and impactful security feature is its first-class support for rootless containers. This capability allows a standard, non-privileged user to create, run, and manage a complete container ecosystem without sudo or any elevated privileges on the host system. This isn’t just a convenience; it represents a fundamental shift in container security posture, moving from a paradigm of restricting privileges to one of operating without privileges from the start.

Rootless containers are containers where both the container engine (Podman) and the container processes execute without root privileges. The primary motivation for this design is mitigating container escape vulnerabilities. In traditional, rooted container environments, if a malicious process successfully escapes container restrictions, it immediately gains root access on the host system, leading to complete system compromise. With rootless containers, the “blast radius” of such escapes is dramatically reduced. An escaped process will only have the limited privileges of the non-privileged user who started the container, preventing it from modifying critical system files, accessing other users’ data, or installing malware system-wide.

The core technology enabling rootless containers is Linux user namespaces. User namespaces create a mapping between a range of user IDs (UIDs) and group IDs (GIDs) on the host and a separate set of UIDs and GIDs within the namespace. This mapping is configured in two system files: /etc/subuid and /etc/subgid. These files specify a starting subordinate ID and the number of available IDs that each user is allowed to use in namespaces they create.

For example, an entry testuser:100000:65536 in /etc/subuid grants user testuser a pool of 65536 UIDs starting from UID 100000 on the host.

When testuser runs a rootless container, Podman uses this mapping to create the user namespace:

  • The user’s UID on the host (e.g., testuser’s UID 1001) is mapped to UID 0 (root) within the container namespace
  • UID 1 inside the container is mapped to UID 100000 on the host
  • UID 2 inside the container is mapped to UID 100001 on the host, and so on

This clever remapping means that processes running as root inside the container are actually running as the non-privileged testuser on the host. This can be verified by checking the process list on the host. This mechanism provides the illusion of root privileges inside the container for software that needs it, while never granting actual root privileges on the host system.
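
podman top can display both views at once; USER is the identity inside the container, while HUSER is the real UID on the host. For the testuser example above, the output would resemble the (illustrative) comment below:

```bash
# A rootless container whose main process runs as "root"
podman run -d --name idcheck docker.io/library/alpine sleep 300

# Compare the container-side and host-side user IDs
podman top idcheck user huser
# USER   HUSER
# root   1001    <- container root is really the unprivileged host user
```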

The newuidmap and newgidmap utilities, typically provided by the uidmap or shadow-utils packages, are required for Podman to perform this mapping operation.

The rootless model delivers significant benefits:

  • Significantly Enhanced Security: As mentioned, this is the primary benefit, preventing privilege escalation attacks
  • Multi-user Isolation: Rootless containers allow multiple non-privileged users to run containers simultaneously on the same machine without interfering with each other. Each user’s containers and images are stored in their own home directory (typically $HOME/.local/share/containers/storage), providing natural isolation. This is especially valuable in high-performance computing (HPC) and shared development environments
  • Developer Flexibility: It enables developers who don’t have system root access to use containers in their environment for building and testing applications

At the same time, it introduces some practical limitations that users must be aware of:

  • Privileged Ports: By kernel design, non-privileged processes cannot bind to network ports below 1024. This means rootless containers cannot expose services on standard ports like 80 (HTTP) or 443 (HTTPS) by default. A common workaround is modifying a system-wide kernel parameter: sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80. However, it’s important to understand that this change allows all non-privileged applications on the system (not just Podman) to bind to these ports, which may have broader security implications
  • Host Filesystem Access: Rootless containers are constrained by the standard Linux permissions of the user who started them. They cannot read or write to any directories on the host that the user themselves cannot access
  • Network Performance: Historically, rootless networking relied on slirp4netns, a userspace networking implementation that introduces performance overhead compared to the kernel-level networking used by rooted containers. Modern Podman installations default to pasta, which significantly improves performance, but understanding this distinction is important for troubleshooting older setups or network issues
  • Resource Controls (cgroups): On systems using legacy cgroups v1, the ability to delegate resource controls is limited, meaning resource constraints like --memory or --cpus may not be fully enforced for rootless containers. Full support for rootless resource management requires a system running cgroups v2
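
Two quick checks help confirm whether these limitations apply to a given host:

```bash
# cgroups v2 is required for full rootless resource limits;
# podman info reports a cgroupVersion field in its output
podman info | grep -i cgroupversion

# Show the kernel's privileged-port threshold (1024 by default)
sysctl net.ipv4.ip_unprivileged_port_start
```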

This tutorial demonstrates the principles and limitations of rootless container execution.

First, ensure the current user has entries in /etc/subuid and /etc/subgid. If not, they must be added by a system administrator.

```bash
grep $(whoami) /etc/subuid /etc/subgid
```

As a non-root user, run an Nginx container, mapping its internal port 80 to a high port (>= 1024) on the host. This should succeed without any special configuration.

```bash
$ podman run -d --name rootless_web -p 8080:80 docker.io/library/nginx
# Verify it's running
$ podman ps
# Test connectivity
$ curl http://localhost:8080
```

This demonstrates that basic rootless operation is straightforward.

Now, try running the same container but mapping to the standard HTTP port 80. This will fail with a permission error.

```bash
$ podman run -d --name failed_web -p 80:80 docker.io/library/nginx
Error: rootlessport cannot expose privileged port 80, you can add 'net.ipv4.ip_unprivileged_port_start=80' to /etc/sysctl.conf
```

This error directly illustrates the privileged port limitation.

To allow binding to port 80, a privileged user must modify the system’s sysctl configuration.

```bash
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```

With the workaround applied, the non-root user can now successfully run containers on port 80.

```bash
$ podman run -d --name successful_web -p 80:80 docker.io/library/nginx
# Verify it's running and accessible on port 80
$ curl http://localhost:80
```

This hands-on exercise emphasizes that while rootless operation is the preferred and default secure mode, interactions with privileged system resources still require deliberate, often system-wide configuration changes.

The podman run command is far more than a simple instruction for executing containers; it’s the heart of a sophisticated, secure, and system-integrated approach to containerization. By going beyond its surface-level command syntax and embracing its underlying architecture and integration tools, operators can build, deploy, and manage containerized applications with a level of security, stability, and system cohesion that sets new standards for modern data centers and cloud-native environments.

Podman’s daemon-less and rootless-first architecture, deep integration with mandatory access control frameworks, and native integration with systemd and Kubernetes make it a powerful tool for modern container management. Mastering the podman run command and its rich set of options provides a solid foundation for secure, efficient, and scalable container deployment.

Want to dive deeper into Podman’s advanced features and configuration options? The official Podman documentation and the podman-run man page (man podman-run) provide the latest feature introductions, detailed configuration instructions, and pointers to community support, helping you fully leverage Podman’s powerful capabilities in production environments.
