Tao

Complete Guide to Installing and Configuring Docker Engine on Cinnamon-based Linux Systems

To make the installation process easier, the table below maps common Cinnamon-based distributions to the key identifiers needed to set up the official Docker apt repository. The VERSION_CODENAME is especially important for Linux Mint users, who must use the codename of their underlying Ubuntu base.

Distribution | Example Version | Base OS | Base Version | Required VERSION_CODENAME
Linux Mint | 22 “Wilma” | Ubuntu | 24.04 LTS | noble
Linux Mint | 21.3 “Virginia” | Ubuntu | 22.04 LTS | jammy
Ubuntu Cinnamon | 24.04 LTS | Ubuntu | 24.04 LTS | noble
Debian GNU/Linux | 12 “Bookworm” | Debian | 12 | bookworm
LMDE (Linux Mint Debian Edition) | 6 “Faye” | Debian | 12 | bookworm
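
To see which row applies to your machine, you can read the relevant identifiers straight from /etc/os-release (UBUNTU_CODENAME appears on Ubuntu and Linux Mint, but not on plain Debian):

bash

grep -E '^(ID|VERSION_CODENAME|UBUNTU_CODENAME)=' /etc/os-release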

Before installing Docker, you need to complete some basic system steps, including updating packages, installing necessary tools, and removing any software that might conflict.

Before installing new software, it’s crucial to sync your local package index with the central repositories and upgrade all installed packages to their latest versions. This ensures all system dependencies are up to date and helps prevent potential conflicts.

Run this command:

bash

sudo apt update && sudo apt upgrade -y

Next, install ca-certificates and curl, which are needed to download Docker’s GPG key over HTTPS:

bash

sudo apt install ca-certificates curl

Linux distributions sometimes include their own unofficial Docker packages in their default repositories (like docker.io). These packages may be outdated or configured differently from the official Docker version, and can cause conflicts during installation or updates. To ensure a clean installation from official sources, you must first remove any of these conflicting packages.

The packages that need to be removed include docker.io, docker-doc, docker-compose, and podman-docker, as well as older Docker components like containerd and runc (if they were installed separately). The official Docker documentation provides a robust command that attempts to remove all known conflicting packages. It’s safe to run this command even if some or all packages aren’t currently installed on your system, since apt-get will simply report they weren’t found.

Run this cleanup command:

bash

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

Note: This command won’t remove any existing images, containers, volumes, or networks stored in the /var/lib/docker/ directory. If you want to start completely fresh, you’ll need to manually delete this directory after uninstalling the packages.
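
If you do want a completely clean slate, the data directories can be removed after uninstalling the packages. Be aware that this permanently deletes all images, containers, and volumes:

bash

sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd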

While Docker can be installed in several ways, the officially recommended method for any Debian-based system (including Ubuntu and Linux Mint) is using Docker’s official apt repository.

Your choice of installation method has a major impact on your system’s long-term security and maintainability. Installing Docker Engine through its official repository is the best approach for these reasons:

  • Lifecycle Management: This method integrates Docker into your system’s native package manager, apt. Docker Engine updates, including critical security patches and new features, then arrive through standard system update commands (like sudo apt upgrade). This automates maintenance and keeps your Docker installation current and secure over time.

  • Trust and Authenticity: The repository is secured with GPG keys that apt uses to verify the cryptographic signature of every downloaded package. This guarantees the software truly comes from Docker and hasn’t been tampered with or corrupted.

  • Stability and Reliability: In contrast, alternative methods like convenience scripts or manual package installation bypass your system’s package manager. This puts the entire burden of tracking new versions, checking for security vulnerabilities, and performing manual upgrades on the system administrator. This manual process is error-prone and easily overlooked, potentially leaving your system running outdated and insecure Docker versions.

For these reasons, using the official apt repository is the only professionally recommended method for installing Docker on stable development, testing, or production systems.

The following steps provide a complete process for setting up the Docker repository and installing Docker Engine.

This first step establishes trust between your local system and the remote Docker repository. It involves downloading Docker’s official GNU Privacy Guard (GPG) key and storing it in a location where the apt system can access it for package verification.

bash

# Create the directory for APT keyrings with the proper permissions (safe to run even if it already exists).
sudo install -m 0755 -d /etc/apt/keyrings

# Use curl to download Docker's official GPG key and save it to the new directory.
# (On Debian and LMDE, Docker's documentation uses https://download.docker.com/linux/debian/gpg instead.)
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc

# Make the key file world-readable so apt can use it for package verification.
sudo chmod a+r /etc/apt/keyrings/docker.asc

This step registers the Docker repository with the apt package manager by creating a new sources list file. This file tells apt where to look for official Docker packages. The command is built dynamically to ensure it works for your system’s specific architecture and operating system version.

Important: This is the most critical step and requires special attention, especially for Linux Mint users.

For systems running Debian, Ubuntu, or official Ubuntu flavors (like Ubuntu Cinnamon), you can use the following command. It automatically detects the distribution ID (ubuntu or debian) and its codename (noble, jammy, bookworm, etc.).

bash

# This command dynamically builds the repository source file.
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/$(. /etc/os-release && echo "$ID") \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

For Linux Mint users, automatic detection of VERSION_CODENAME will incorrectly pick up Mint’s own codename (such as virginia) instead of the required Ubuntu base codename, and apt update will then fail because no such distribution exists in Docker’s repository. You must manually substitute the Ubuntu base codename; refer to the mapping table at the beginning of this guide for the correct value.
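
If you would rather look the value up on the system itself, Linux Mint records its Ubuntu base codename in /etc/os-release as UBUNTU_CODENAME, so you can print the codename to substitute with:

bash

(. /etc/os-release && echo "$UBUNTU_CODENAME")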

For example, for Linux Mint 21.x (based on Ubuntu 22.04 LTS “Jammy Jellyfish”):

bash

# Manually specify 'jammy' as the Ubuntu base codename.
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  jammy stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

After adding the repository, you must update the local package index again so apt knows about the newly available Docker packages.

bash

sudo apt update

Finally, install the latest stable version of Docker Engine and its related components. This command installs several packages:

  • docker-ce: Docker Community Edition, the core daemon.
  • docker-ce-cli: The command-line interface client.
  • containerd.io: A standalone, industry-standard container runtime that manages container lifecycles.
  • docker-buildx-plugin: An extension that enables advanced build features through the BuildKit backend.
  • docker-compose-plugin: Integrates Docker Compose functionality directly into the Docker CLI, allowing use of docker compose commands.

Run the installation command:

bash

sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
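
After the installation completes, you can quickly confirm that the Buildx and Compose plugins were registered with the Docker CLI (the exact version numbers will vary):

bash

sudo docker buildx version
sudo docker compose version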

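For convenience, the entire procedure above is also consolidated into the following installation script. It runs the same sequence (cleanup, dependencies, repository setup, installation) and prints the recommended post-installation steps when it finishes. It targets the Ubuntu package repository, so it is intended for Ubuntu and Ubuntu-based systems such as Linux Mint.
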
sh

#!/bin/bash

# ======================================================================================
#
#                            Docker Engine Installation Script
#
#   For Ubuntu and Ubuntu-based systems (Ubuntu, Linux Mint).
#   Debian/LMDE users should replace the 'linux/ubuntu' URLs with 'linux/debian'.
#   Author: Gemini AI
#   Features:
#       1. Remove old or conflicting Docker packages.
#       2. Update system and install necessary dependencies.
#       3. Add Docker official GPG key and APT repository.
#       4. Install latest Docker Engine, containerd, and Docker Compose.
#       5. Display next configuration steps.
#
# ======================================================================================

# Use 'set -e' to ensure the script exits immediately if any command fails.
set -e

echo "--- [Step 1/5] Starting pre-installation system cleanup ---"

# Remove any existing old or conflicting Docker packages
# Use 'sudo' to ensure sufficient permissions
# '|| true' ensures the script doesn't stop due to errors if these packages aren't found
sudo apt-get remove -y docker docker-engine docker.io containerd runc docker-ce docker-ce-cli || true
echo "Old Docker versions cleaned up."
echo ""

echo "--- [Step 2/5] Update package lists and install dependencies ---"
# Update apt package index
sudo apt-get update

# Install necessary packages to allow apt to use repositories over HTTPS
sudo apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
echo "Dependencies installed."
echo ""

echo "--- [Step 3/5] Add Docker official GPG key and repository ---"

# Create directory for storing GPG keys
sudo install -m 0755 -d /etc/apt/keyrings

# Download Docker official GPG key
# Use curl to download and save to specified directory
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg # Ensure key file is readable

# Set up Docker's APT repository
# Prefer UBUNTU_CODENAME (set on Ubuntu and Linux Mint) so the Ubuntu base codename is used;
# fall back to VERSION_CODENAME when it is not defined
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
echo "Docker APT repository successfully added."
echo ""

echo "--- [Step 4/5] Install Docker Engine ---"
# Update apt package index again to include packages from the new Docker repository
sudo apt-get update

# Install latest version of Docker Engine, containerd, and Docker Compose
# 'docker-ce' is Docker Community Edition
# 'docker-ce-cli' is the Docker command-line tool
# 'containerd.io' is a container runtime
# 'docker-buildx-plugin' and 'docker-compose-plugin' are plugins for building and orchestration
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
echo "Docker Engine installed successfully!"
echo ""

echo "--- [Step 5/5] Display important post-installation notes ---"
echo ""
echo "✅ Docker has been successfully installed!"
echo ""
echo "Important next steps:"
echo "To avoid having to type 'sudo' every time you use docker commands, run the following command to add your user to the 'docker' group:"
echo ""
echo "   sudo usermod -aG docker \$USER"
echo ""
echo "After running the above command, you need to completely log out and log back in for the group changes to take effect."
echo "After logging back in, you can verify the installation was successful and that you don't need 'sudo' by running:"
echo ""
echo "   docker run hello-world"
echo ""
echo "======================================================================================"

Docker Installation Verification and Post-Installation Configuration on Cinnamon

To create a fully functional, secure, and convenient development environment, you need to perform several post-installation tasks covering installation verification, enabling non-root user access, and managing Docker services.

The first verification is checking that the Docker service (daemon) is properly installed and running. The systemctl status command provides detailed information about the service status.

bash

sudo systemctl status docker

The output should show the service as active (running).
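
As a further check, you can ask both the Docker client and the daemon to report their versions; if the Server section is missing from the output, the daemon isn’t reachable:

bash

sudo docker version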

The final test is running the hello-world container. This simple command performs a comprehensive end-to-end test of the entire Docker stack. It instructs the Docker CLI to contact the daemon, which then pulls the lightweight hello-world image from Docker Hub, creates a new container from that image, runs it, and streams its output to the terminal. Successful execution confirms all components are working together properly.

bash

sudo docker run hello-world

If installation was successful, you’ll see a confirmation message starting with “Hello from Docker!”

By default, after installation all docker commands must be prefixed with sudo. This is a security feature because the Docker daemon’s control socket is located at /var/run/docker.sock and is owned by the root user. Attempting to run docker commands as a regular user will result in “permission denied” errors.
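
You can see this permission model directly by inspecting the socket itself; it is typically owned by root with group docker and mode 660:

bash

ls -l /var/run/docker.sock
# Typical output: srw-rw---- 1 root docker 0 ... /var/run/docker.sock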

While this default behavior is secure, it’s inconvenient for development, can interfere with IDE and tool integration, and encourages unnecessarily running commands with elevated privileges. The standard and secure solution to this problem is adding your user account to the docker group, which is automatically created during installation. Members of this group are granted access to the Docker socket. For any developer, this step should be considered an essential part of the setup process.

  1. Add the current user to the docker group. The ${USER} environment variable automatically resolves to the currently logged-in username.

bash

sudo usermod -aG docker ${USER}

  2. Apply the new group membership. Changes to group membership don’t take effect in the current terminal session. The user must either log out and log back in, or use the newgrp command to start a new shell with updated group permissions.

bash

newgrp docker

  3. Verify non-root access. After applying the group changes, test that docker commands can run without sudo.

bash

docker run hello-world

This should now execute successfully, confirming the user has the necessary permissions.
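
You can also check that the docker group now appears in your user’s group list:

bash

id -nG | grep -w docker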

On modern Linux distributions like Debian and Ubuntu, the systemd init system is responsible for managing background services (daemons). The systemctl command is the primary tool for interacting with systemd to control these services. On Debian and its derivatives, the Docker service is automatically configured to start at system boot. The following commands are essential for managing the Docker daemon’s lifecycle.

This table provides a quick reference for the most commonly used systemctl commands for managing the Docker service.

Command | Function
sudo systemctl status docker | Check the current status of the Docker service (active, inactive, or failed), along with recent log entries.
sudo systemctl start docker | Start the Docker service if it’s currently stopped.
sudo systemctl stop docker | Stop the Docker service. By default this also stops running containers, unless the daemon is configured for live restore.
sudo systemctl restart docker | Stop and then immediately start the service. Useful for applying configuration changes made to the daemon.
sudo systemctl enable docker | Configure the service to start automatically at each system boot (the default behavior on Debian/Ubuntu).
sudo systemctl disable docker | Prevent the service from starting automatically at system boot. The service can still be started manually.
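
To illustrate why restart matters: daemon-level settings live in /etc/docker/daemon.json and only take effect after the service is restarted. The log-rotation settings below are just one example of such a configuration (note that this overwrites any existing daemon.json; adjust the values to your needs):

bash

# Write an example daemon configuration: rotate container logs at 10 MB, keep 3 files
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# Restart the service so the new configuration is applied
sudo systemctl restart docker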

While the official repository method is strongly recommended, Docker also provides alternative installation methods for specific scenarios. It’s valuable to understand these alternatives, their intended purposes, and their significant drawbacks.

Docker provides a utility script that automates the installation process. It can be downloaded and executed with a single piped command.

bash

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Use Case: This script is designed for quick testing and development environments where the primary goal is getting Docker running as fast as possible with minimal user interaction.

Warning: The official documentation explicitly advises against using convenience scripts in production environments. The script makes assumptions about your system and runs with root privileges without giving users a chance to review what it will do. Most importantly, it bypasses your system’s package manager, meaning it doesn’t create repository source files. As a result, the Docker installation won’t update with standard system updates, and all future upgrades will require manual intervention.
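
If you do use the convenience script, Docker documents a dry-run mode that prints the steps the script would execute without changing anything, which partially addresses the review concern:

bash

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh --dry-run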

This method involves manually navigating to the Docker package repository (download.docker.com), downloading individual .deb package files for each component (containerd.io, docker-ce-cli, docker-ce, etc.), and then installing them using the dpkg command.

bash

# Example command after downloading the necessary .deb files
sudo dpkg -i ./containerd.io_<version>_<arch>.deb \
./docker-ce-cli_<version>_<arch>.deb \
./docker-ce_<version>_<arch>.deb \
./docker-buildx-plugin_<version>_<arch>.deb \
./docker-compose-plugin_<version>_<arch>.deb

Use Case: The primary and legitimate use case for this method is installing Docker on air-gapped systems (computers with no direct or indirect internet access). It can also be used to install a specific older version of Docker that’s no longer available in active repositories.

Drawbacks: This is the most labor-intensive and error-prone method. dpkg does not resolve dependencies for you, so you must download a compatible set of packages yourself, and every future upgrade requires repeating the whole download-and-install cycle by hand.

  1. Identify the Base Operating System: The first and most important step is recognizing that the installation process is governed by the underlying Linux distribution (like Ubuntu, Debian) rather than the Cinnamon desktop environment. Using lsb_release -a is essential.

  2. Use the Official Repository: For security, stability, and long-term maintainability reasons, the only recommended installation method for production or stable development systems is through Docker’s official apt repository.

  3. Handle Codenames Correctly: Using the correct VERSION_CODENAME when setting up the repository is crucial. This is especially true for Linux Mint users, who must manually substitute their underlying Ubuntu base codename.

  4. Configure Non-Root Access: The post-installation step of adding the current user to the docker group should be considered mandatory configuration. It enables seamless and secure development workflows without constantly using sudo.

  5. Prefer Docker Engine: For most development, scripting, and server-based tasks on Linux, native Docker Engine is the preferred choice over Docker Desktop due to its performance, efficiency, and direct system integration.
