# Beginner’s Guide to Docker and Containers

*Learn what Docker is, why containers matter, and how to get started with containerized development in 2025.*
## Introduction
In today’s fast-paced software development world, containers have revolutionized how we build, ship, and run applications. If you’re new to the concept of containers and Docker specifically, this guide will walk you through everything you need to know to get started.
## Table of Contents
- What Are Containers?
- Docker Fundamentals
- Key Benefits of Using Docker
- Setting Up Docker
- Essential Docker Commands
- Creating Your First Dockerfile
- Building and Running Images
- Docker Compose for Multi-Container Applications
- Docker Best Practices
- Common Issues and Troubleshooting
- Next Steps in Your Docker Journey
## What Are Containers?
Containers are lightweight, standalone, executable software packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings. Unlike virtual machines, containers share the host system’s OS kernel but run in isolated user spaces.
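You can see the kernel sharing for yourself. As a quick sketch (assuming Docker is already installed), compare the kernel version on the host with the one reported inside a minimal Alpine container:

```bash
# Kernel release on the host
uname -r

# Kernel release inside a throwaway Alpine container
# (--rm removes the container after it exits)
docker run --rm alpine uname -r
```

On a Linux host both commands print the same kernel release, because the container shares the host kernel instead of booting its own. (On macOS or Windows, Docker Desktop runs containers inside a lightweight Linux VM, so the second value reflects that VM.)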
### Containers vs. Virtual Machines

| Containers | Virtual Machines |
|---|---|
| Share host OS kernel | Run complete OS copy |
| Lightweight (MBs) | Heavyweight (GBs) |
| Seconds to start | Minutes to start |
| Less isolated | More isolated |
| Less resource intensive | More resource intensive |
Containers provide a consistent environment across development, testing, and production, eliminating the infamous “it works on my machine” problem that has plagued developers for decades.
## Docker Fundamentals
Docker is the most popular containerization platform. It makes creating, deploying, and running applications in containers straightforward.
### Core Docker Concepts
- Docker Engine: The runtime that enables building and running containers
- Docker Image: A read-only template with instructions for creating a container
- Docker Container: A runnable instance of an image
- Docker Registry: A repository for storing and distributing Docker images
- Dockerfile: A text file with instructions to build a Docker image
- Docker Compose: A tool for defining and running multi-container applications
### Docker Architecture
Docker uses a client-server architecture:
- Docker Client: The primary way users interact with Docker
- Docker Host: Runs the Docker daemon (dockerd)
- Docker Registry: Stores Docker images (Docker Hub is the public registry)
The flow generally works like this: You use the Docker client to issue commands, the Docker daemon builds, runs, and distributes your containers, and the registry stores your images.
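To make that flow concrete, here is a hedged sketch of what happens when you run a public image (using `nginx` from Docker Hub purely as an example):

```bash
# The client sends this command to the Docker daemon (dockerd)
docker run -d -p 8080:80 nginx

# Behind the scenes:
#  1. the daemon looks for the nginx image locally
#  2. if it's missing, the daemon pulls it from the registry (Docker Hub)
#  3. the daemon creates and starts a container from the image

# The client then queries the daemon to see the result
docker ps
```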
## Key Benefits of Using Docker
- Consistency: The same container runs identically across all environments
- Isolation: Applications and dependencies are packaged together without interference
- Portability: Containers run anywhere Docker is installed
- Efficiency: Containers share OS resources without the overhead of virtual machines
- Scalability: Easily scale applications horizontally by spinning up more containers (see the sketch after this list)
- Version Control: Track changes to your container images similar to code
- Rapid Deployment: Start, stop, and restart applications in seconds
- Simplified Updates: Replace existing containers with new ones instead of updating in place
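As a minimal sketch of that horizontal scaling (assuming a hypothetical `myapp:1.0` web image that listens on port 3000), you can simply start more containers from the same image:

```bash
# Three identical instances, each mapped to a different host port
docker run -d --name myapp-1 -p 8081:3000 myapp:1.0
docker run -d --name myapp-2 -p 8082:3000 myapp:1.0
docker run -d --name myapp-3 -p 8083:3000 myapp:1.0

# A load balancer (or an orchestrator such as Kubernetes)
# would then spread traffic across the instances
```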
## Setting Up Docker

### Installing Docker
**For Windows:**
- Download Docker Desktop for Windows
- Follow the installation wizard
- Enable WSL 2 feature if prompted
**For macOS:**
- Download Docker Desktop for Mac
- Drag Docker to your Applications folder
- Open Docker (it will ask for privileges)
**For Linux (Ubuntu):**

```bash
# Update the package index
sudo apt update

# Install prerequisites
sudo apt install apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key (the old apt-key command is deprecated,
# so store the key in a dedicated keyring instead)
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the stable repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

# Add your user to the docker group to avoid using sudo
# (log out and back in for the change to take effect)
sudo usermod -aG docker $USER
```
### Verifying Installation

After installation, open a terminal or command prompt and run:

```bash
docker --version
docker run hello-world
```
If Docker is properly installed, you’ll see the version number and a message from the “hello-world” container.
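If the commands hang or error out instead, the daemon may not be running. Two hedged checks (the `systemctl` lines apply to Linux installs; Docker Desktop manages the daemon itself):

```bash
# Summarizes the installation and fails fast if the daemon is unreachable
docker info

# On Linux, inspect and start the daemon via systemd
sudo systemctl status docker
sudo systemctl start docker
```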
## Essential Docker Commands

Here are some fundamental Docker commands to get you started; a sample session at the end of this section ties them together:
### Image Commands

- `docker images`: List all local images
- `docker pull <image>`: Download an image from a registry
- `docker rmi <image>`: Remove a local image
- `docker build -t <name:tag> .`: Build an image from a Dockerfile
### Container Commands

- `docker ps`: List running containers
- `docker ps -a`: List all containers (running and stopped)
- `docker run <image>`: Create and start a container
- `docker start <container>`: Start a stopped container
- `docker stop <container>`: Stop a running container
- `docker rm <container>`: Remove a container
- `docker logs <container>`: View container logs
- `docker exec -it <container> <command>`: Run a command inside a running container
### System Commands

- `docker system df`: Show Docker disk usage
- `docker system prune`: Remove unused data (stopped containers, unused networks, dangling images)
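Here is the promised sample session tying these commands together (using the public `nginx` image purely as an example):

```bash
# Download an image and confirm it's available locally
docker pull nginx
docker images

# Start a container from it, named for easy reference
docker run -d --name web nginx

# See what's running and what it's logging
docker ps
docker logs web

# Open a shell inside the running container (type "exit" to leave)
docker exec -it web /bin/sh

# Stop and remove the container, then reclaim disk space
docker stop web
docker rm web
docker system prune
```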
## Creating Your First Dockerfile
A Dockerfile is a text document containing instructions to build a Docker image. Here’s a simple example for a Node.js application:
```dockerfile
# Use an official Node.js runtime as the base image
FROM node:18-alpine

# Set the working directory in the container
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the application
CMD ["npm", "start"]
```
### Dockerfile Instructions Explained

- `FROM`: Specifies the base image to use
- `WORKDIR`: Sets the working directory for subsequent instructions
- `COPY`: Copies files from the host into the image
- `RUN`: Executes commands during the image build
- `EXPOSE`: Documents which ports the container listens on
- `CMD`: Defines the default command to run when the container starts (see the note on exec vs. shell form below)
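One detail worth knowing about `CMD`: the JSON-array syntax used above is the *exec form*, which runs the command directly as the container's main process; the plain-string *shell form* wraps the command in `/bin/sh -c`. The exec form is generally preferred because stop signals such as SIGTERM reach your process directly. A small comparison (a real Dockerfile keeps only one `CMD`; both are shown here side by side):

```dockerfile
# Exec form (preferred): npm is PID 1 and receives signals directly
CMD ["npm", "start"]

# Shell form: actually runs /bin/sh -c "npm start",
# so signals go to the shell rather than to npm
CMD npm start
```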
## Building and Running Images

### Building an Image
To build an image from your Dockerfile:
```bash
# Navigate to the directory containing your Dockerfile
cd /path/to/your/app

# Build the image
docker build -t myapp:1.0 .
```

The `-t` flag tags your image with a name and an optional version.
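You can pass `-t` more than once to apply several tags in a single build, then confirm the result (the tag names here are just examples):

```bash
# Tag the same build as both a versioned and a "latest" image
docker build -t myapp:1.0 -t myapp:latest .

# Both tags should show the same IMAGE ID
docker images myapp
```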
### Running a Container
Once you’ve built your image, you can run it as a container:
```bash
# Basic run
docker run myapp:1.0

# Run in detached mode (background)
docker run -d myapp:1.0

# Map a port from host to container
docker run -p 8080:3000 myapp:1.0

# Mount a volume
docker run -v /host/path:/container/path myapp:1.0

# Set environment variables
docker run -e NODE_ENV=production myapp:1.0

# Name your container
docker run --name my-container myapp:1.0

# Combining options
docker run -d --name my-api -p 8080:3000 -e NODE_ENV=production myapp:1.0
```
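After starting the combined example, you can confirm the container is healthy and that the port mapping works (assuming the app answers HTTP on port 3000 inside the container):

```bash
# Check that the container is up and see its port mapping
docker ps

# Hit the app through the mapped host port
curl http://localhost:8080

# Follow its logs while you test
docker logs -f my-api
```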
## Docker Compose for Multi-Container Applications

Docker Compose simplifies running multi-container applications: a single YAML file defines your services, networks, and volumes. Modern Docker installations ship Compose V2 as a plugin, invoked as `docker compose` (the older standalone `docker-compose` binary works the same way).
### Example `docker-compose.yml`

```yaml
# Note: the top-level "version" key is obsolete in Compose V2 and can be omitted
services:
  web:
    build: ./web
    ports:
      - "8080:80"
    depends_on:
      - api
    environment:
      - API_URL=http://api:3000

  api:
    build: ./api
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DB_HOST=db
      - DB_USER=postgres
      - DB_PASSWORD=secret
      - DB_NAME=mydb

  db:
    image: postgres:14
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=mydb

volumes:
  postgres_data:
```
### Running Docker Compose

```bash
# Start all services
docker compose up

# Start in detached mode
docker compose up -d

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

# View logs
docker compose logs

# Scale a specific service
docker compose up -d --scale api=3
```
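A few follow-up commands that are handy once the stack is up, sketched against the example file above:

```bash
# List this project's services and their state
docker compose ps

# Follow logs for a single service
docker compose logs -f api

# Rebuild images after a Dockerfile change, then restart
docker compose up -d --build
```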
## Docker Best Practices

### 1. Use Official Base Images
Starting with official images ensures you’re building on secure, well-maintained foundations.
```dockerfile
# Good
FROM node:18-alpine

# Avoid
FROM ubuntu:latest
RUN apt-get update && apt-get install -y nodejs npm
```
### 2. Minimize Layers
Each instruction in a Dockerfile creates a new layer. Combining related commands reduces image size.
```dockerfile
# Good
RUN apt-get update && \
    apt-get install -y package1 package2 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Avoid
RUN apt-get update
RUN apt-get install -y package1
RUN apt-get install -y package2
```
### 3. Use .dockerignore

Create a `.dockerignore` file to exclude files not needed in your image:

```
node_modules
npm-debug.log
.git
.gitignore
.env
*.md
```
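To sanity-check that ignored files really stayed out of the image, list the app directory in a throwaway container; a quick sketch assuming the `myapp:1.0` image from earlier:

```bash
# .git, .env, and friends should be absent from the listing
docker build -t myapp:1.0 .
docker run --rm myapp:1.0 ls -a /app
```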
### 4. Multi-Stage Builds

Multi-stage builds separate build-time dependencies from runtime dependencies, so compilers and dev packages never reach the final image:

```dockerfile
# Build stage
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
# Install only runtime dependencies (--omit=dev replaces the deprecated --production flag)
RUN npm install --omit=dev
EXPOSE 3000
CMD ["npm", "start"]
```
### 5. Run as Non-Root User

Running containers as a non-root user improves security. The `addgroup`/`adduser` flags below are the BusyBox variants used by Alpine-based images; Debian-based images use `groupadd`/`useradd` instead:

```dockerfile
# Create an unprivileged user and group (Alpine/BusyBox syntax)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Switch to that user
USER appuser

# Continue with the rest of your Dockerfile
```
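You can verify the switch took effect by asking a container who it is; a small sketch against an image built from a Dockerfile like the one above (the image name is an example):

```bash
# Should print "appuser" rather than "root"
docker run --rm myapp:1.0 whoami
```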
### 6. Use Specific Tags

Avoid the `latest` tag in production; it points at whatever version happens to be newest at pull time, which can lead to inconsistent builds:

```dockerfile
# Good
FROM node:18.16.0-alpine3.17

# Avoid
FROM node:latest
```
## Common Issues and Troubleshooting

### Container Not Starting

Check the logs:

```bash
docker logs <container-id>
```
### Port Conflicts

If you see an error like “port is already allocated”, change the host port:

```bash
docker run -p 8081:3000 myapp:1.0
```
### Container Exiting Immediately

A container stops when its main process exits, so make sure your application runs in the foreground rather than daemonizing. To poke around interactively, override the entrypoint with a shell:

```bash
docker run -it --entrypoint /bin/sh myapp:1.0
```
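It also helps to check how the container died. A hedged sketch using `docker inspect` with a Go-template filter:

```bash
# Show the exit code and any error message from the last run
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container-id>

# Rules of thumb: 137 means the process was killed (often out of memory);
# 126 means the command wasn't executable; 127 means it wasn't found
```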
### Out of Disk Space

Clear unused Docker resources (note that `-a` also removes *all* images not used by a container, not just dangling ones):

```bash
docker system prune -a
```
### Permission Denied

Check if you need to add your user to the docker group:

```bash
sudo usermod -aG docker $USER
# Log out and back in for changes to take effect
```
## Next Steps in Your Docker Journey
After mastering the basics, consider exploring:
- Docker Swarm or Kubernetes for container orchestration
- Docker Registry for private image storage
- CI/CD pipelines with Docker
- Container monitoring tools
- Docker security best practices
- Docker networking in depth
## Conclusion
Docker and containers have transformed how we develop, deploy, and run applications. By providing consistent environments across the development lifecycle, containers solve many traditional challenges in software development.
This guide has covered the fundamentals of Docker and containers, but there’s much more to explore. As containers continue to evolve in 2025 and beyond, investing time in mastering these technologies will pay dividends in your development career.
## Resources
### Books
- “Docker Deep Dive” by Nigel Poulton
- “Docker in Action” by Jeff Nickoloff
Happy containerizing!