Container Fundamentals

A container is an encapsulated process that includes all required runtime dependencies. Unlike a virtual machine, a container shares the host kernel but isolates its filesystem, network, and process tree using Linux kernel primitives.
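A quick way to see this sharing-plus-isolation in practice (a minimal sketch; the UBI image is illustrative):

$ uname -r                                               # kernel version on the host
$ podman run --rm registry.access.redhat.com/ubi8/ubi uname -r
# Prints the same kernel version: the kernel is shared, not virtualized
$ podman run --rm registry.access.redhat.com/ubi8/ubi ls /
# Lists the image's own root filesystem, not the host's: the mount namespace is isolated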

🧱

Namespaces

Kernel feature that isolates processes — each container gets its own PID, network, mount, UTS, IPC, and user namespaces, providing the illusion of a dedicated machine.

⚙️

Control Groups (cgroups)

Kernel mechanism for resource management. Limits and tracks CPU time, memory, disk I/O, and network bandwidth allocated to containers.

📦

Container Image

An immutable, layered archive defining an application and its libraries. Read-only layers are stacked via a union filesystem; a writable layer is added at runtime.

🏃

Container Instance

A running process created from a container image. Analogous to an object instantiated from a class. Many instances can run from a single image simultaneously.

📋

OCI Standard

The Open Container Initiative defines image-spec and runtime-spec so any compliant engine (Podman, Docker, CRI-O) can run the same images interchangeably.

🔄

Ephemeral by Default

Container engines remove the writable layer when a container is deleted. Any data written inside a container is lost unless explicitly persisted via a volume or bind mount.

Containers vs. Virtual Machines

Attribute | Virtual Machine | Container
Machine-level component | Hypervisor (KVM, VMware, Hyper-V) | Container engine (Podman, CRI-O)
Virtualization level | Fully virtualized environment + own kernel | Shared host kernel; isolated user space only
Typical size | Gigabytes | Megabytes
Startup time | Minutes | Milliseconds to seconds
Portability | Usually tied to the same hypervisor | Any OCI-compliant engine
Best for | Full OS isolation, non-Linux workloads | Microservices, scale-out applications

Podman — Container Engine

Podman (Pod Manager) is a daemonless, rootless-capable OCI container engine from Red Hat. Unlike Docker, it does not require a background daemon — each podman invocation runs as a regular process, reducing attack surface.

Checking the Installation

verify podman
$ podman -v     # Output: podman version 4.x.x
$ podman info   # Shows host OS, kernel, storage driver, registry configuration

Running Your First Container

run containers
# Run a one-shot command inside a RHEL container (image is pulled automatically if not local)
$ podman run registry.redhat.io/rhel7/rhel:7.9 echo 'Red Hat'

# Run interactively with a bash shell
$ podman run -it registry.redhat.io/rhel7/rhel:7.9 /bin/bash

# Run detached (background) with port mapping host:container
$ podman run -d -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24:latest

# Run with auto-remove on exit, environment variable, and custom name
$ podman run --rm --name myapp -e NAME='Red Hat' registry.redhat.io/rhel7/rhel:7.9 printenv NAME

# Bind only to localhost (prevents external access)
$ podman run -p 127.0.0.1:8075:80 my-app

Managing Container Lifecycle

Command | Description
podman ps | List running containers
podman ps --all | List all containers, including stopped ones
podman ps --all --format=json | Output the container list as JSON
podman stop <name|id> | Send SIGTERM, then SIGKILL after a timeout (default 10 s)
podman stop -t 30 <name> | Graceful stop with a custom timeout
podman kill <name> | Send SIGKILL immediately
podman start <name> | Restart a stopped container
podman restart <name> | Stop, then start a container
podman rm <name|id> | Remove a stopped container
podman rm -f <name> | Force-remove a running container
podman pause / unpause <name> | Freeze / resume all processes in a container (cgroup freezer)

Inspecting and Interacting with Containers

inspect & exec
# Run a command in a running container
$ podman exec -it myapp /bin/bash
$ podman exec myapp cat /etc/hostname

# Get full JSON metadata (IP, mounts, env vars, etc.)
$ podman inspect myapp

# Extract a specific field using a Go template
$ podman inspect myapp -f '{{.NetworkSettings.Networks.apps.IPAddress}}'

# Copy files between host and container
$ podman cp myapp:/etc/hosts /tmp/hosts
$ podman cp /tmp/config.yaml myapp:/app/config.yaml

# Show port mappings for a container
$ podman port myapp
$ podman port --all

# Tail container logs (follow mode)
$ podman logs -f myapp
$ podman logs --tail 50 myapp

Rootless Containers

Rootless Podman runs containers without root privileges. The user namespace maps the container's internal root (UID 0) to the current unprivileged host user. This limits the blast radius of container escapes.

rootless podman
# Run a container as your unprivileged user (no sudo needed)
$ podman run --userns=auto registry.access.redhat.com/ubi8/ubi-minimal bash

# View the user namespace mapping
$ podman unshare cat /proc/self/uid_map

# Storage location for rootless images
# Default: ~/.local/share/containers/storage

Container Images & Registries

A container image is a read-only, layered archive (OCI image-spec). Each layer is a diff of filesystem changes. Layers are cached and shared between images to save disk space.

Image Naming Convention

[registry/][namespace/]image-name[:tag][@digest]

Examples:
registry.redhat.io/rhel9/rhel:9.2           ← Red Hat Registry, versioned tag
registry.access.redhat.com/ubi8:latest      ← UBI image, floating tag
quay.io/myorg/myapp:v2.1.3                  ← Quay.io, semantic version
myapp@sha256:a1b2c3...                      ← Pinned by digest (immutable)

Pulling, Listing, Tagging, Removing Images

image management
# Pull an image from a registry
$ podman pull registry.redhat.io/rhel7/rhel:7.9

# List all local images
$ podman images

# Tag a local image (does not copy data, creates an alias)
$ podman tag myapp:latest quay.io/myorg/myapp:v2.1.3

# Push image to a remote registry
$ podman push quay.io/myorg/myapp:v2.1.3

# Inspect image metadata (layers, env vars, entrypoint, etc.)
$ podman inspect registry.redhat.io/ubi8/ubi-minimal:latest

# Remove a local image
$ podman rmi myapp:latest

# Remove all unused images
$ podman image prune

# Search for images in registries
$ podman search ubi8

Registry Authentication

registry auth
# Log in to the Red Hat Registry (prompts for credentials)
$ podman login registry.redhat.io

# Log in to Quay.io
$ podman login quay.io

# Log in to the internal OpenShift registry using an oc token
$ podman login -u $(oc whoami) -p $(oc whoami -t) \
    default-route-openshift-image-registry.apps.ocp4.example.com

# Log out of a specific registry
$ podman logout quay.io

# Log out of all registries
$ podman logout --all

Skopeo — Registry Manipulation Without Pulling

Skopeo works with container images at the registry level, without needing a running container engine. It inspects, copies, and deletes images across registries efficiently.

skopeo
# Inspect image metadata without downloading it
$ skopeo inspect docker://registry.redhat.io/ubi8/ubi-minimal:latest

# Inspect with credentials
$ skopeo inspect --creds user:password \
    docker://registry.redhat.io/rhscl/postgresql-96-rhel7

# Copy image between registries (no intermediate local storage needed)
$ skopeo copy --dest-tls-verify=false \
    docker://registry.redhat.io/ubi8/ubi-minimal:latest \
    docker://registry.example.com/myorg/ubi-minimal:latest

# Copy from local storage to a remote registry
$ skopeo copy --dest-tls-verify=false \
    containers-storage:myimage \
    docker://registry.example.com/myorg/myimage

# Copy between two private registries with different credentials
$ skopeo copy --src-creds=user1:pass1 --dest-creds=user2:pass2 \
    docker://src-registry.example.com/myimage \
    docker://dest-registry.example.com/myimage

# Delete an image from a registry
$ skopeo delete docker://registry.example.com/myorg/old-image:tag

Building Custom Images — Containerfile

A Containerfile (compatible with Dockerfile syntax) is a text recipe for building a container image. Instructions are executed in order, and those that change the filesystem (such as RUN, COPY, and ADD) each add a new immutable layer.

Essential Containerfile Instructions

Instruction | Purpose | Example
FROM | Base image to build upon. Every Containerfile starts here. | FROM ubi8/ubi-minimal:8.8
RUN | Execute a shell command during the build (creates a layer). | RUN dnf install -y python3 && dnf clean all
COPY | Copy files from the build context into the image. | COPY app.py /app/app.py
ADD | Like COPY, but also handles URLs and auto-extracts tar archives. | ADD app.tar.gz /app/
WORKDIR | Set the working directory for subsequent instructions. | WORKDIR /app
ENV | Set environment variables available at build and run time. | ENV PORT=8080 DEBUG=false
ARG | Build-time variable (not available at runtime unless also set via ENV). | ARG VERSION=1.0
EXPOSE | Document which port the container listens on (informational only). | EXPOSE 8080
USER | Switch to a non-root user for all subsequent instructions and the final container. | USER 1001
ENTRYPOINT | Main command that always runs (use the exec form, a JSON array). | ENTRYPOINT ["python3", "-m", "http.server"]
CMD | Default arguments for ENTRYPOINT, or the default command if no ENTRYPOINT. | CMD ["8080"]
LABEL | Attach key-value metadata to the image. | LABEL version="1.0" maintainer="team@example.com"
VOLUME | Declare a mount point for persistent storage. | VOLUME /data
HEALTHCHECK | Define a command that probes container health. | HEALTHCHECK CMD curl -f http://localhost:8080/health || exit 1
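Before the multi-stage example below, a minimal single-stage Containerfile that combines several of these instructions (a sketch; the app.py file and base image choice are illustrative):

FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
# Install the runtime the application needs, then clean caches to keep the layer small
RUN microdnf install -y python3 && microdnf clean all
WORKDIR /app
COPY app.py .
ENV PORT=8080
EXPOSE 8080
USER 1001
ENTRYPOINT ["python3", "app.py"]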

Multi-Stage Build Example

Multi-stage builds produce small production images by separating the build environment from the runtime environment.

CONTAINERFILE — multi-stage Go application
# ── Stage 1: Build ──────────────────────────────────────
FROM registry.access.redhat.com/ubi8/go-toolset:1.17 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o myapp .

# ── Stage 2: Runtime (only the binary, no build tools) ──
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
WORKDIR /app
# Copy only the compiled binary from the build stage
COPY --from=build /app/myapp .
EXPOSE 8080
USER 1001
ENTRYPOINT ["./myapp"]

Building Images with Podman

podman build
# Build from the Containerfile in the current directory
$ podman build -t myapp:latest .

# Build from a specific Containerfile in a different context
$ podman build -f Containerfile.prod -t myapp:prod /path/to/context

# Pass build arguments
$ podman build --build-arg VERSION=2.0 -t myapp:2.0 .

# Squash all layers into one (reduces image size)
$ podman build --squash -t myapp:squashed .

# Multi-platform build
$ podman build --platform linux/amd64,linux/arm64 -t myapp:multi .

# View image history (layers)
$ podman history myapp:latest
OpenShift Note

OpenShift requires containers to run as non-root. Always add USER 1001 (or any UID above 1000) as the last USER instruction. OpenShift will reject pods whose containers run as UID 0 unless a special SCC is granted.

Persisting Data — Volumes & Bind Mounts

Containers are ephemeral; data written inside is lost on removal. There are two main mechanisms to persist data outside the container lifecycle.

Podman Named Volumes

Named volumes are managed by Podman, stored under ~/.local/share/containers/storage/volumes/ (rootless) or /var/lib/containers/storage/volumes/ (root). They survive container removal.

named volumes
# Create a named volume
$ podman volume create mydata

# List volumes
$ podman volume ls

# Inspect a volume (shows its mountpoint on the host)
$ podman volume inspect mydata

# Mount the volume into a container at /var/lib/myapp
$ podman run -d -v mydata:/var/lib/myapp:Z myapp:latest

# Remove a volume (fails if in use)
$ podman volume rm mydata

# Remove all unused volumes
$ podman volume prune

Bind Mounts

Bind mounts map a host directory into the container. Useful during development to share source code.

bind mounts
# Mount host directory /host/data into /app/data inside the container
$ podman run -v /host/data:/app/data:Z myapp:latest

# Read-only bind mount
$ podman run -v /host/config:/app/config:ro,Z myapp:latest

# :Z tells Podman to relabel the files for SELinux (required on RHEL/CentOS)
# :z shares the label between containers (less restrictive)

Running a Database Container with Persistent Storage

postgresql with volume
$ podman volume create pgdata
$ podman run -d --name postgres \
    -e POSTGRESQL_ADMIN_PASSWORD=redhat \
    -e POSTGRESQL_DATABASE=mydb \
    -e POSTGRESQL_USER=myuser \
    -e POSTGRESQL_PASSWORD=mypass \
    -v pgdata:/var/lib/pgsql/data:Z \
    -p 5432:5432 \
    registry.redhat.io/rhel8/postgresql-13:latest

# Connect to the running database
$ podman exec -it postgres psql -U myuser -d mydb

Container Networking

Podman uses a software-defined network layer (CNI or Netavark) to connect containers. By default, each container gets a private IP address on the podman bridge network. DNS-based name resolution works between containers on the same user-defined network.

Podman Network Commands

podman network
# Create a custom bridge network
$ podman network create example-net

# Create a network with a specific subnet
$ podman network create --subnet 192.168.100.0/24 my-subnet

# List all networks
$ podman network ls

# Inspect a network (shows subnet, gateway, connected containers)
$ podman network inspect example-net

# Remove a network (must have no connected containers)
$ podman network rm example-net

# Remove all unused networks
$ podman network prune

# Connect a new container to a network
$ podman run -d --name my-container --net example-net container-image:latest

# Connect a container to multiple networks at launch
$ podman run -d --name gateway --net frontend-net,backend-net my-gateway

# Connect an already running container to a network
$ podman network connect example-net my-container
DNS Resolution

Containers on the same user-defined network can reach each other by container name. For example, a web app on app-net can reach a database named db at db:5432. The default podman network does not provide DNS.
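A short sketch of name resolution on a user-defined network (container and network names are illustrative, and it assumes the UBI image provides getent):

$ podman network create app-net
$ podman run -d --name db --net app-net registry.access.redhat.com/ubi8/ubi sleep infinity
$ podman run --rm --net app-net registry.access.redhat.com/ubi8/ubi getent hosts db
# Resolves the name "db" to its private IP address on app-net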

Troubleshooting Containers

Log Access and Debugging

debugging commands
# Stream container logs in real time
$ podman logs -f myapp

# Show the last 100 lines of logs
$ podman logs --tail 100 myapp

# Show logs with timestamps
$ podman logs -t myapp

# Run a debug shell in a running container
$ podman exec -it myapp /bin/sh

# Run a debug shell in a new temporary container from the same image
$ podman run --rm -it myapp:latest /bin/sh

# View running processes inside a container
$ podman top myapp

# Live resource usage stats
$ podman stats myapp

# Check the container exit code after a failure
$ podman inspect myapp -f '{{.State.ExitCode}}'

Multi-Container Applications with Podman Compose

Podman Compose reads a docker-compose.yml / compose.yaml file and translates each service into Podman containers, networks, and volumes. It is ideal for local development environments.

Sample Compose File

compose.yaml — web + api + database
version: "3.8"
services:
  frontend:
    image: registry.example.com/myapp/frontend:latest
    ports:
      - "8080:8080"
    networks:
      - app-net
    depends_on:
      - backend
  backend:
    image: registry.example.com/myapp/backend:latest
    environment:
      DB_HOST: db
      DB_PORT: "5432"
    networks:
      - app-net
      - db-net
    depends_on:
      - db
  db:
    image: registry.redhat.io/rhel8/postgresql-13:latest
    environment:
      POSTGRESQL_ADMIN_PASSWORD: redhat
      POSTGRESQL_DATABASE: mydb
    volumes:
      - pgdata:/var/lib/pgsql/data
    networks:
      - db-net
volumes:
  pgdata:
networks:
  app-net:
  db-net:

Podman Compose Commands

podman-compose
# Start all services (detached)
$ podman-compose up -d

# Start and rebuild images if changed
$ podman-compose up -d --build

# View running compose services
$ podman-compose ps

# Stream logs from all services
$ podman-compose logs -f

# Stop and remove all containers/networks created by compose
$ podman-compose down

# Scale a service to N replicas
$ podman-compose up --scale backend=3 -d

# Filter containers by compose project label
$ podman ps -a --filter label=io.podman.compose.project=myproject

Kubernetes Architecture

Kubernetes is an open-source container orchestration system. It groups containers into Pods, ensures desired state, scales workloads, and manages networking and storage across a cluster of nodes.

🖥️

Control Plane

Runs the API server (kube-apiserver), scheduler, controller manager, and etcd. Manages cluster state and makes scheduling decisions.

⚙️

Worker Nodes

Run the kubelet (node agent), kube-proxy, and a container runtime (CRI-O). Execute the actual workloads.

📦

Pod

The smallest deployable unit — a group of one or more tightly coupled containers sharing a network namespace (same IP) and storage volumes.

🔁

ReplicaSet

Ensures a specified number of Pod replicas are always running. Replaces failed pods automatically.

🚀

Deployment

Manages ReplicaSets declaratively. Enables rolling updates, rollbacks, and scaling. The standard way to deploy stateless apps.

🔌

Service

A stable virtual IP and DNS name that load-balances traffic to a set of matching Pods. Types: ClusterIP, NodePort, LoadBalancer.

🗃️

etcd

Distributed key-value store that holds all cluster state (resource definitions, secrets, configuration). The single source of truth.

📋

Namespace

Virtual cluster within Kubernetes for multi-tenancy. Resources in different namespaces are isolated, and resource quotas can be applied per namespace.

kubectl — Kubernetes CLI

kubectl essentials
# Imperative deployment (quick, not reproducible)
$ kubectl create deployment db-pod --port 3306 \
    --image registry.example.com/rhel8/mysql-80

# Set environment variables on a deployment
$ kubectl set env deployment/db-pod \
    MYSQL_USER='user1' \
    MYSQL_PASSWORD='mypass' \
    MYSQL_DATABASE='mydb'

# Apply a manifest declaratively (creates or updates)
$ kubectl apply -f deployment.yaml

# Apply all manifests in a directory recursively
$ kubectl apply -f manifests/ -R

# Preview what apply would change (dry-run diff)
$ kubectl diff -f deployment.yaml

# Delete resources from a manifest
$ kubectl delete -f deployment.yaml

# Generate a YAML manifest from an imperative command
$ kubectl create deployment hello -o yaml --dry-run=client \
    --image registry.example.com/redhattraining/hello:latest > hello.yaml

# Explain any field of a resource kind
$ kubectl explain deployment.spec.template.spec.containers
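The commands above apply a deployment.yaml manifest. A minimal sketch of such a manifest, with a matching Service, might look like this (the hello name, labels, and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello            # must match the selector above
    spec:
      containers:
      - name: hello
        image: registry.example.com/redhattraining/hello:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello                # routes traffic to pods with this label
  ports:
  - port: 8080
    targetPort: 8080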

Red Hat OpenShift Container Platform

Red Hat OpenShift (RHOCP) is an enterprise Kubernetes distribution that adds developer tools, security hardening, integrated CI/CD (Tekton/Pipelines), a web console, and enterprise support on top of upstream Kubernetes.

Foundation:     Linux Kernel
Runtime:        CRI-O
Orchestration:  Kubernetes
Platform:       OpenShift
Your App:       Pods & Services

Logging In and Basic Navigation

oc login & navigation
# Log in to an OpenShift cluster
$ oc login -u developer -p developer https://api.ocp4.example.com:6443

# Print the current user
$ oc whoami

# Get the web console URL
$ oc whoami --show-console

# Get the API server token (used for registry auth)
$ oc whoami -t

# Switch to a project (namespace)
$ oc project myproject

# List all accessible projects
$ oc projects

# Create a new project
$ oc new-project myapp --description "My Application"

Key OpenShift-Specific Concepts

🗂️

Project

OpenShift's enhanced Namespace. Adds access control, network policies, and resource quota isolation per team or application.

🛣️

Route

OpenShift extension to expose services to external traffic via a hostname. Backed by HAProxy; supports TLS termination (edge, passthrough, re-encrypt).

🔒

Security Context Constraint (SCC)

Cluster-level policy controlling what a pod can do (run as root, mount host paths, use host network, etc.). Default SCC prevents privileged operations.

🖼️

ImageStream

A pointer to container images that triggers automatic redeployments when the referenced image changes. Decouples image location from deployment config.

🏗️

BuildConfig

Defines how to build a container image from source — supports Docker, Source-to-Image (S2I), and custom build strategies.

⚙️

Operator

A Kubernetes controller that encodes operational knowledge for managing a complex application (e.g., database clusters). Extends the Kubernetes API with CRDs.

Managing Workloads in OpenShift

Deploying Applications with oc new-app

oc new-app
# Deploy from a container image
$ oc new-app --image registry.access.redhat.com/ubi8/httpd-24

# Deploy from a Git repository (auto-detects the language via S2I)
$ oc new-app https://github.com/myorg/myapp

# Specify an S2I builder + source repository
$ oc new-app php~http://gitserver.example.com/mygitrepo

# Deploy from a local source directory
$ oc new-app . --name myapp

# Pass environment variables
$ oc new-app --image rhel8/mysql-80 \
    -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=mydb

Viewing and Managing Resources

oc get / describe / delete
# List all resources in the current project
$ oc get all

# List pods with status
$ oc get pods

# List pods with node and IP info
$ oc get pods -o wide

# Watch pods until ready
$ watch oc get pods

# Describe a pod (events, conditions, container states)
$ oc describe pod myapp-6d8c4-xyz

# Get pod logs
$ oc logs myapp-6d8c4-xyz
$ oc logs -f myapp-6d8c4-xyz          # follow
$ oc logs -c sidecar myapp-6d8c4-xyz  # specific container

# Get deployments
$ oc get deployments

# Scale a deployment to 3 replicas
$ oc scale deployment myapp --replicas 3

# Rollout status
$ oc rollout status deployment/myapp

# Roll back a deployment
$ oc rollout undo deployment/myapp

# Delete a resource
$ oc delete pod myapp-6d8c4-xyz
$ oc delete deployment myapp

Exposing Services via Routes

services & routes
# Expose a deployment as a ClusterIP service on port 8080
$ oc expose deployment myapp --port 8080

# Create a Route (external hostname) from a service
$ oc expose service myapp

# Get routes to see assigned hostnames
$ oc get routes

# Create a TLS edge-terminated route with a custom hostname
$ oc create route edge --service myapp \
    --hostname myapp.apps.ocp4.example.com \
    --cert tls.crt --key tls.key

StatefulSets — Stateful Workloads

For stateful applications (databases, message queues), use a StatefulSet. Pods get stable network identifiers (mydb-0, mydb-1) and each gets its own PersistentVolumeClaim.

statefulsets
# List StatefulSets
$ oc get statefulsets

# Scale a StatefulSet (pods are added/removed in order)
$ oc scale statefulset mydb --replicas 3

# Delete a StatefulSet without deleting its pods
$ oc delete statefulset mydb --cascade=orphan
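A minimal StatefulSet sketch showing the stable naming and per-pod storage described above (the mydb name, image, and storage size are illustrative; database environment variables are omitted for brevity):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb            # headless Service providing stable DNS names (mydb-0.mydb, mydb-1.mydb, ...)
  replicas: 2
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: postgresql
        image: registry.redhat.io/rhel8/postgresql-13:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/pgsql/data
  volumeClaimTemplates:        # each replica gets its own PVC: data-mydb-0, data-mydb-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi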

Builds — Source-to-Image (S2I)

Source-to-Image (S2I) is an OpenShift build strategy that takes application source code and a builder image, injects the source, runs the build scripts, and produces a ready-to-run container image — without writing a Containerfile.

Source Code (Git) + S2I Builder Image  →  S2I process (assemble script)  →  App Container Image  →  Deployed as Pods on the cluster

S2I Build Commands

s2i & oc start-build
# Build locally with the s2i CLI (for testing the S2I process)
$ s2i build https://github.com/myorg/myapp \
    registry.access.redhat.com/ubi8/python-38 \
    myorg/myapp:latest

# Trigger a new build in OpenShift
$ oc start-build myapp-build

# Trigger a build from local source (binary input)
$ oc start-build myapp-build --from-dir .

# Watch build logs in real time
$ oc logs -f bc/myapp-build

# List all builds
$ oc get builds

# Cancel a running build
$ oc cancel-build myapp-build-3

BuildConfig YAML Example

buildconfig.yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://github.com/myorg/myapp
      ref: main
  strategy:
    type: Source               # S2I strategy
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.9-ubi8
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
  - type: GitHub                # Rebuild on push
    github:
      secret: my-github-secret
  - type: ImageChange           # Rebuild when the builder image updates

ImageStreams

imagestream commands
# List imagestreams in the current project
$ oc get imagestreams

# List imagestream tags
$ oc get imagestreamtags

# Import an external image into an imagestream
$ oc import-image myapp:latest \
    --from quay.io/myorg/myapp:latest \
    --confirm

# Tag an existing imagestream tag
$ oc tag myapp:latest myapp:stable

Injecting Configuration — ConfigMaps & Secrets

Applications should not bake configuration into images. ConfigMaps hold non-sensitive configuration; Secrets hold sensitive data (passwords, tokens, keys) as base64-encoded values.

ConfigMaps

configmap
# Create from literal values
$ oc create configmap app-config \
    --from-literal APP_PORT=8080 \
    --from-literal LOG_LEVEL=info

# Create from a file
$ oc create configmap nginx-conf --from-file nginx.conf

# Create from all files in a directory
$ oc create configmap app-props --from-file ./config/

# View the configmap
$ oc get configmap app-config -o yaml

# Use as environment variables in a pod spec:
#   spec.containers[].envFrom:
#   - configMapRef:
#       name: app-config

# Mount as a volume (files appear inside the pod):
#   spec.volumes[]:                    configMap: {name: nginx-conf}
#   spec.containers[].volumeMounts[]:  mountPath: /etc/nginx/conf.d
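Expanding the commented fragments above, a hedged pod-template excerpt that consumes both ConfigMaps (the container name and image are illustrative):

spec:
  template:
    spec:
      containers:
      - name: web
        image: registry.access.redhat.com/ubi8/nginx-120:latest   # illustrative image
        envFrom:
        - configMapRef:
            name: app-config             # injects APP_PORT and LOG_LEVEL as env vars
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d   # nginx.conf appears as a file here
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf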

Secrets

secrets
# Create a generic secret from literals
$ oc create secret generic db-credentials \
    --from-literal username=myuser \
    --from-literal password='S3cr3t!'

# Create a TLS secret from certificate files
$ oc create secret tls my-tls --cert tls.crt --key tls.key

# Create a docker-registry secret (for pulling private images)
$ oc create secret docker-registry quay-pull-secret \
    --docker-server quay.io \
    --docker-username myuser \
    --docker-password mytoken

# Link the pull secret to a service account
$ oc secrets link default quay-pull-secret --for pull

# View a secret (base64-encoded)
$ oc get secret db-credentials -o yaml

# Decode a secret value
$ oc get secret db-credentials -o jsonpath='{.data.password}' | base64 -d

Persistent Storage in OpenShift

The Kubernetes storage model decouples how storage is provisioned (PersistentVolume) from how it is requested (PersistentVolumeClaim).

Storage Object Hierarchy

StorageClass (admin creates once)
  ↓ dynamically provisions
PersistentVolume (PV) — actual storage backing (NFS, iSCSI, AWS EBS, Ceph, etc.)
  ↓ bound to
PersistentVolumeClaim (PVC) — developer's storage request (size, access mode)
  ↓ mounted by
Pod — uses the PVC as a volume

Access Modes

Mode | Short Form | Meaning
ReadWriteOnce | RWO | One node can read and write. Suitable for block storage (AWS EBS, iSCSI).
ReadOnlyMany | ROX | Many nodes can read. Suitable for shared read-only configuration.
ReadWriteMany | RWX | Many nodes can read and write. Requires distributed storage (NFS, CephFS, GlusterFS).
ReadWriteOncePod | RWOP | Only a single Pod can access at a time (Kubernetes 1.22+). Strongest isolation.

Working with PVCs

persistent volume claims
# List storage classes available in the cluster
$ oc get storageclasses

# List PersistentVolumes (admin view)
$ oc get pv

# List PersistentVolumeClaims in the current project
$ oc get pvc

# Describe a PVC (status, bound PV, capacity)
$ oc describe pvc mydata-pvc
pvc.yaml — request 5Gi of ReadWriteOnce storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydata-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard   # omit to use the default StorageClass
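Completing the hierarchy above, a pod-spec excerpt that mounts this PVC as a volume (the container name and mount path are illustrative):

spec:
  containers:
  - name: myapp
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /var/lib/myapp    # PVC contents appear here inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mydata-pvc        # references the PVC defined above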

Application Reliability — Health Probes & Autoscaling

Health Probe Types

🚦

Liveness Probe

Checks if the application is still running. If it fails, Kubernetes restarts the container. Detects deadlocks and infinite loops.

Readiness Probe

Checks if the application is ready to serve traffic. A failing readiness probe removes the pod from Service endpoints, preventing traffic to a still-starting app.

🔥

Startup Probe

Gates the liveness and readiness probes until the application has started. Critical for slow-starting applications to prevent premature restarts.

deployment.yaml — health probes + resource limits
spec:
  containers:
  - name: myapp
    image: myapp:latest
    resources:
      requests:
        cpu: "100m"         # minimum guaranteed CPU (1000m = 1 core)
        memory: "256Mi"     # minimum guaranteed memory
      limits:
        cpu: "500m"         # maximum allowed CPU
        memory: "512Mi"     # maximum allowed memory
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

Horizontal Pod Autoscaler (HPA)

autoscaling
# Create an HPA targeting 70% CPU utilization, scaling from 2 to 10 pods
$ oc autoscale deployment myapp \
    --min 2 --max 10 --cpu-percent 70

# View HPA status (shows current / desired replicas)
$ oc get hpa

# Describe HPA events and scaling history
$ oc describe hpa myapp
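A roughly equivalent declarative manifest using the autoscaling/v2 API (a sketch, not necessarily what oc autoscale generates):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average CPU use exceeds 70% of requests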

Resource Quotas & LimitRanges

quota & limits
# View resource quotas for the project
$ oc get resourcequota

# Describe quota usage (current vs. limit)
$ oc describe resourcequota compute-resources

# View LimitRange defaults applied to new pods
$ oc get limitrange
$ oc describe limitrange core-resource-limits

Authentication, Authorization & RBAC

OpenShift supports multiple identity providers. The most common lab/on-premises choice is HTPasswd. Role-Based Access Control (RBAC) governs what authenticated users can do.

HTPasswd Identity Provider

htpasswd identity provider setup
# Create an htpasswd file with a new user
$ htpasswd -c -B htpasswd-users user1

# Add another user to the existing file
$ htpasswd -B htpasswd-users user2

# Create a secret from the htpasswd file
$ oc create secret generic htpasswd-secret \
    --from-file htpasswd=htpasswd-users \
    -n openshift-config

# Update the OAuth cluster resource to use HTPasswd
$ oc edit oauth cluster
# Add under spec.identityProviders:
#   - name: htpasswd-provider
#     mappingMethod: claim
#     type: HTPasswd
#     htpasswd:
#       fileData:
#         name: htpasswd-secret

# Verify the OAuth pods are rolling out
$ oc get pods -n openshift-authentication

RBAC — Roles and RoleBindings

Resource | Scope | Purpose
Role | Namespace | Defines a set of allowed API verbs (get, list, watch, create, update, delete) on specific resources within a namespace.
RoleBinding | Namespace | Binds a Role (or ClusterRole) to users, groups, or service accounts within a namespace.
ClusterRole | Cluster-wide | Like a Role, but applies across all namespaces; also used for non-namespaced resources (nodes, PVs).
ClusterRoleBinding | Cluster-wide | Binds a ClusterRole to subjects for cluster-wide access.
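The namespaced pair from the table, expressed as a hedged declarative sketch (the role name, namespace, verbs, and user are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: myproject
rules:
- apiGroups: [""]                 # "" is the core API group (pods, services, ...)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: myproject
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io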
rbac commands
# List roles in the current project
$ oc get roles

# List cluster roles
$ oc get clusterroles

# Grant a user the edit role in the current project
$ oc adm policy add-role-to-user edit user1

# Grant a user the view role in a specific namespace
$ oc adm policy add-role-to-user view user2 -n production

# Grant cluster-admin rights (use with caution!)
$ oc adm policy add-cluster-role-to-user cluster-admin admin-user

# Remove a role from a user
$ oc adm policy remove-role-from-user edit user1

# Check what a user can do
$ oc auth can-i create pods --as user1
$ oc auth can-i '*' '*' --as system:serviceaccount:myproject:default

# Grant a service account the anyuid SCC (allows running as any user)
$ oc adm policy add-scc-to-user anyuid -z myserviceaccount

Groups

groups
# Create a group
$ oc adm groups new developers

# Add users to a group
$ oc adm groups add-users developers user1 user2

# Grant the group the edit role in a namespace
$ oc adm policy add-role-to-group edit developers -n myproject

# List groups
$ oc get groups

Declarative Resource Management & Kustomize

The declarative workflow describes desired state in YAML manifests and uses kubectl apply to reconcile the cluster to that state. This is reproducible, auditable (Git-based), and supports GitOps workflows.

Imperative vs. Declarative

Aspect | Imperative | Declarative
How | kubectl create / delete commands | YAML files + kubectl apply
Reproducibility | Difficult — depends on command history | High — files define the exact desired state
GitOps compatible | No | Yes
Best for | Quick one-off tasks, debugging | Production deployments, CI/CD pipelines

Kustomize — Configuration Overlays

Kustomize generates Kubernetes manifests from a base and environment-specific overlays without duplicating YAML. It is natively integrated into kubectl and oc.

my-app/
├── base/                          ← shared configuration for all environments
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── overlays/
    ├── staging/
    │   ├── kustomization.yaml     ← bases: [../../base]; namespace: myapp-stage
    │   └── patch-replicas.yaml    ← overrides replicas: 1
    └── production/
        ├── kustomization.yaml     ← bases: [../../base]; commonLabels: {env: prod}
        └── patch-replicas.yaml    ← overrides replicas: 5
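A hedged sketch of what the base and production kustomization files in that tree might contain (resource names and the patch are illustrative; newer Kustomize releases prefer resources: over bases:):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
commonLabels:
  env: prod
patches:
- path: patch-replicas.yaml

# overlays/production/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5        # overrides the replica count from the base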
kustomize commands
# Preview rendered manifests without applying
$ kubectl kustomize overlays/production

# Apply a kustomization overlay to the cluster
$ kubectl apply -k overlays/production
$ oc apply -k overlays/staging

# Delete resources created by a kustomization
$ oc delete -k overlays/production

# Diff current cluster state vs. the kustomization
$ kubectl diff -k overlays/production

OpenShift Templates

Templates are OpenShift-native packaged resource sets with parameters. They are stored in the openshift namespace and deployable via the web console or CLI.

templates
# List available templates in the global template library
$ oc get templates -n openshift

# Describe a template (parameters, resources it creates)
$ oc describe template cache-service -n openshift

# Process a template with custom parameters
$ oc process -f mytemplate.yaml \
    -p APP_NAME=myapp \
    -p REPLICAS=3 | oc apply -f -

# Process a template from the openshift namespace
$ oc process -n openshift cakephp-mysql-persistent \
    -p NAME=myweb | oc apply -f -

# Export existing resources as YAML to adapt into a template
# (the older `oc export` command was removed in OpenShift 4)
$ oc get all -o yaml > exported-resources.yaml

Operators & Helm Charts

Kubernetes Operators

An Operator is a Kubernetes controller that manages a specific application using domain knowledge encoded in software. It watches Custom Resources (CRs) and reconciles the cluster to match the desired state defined in those CRs.

🔩

Custom Resource Definition (CRD)

Extends the Kubernetes API with new resource types. An Operator registers CRDs and watches for instances to reconcile.

📡

Operator Lifecycle Manager (OLM)

OpenShift's system for installing, updating, and managing Operators cluster-wide. Provides the OperatorHub web UI.

📦

ClusterServiceVersion (CSV)

Operator metadata file describing capabilities, required CRDs, install strategy, and lifecycle details. OLM uses the CSV to install an Operator.

📋

Subscription

Tells OLM which Operator to install, from which catalog, and update channel. OLM keeps the Operator updated as new versions are published.

operator management
# List all installed operators (CSVs) in a namespace
$ oc get clusterserviceversions

# List operator subscriptions
$ oc get subscriptions -n openshift-operators

# Describe a subscription (current/desired CSV, catalog)
$ oc describe subscription my-operator -n openshift-operators

# List Custom Resource Definitions
$ oc get crds

# List resources created by an operator (e.g., a database cluster CR)
$ oc get myoperatorresource
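A hedged sketch of a Subscription manifest of the kind the commands above list (the operator name, channel, and catalog source are illustrative):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: openshift-operators
spec:
  channel: stable                       # update channel to follow
  name: my-operator                     # package name in the catalog
  source: redhat-operators              # CatalogSource that provides the package
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic        # or Manual, to require approval of each install plan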

Helm — Package Management for Kubernetes

Helm packages Kubernetes manifests into charts (versioned, shareable archives). It uses Go templates to parameterize manifests, and tracks installed releases.

helm
# Add a chart repository
$ helm repo add bitnami https://charts.bitnami.com/bitnami

# Update the local repo cache
$ helm repo update

# Search for a chart
$ helm search repo nginx

# Install a chart with a custom release name
$ helm install my-nginx bitnami/nginx

# Install with custom values
$ helm install my-nginx bitnami/nginx \
    --set replicaCount=2 \
    --set service.type=ClusterIP

# Install with a values file
$ helm install my-nginx bitnami/nginx -f values.yaml

# List all Helm releases in the current namespace
$ helm list

# Upgrade a release (apply changed values)
$ helm upgrade my-nginx bitnami/nginx --set replicaCount=4

# Roll back to a previous release revision
$ helm rollback my-nginx 1

# Uninstall a release (removes all its resources)
$ helm uninstall my-nginx

# Show rendered templates without installing (debug)
$ helm template my-nginx bitnami/nginx -f values.yaml

Network Policies & TLS

NetworkPolicy resources enforce firewall-like rules between pods and namespaces. By default in OpenShift, all pods in a project can communicate with each other. NetworkPolicies restrict this.

Default Deny + Allow Pattern

network-policy — deny-all ingress, then allow specific traffic
# 1. Deny all ingress to this namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}          # applies to ALL pods in the namespace
  policyTypes:
  - Ingress
---
# 2. Allow ingress only from pods with label app=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
network policy & TLS
# List network policies in the current project
$ oc get networkpolicies

# Describe a network policy
$ oc describe networkpolicy deny-all-ingress

# Apply a network policy
$ oc apply -f deny-all.yaml

# Generate a self-signed TLS certificate (for testing)
$ openssl req -newkey rsa:4096 -nodes -keyout tls.key \
    -x509 -days 365 -out tls.crt \
    -subj "/CN=myapp.apps.ocp4.example.com"

# Create a TLS edge-terminated route
$ oc create route edge myapp-tls \
    --service myapp \
    --cert tls.crt \
    --key tls.key \
    --hostname myapp.apps.ocp4.example.com

# Create a passthrough route (TLS terminated at the pod)
$ oc create route passthrough myapp-passthrough \
    --service myapp \
    --hostname myapp.apps.ocp4.example.com

Egress Controls

egress network policy
# Allow egress only to a specific external CIDR
# (EgressNetworkPolicy is an OpenShift extension)
$ oc apply -f egress-policy.yaml

# Test connectivity from a pod
$ oc exec -it myapp-pod -- curl -v http://external-service:8080

Cluster & Operator Updates

OpenShift uses the Cluster Version Operator (CVO) to manage cluster updates. Updates follow a graph of supported upgrade paths and can be performed with minimal downtime using rolling node replacements.

Update Channels

Channel | Purpose
stable-4.x | Fully tested, production-recommended updates for minor version 4.x
fast-4.x | Updates released faster than stable; less bake time but still tested
candidate-4.x | Release candidates; not for production
eus-4.x | Extended Update Support — for customers staying on a fixed minor version
cluster update commands
# Check the current cluster version and available updates
$ oc get clusterversion
$ oc describe clusterversion version

# View available update targets
$ oc adm upgrade

# Start a cluster upgrade to a specific version
$ oc adm upgrade --to 4.14.15

# Trigger an upgrade to the latest recommended version
$ oc adm upgrade --to-latest

# Watch cluster operators during the upgrade
$ watch oc get clusteroperators

# View cluster operator health
$ oc get clusteroperators

# Check node status during the rolling update
$ oc get nodes

# View machine config pools (controls node update batching)
$ oc get machineconfigpool

# Pause updates to the worker pool (e.g., during a critical period)
$ oc patch mcp/worker --type merge \
    -p '{"spec":{"paused":true}}'

# Resume worker pool updates
$ oc patch mcp/worker --type merge \
    -p '{"spec":{"paused":false}}'

Updating Operators via OLM

operator updates
# List operator subscriptions and their update channels
$ oc get subscriptions -A

# Check install plans (pending/approved operator updates)
$ oc get installplans -n openshift-operators

# Approve a manual install plan (for a manually-approved update policy)
$ oc patch installplan install-xxxxx --type merge \
    -p '{"spec":{"approved":true}}' \
    -n openshift-operators

# Watch the operator CSV rollout
$ watch oc get csv -n openshift-operators
Important

OpenShift supports updates only between adjacent minor versions (e.g., 4.12 → 4.13 → 4.14); minor versions cannot be skipped, so always plan the path with Red Hat's official upgrade graph tool. Always update the cluster before updating Operators that depend on it.

Quick Reference — Most-Used Commands

Podman Cheat Sheet

Goal | Command
Run a container | podman run -d -p 8080:8080 --name myapp image:tag
List running | podman ps
List all (including stopped) | podman ps -a
Shell into a running container | podman exec -it myapp /bin/sh
View logs | podman logs -f myapp
Stop / remove container | podman stop myapp && podman rm myapp
Build image | podman build -t myapp:latest .
Push image | podman push quay.io/myorg/myapp:latest
List local images | podman images
Remove image | podman rmi myapp:latest
Multi-container app up | podman-compose up -d
Multi-container app down | podman-compose down

OpenShift / oc Cheat Sheet

Goal | Command
Log in | oc login -u user -p pass https://api.cluster:6443
Switch project | oc project myproject
Create project | oc new-project myapp
Deploy from image | oc new-app --image registry.io/img:tag
List everything | oc get all
Watch pods | watch oc get pods
Shell into pod | oc rsh pod-name
Pod logs | oc logs -f pod-name
Scale deployment | oc scale deployment myapp --replicas 3
Expose service as route | oc expose service myapp
Apply manifest | oc apply -f manifest.yaml
Apply kustomize overlay | oc apply -k overlays/production
Grant edit role | oc adm policy add-role-to-user edit user1
Check cluster version | oc get clusterversion
Start cluster upgrade | oc adm upgrade --to 4.14.15