Setting up a cluster with UPI (User-Provisioned Infrastructure) gives you maximum control. You manually prepare the VMs, load balancers, and DNS.
Pre-requisites for Air-Gapped Cluster
The primary challenge in a disconnected environment is the absence of the Red Hat Container Registry. You must bridge this gap before initiating the installation.
Mirror Registry: Establish a local container registry (e.g., Red Hat Quay or JFrog Artifactory) within your secure perimeter.
Content Mirroring: Use the oc mirror plugin to sync OpenShift release images, operator catalogs, and Helm charts from the internet to a portable medium, then into your local registry.
Internal DNS & NTP: Precise time synchronization and split-horizon DNS are non-negotiable. Every node must resolve the local registry and the internal API endpoints.
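As an illustration, a minimal workflow for the Content Mirroring step above might look like the following sketch (the registry hostname, channel version, and paths are placeholders, not values from this guide):

```shell
# 1. Describe what to mirror (release images plus an operator catalog)
cat > imageset-config.yaml <<'EOF'
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./metadata
mirror:
  platform:
    channels:
      - name: stable-4.14                  # example release channel
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
EOF

# 2. On the connected side: mirror everything to a portable archive
oc mirror --config=imageset-config.yaml file://mirror-output

# 3. Inside the air gap: push the archive into the local registry
oc mirror --from=./mirror-output docker://registry.example.internal:5000
```

The archive produced in step 2 is what travels across the air gap on your portable medium.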
- Control Plane Nodes (Masters): Must use Red Hat Enterprise Linux CoreOS (RHCOS). You cannot use standard RHEL, Ubuntu, or any other OS for your master nodes. Why? Master nodes are managed by the Machine Config Operator (MCO). The MCO expects an immutable, container-optimized OS (RHCOS) so it can push updates, roll back kernel changes, and manage configurations automatically.
- Compute Nodes (Workers): It is recommended to use RHCOS. When you update OpenShift, the OS on the workers then updates automatically, using rpm-ostree for transactional, safe updates.
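For illustration, you can inspect the transactional update state directly on an RHCOS node:

```shell
# List the booted and any staged ostree deployments on an RHCOS node;
# a staged deployment means an update will take effect on the next reboot
rpm-ostree status
```

Because each update is a complete, atomic deployment, a failed upgrade can simply be rolled back to the previous deployment.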
Core Deployment Workflow
Phase I: Configuration & Manifest Generation
The process begins on a secure "Bastion" host. You define the cluster's DNA in the install-config.yaml, explicitly pointing to your local mirror registry and providing the internal CA certificates.
Generate Manifests: Create the Kubernetes manifest files to allow for manual adjustment of network or proxy settings.
Generate Ignition Configs: Convert the manifests into .ign files. These are the "instruction sets" that Red Hat Enterprise Linux CoreOS (RHCOS) executes during its first boot.
Phase II: Infrastructure Provisioning
With the Ignition files ready, you must prepare the environment "shell":
Load Balancing: Configure a high-availability load balancer (e.g., F5, HAProxy) to handle traffic for the API (Port 6443), Machine Config Server (Port 22623), and Ingress (Ports 80/443).
Static Asset Hosting: Place the generated .ign files on an internal HTTP server accessible by the nodes during PXE or ISO boot.
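As a sketch, the API portion of the load balancer described above could look like this in HAProxy (node IPs are placeholders; the Machine Config Server on 22623 and Ingress on 80/443 follow the same pattern):

```
frontend api
    bind *:6443
    mode tcp
    default_backend api-be

backend api-be
    mode tcp
    balance roundrobin
    server bootstrap 192.168.22.9:6443 check   # remove after bootstrap completes
    server master0  192.168.22.10:6443 check
    server master1  192.168.22.11:6443 check
    server master2  192.168.22.12:6443 check
```

Note that the bootstrap node must be in the 6443 and 22623 backends only until the control plane is up, after which it is removed.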
Phase III: The Bootstrap Sequence
Bootstrap Node: Power on the bootstrap VM. It pulls its configuration from your internal HTTP server and initiates the control plane creation.
Control Plane (Masters): Boot the three master nodes. They communicate with the bootstrap node to form the etcd quorum.
Compute Nodes (Workers): Once the control plane is healthy, boot the worker nodes. In a UPI context, you must manually approve the Certificate Signing Requests (CSRs) to allow workers to join the cluster.
This diagram illustrates a standard OpenShift User-Provisioned Infrastructure (UPI) deployment within a disconnected or managed local network. It utilizes a "Helper Node" to manage the cluster's lifecycle and infrastructure services.
The Helper Node
The Helper Node is the backbone of this setup. It acts as a bridge between the two networks with two network interfaces:
Interface 1 (ens192): External zone (192.168.0.X).
Interface 2 (ens224): Internal zone (192.168.22.1).
It provides critical infrastructure services that OpenShift requires to function:
DNS (BIND): Resolves cluster hostnames and API endpoints.
DHCP: Assigns static IPs to the cluster nodes.
NAT Gateway: Allows internal nodes to access the internet through the Helper Node.
Load Balancer (HAProxy): Manages traffic to the Control Plane API and the Ingress router.
Web/File Server (Apache): Hosts the "Ignition" files used during the automated installation.
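For the NAT Gateway role listed above, a minimal sketch on a firewalld-based helper node might look like this (it assumes ens192 is assigned to the external zone, as in the diagram):

```shell
# Let the helper node route traffic for the internal 192.168.22.0/24 network
sysctl -w net.ipv4.ip_forward=1

# Masquerade internal traffic leaving through the external interface
firewall-cmd --permanent --zone=external --add-masquerade
firewall-cmd --reload
```

With this in place, cluster nodes can use 192.168.22.1 as their default gateway.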
Cluster Node Roles
The cluster is composed of six virtual machines, each with specific resource allocations:
The Bootstrap Node
Purpose: A temporary node used only during the initial installation. It orchestrates the creation of the Control Plane.
Once the Control Plane is healthy, this node is usually destroyed or shut down.
The Control Plane
Purpose: The "brains" of the cluster. These three nodes run the API server, etcd (the database), and controllers.
High Availability: Having three nodes ensures the cluster remains operational even if one node fails.
The Worker Nodes
Purpose: This is where your actual applications, containers, and pods run.
Phase 1: Environment Preparation
Before starting the installation, you must set up a helper node. This node acts as the orchestration hub and provides critical network services to the cluster.
OS Installation: Install a stable Linux distribution (e.g., CentOS 8 or RHEL 8) on the helper node.
Install Essential Tools:
OpenShift CLI (oc): Used to interact with the cluster.
Kubernetes CLI (kubectl): For lower-level resource management.
OpenShift Installer (openshift-install): The binary that generates Ignition files.
Network Services:
Web Server (Apache/Nginx): To host the Ignition files and RHCOS images that the bare metal nodes will pull during boot.
DNS: Ensure you have configured DNS records for:
api.<cluster_name>.<domain>
api-int.<cluster_name>.<domain>
*.apps.<cluster_name>.<domain>
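For illustration, a BIND zone fragment covering these records might look like the following (the cluster name ocp4 and domain example.internal are assumptions, and all records point at the helper node, which runs HAProxy):

```
; Fragment of the zone file for ocp4.example.internal
api      IN A 192.168.22.1    ; external API endpoint (HAProxy on the helper)
api-int  IN A 192.168.22.1    ; internal API endpoint
*.apps   IN A 192.168.22.1    ; wildcard for application routes via Ingress
```

The wildcard *.apps record is what lets every application route resolve without adding per-application DNS entries.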
Load Balancer: Configure a load balancer (like HAProxy) to route traffic to the control plane and worker nodes.
Phase 2: Generating Installation Files
The installation is driven by "Ignition" files, which tell the CoreOS nodes how to configure themselves.
Create Installation Directory:

```shell
mkdir -p ~/ocp-install        # directory name is illustrative
cp install-config.yaml ~/ocp-install/
```

Customize install-config.yaml:
- Insert your Pull Secret from Red Hat.
- Add your SSH Public Key for node access.
- Define the networking CIDRs and cluster name.
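A minimal install-config.yaml for a bare-metal UPI install might look like the following sketch (domain, cluster name, and CIDRs are placeholders; compute replicas is 0 because workers are provisioned manually in UPI):

```yaml
apiVersion: v1
baseDomain: example.internal          # placeholder domain
metadata:
  name: ocp4                          # placeholder cluster name
networking:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
compute:
  - name: worker
    replicas: 0                       # UPI: workers are booted manually later
controlPlane:
  name: master
  replicas: 3
platform:
  none: {}                            # "none" platform = user-provisioned
pullSecret: '<your-pull-secret>'
sshKey: '<your-ssh-public-key>'
```

Keep a backup of this file: the installer consumes and deletes it when generating manifests.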
Generate Manifests & Ignition Files:
```shell
./openshift-install create manifests --dir ~/ocp-install
# Generate the ignition configs
./openshift-install create ignition-configs --dir ~/ocp-install
```

Host the Files: Move the generated .ign files and the RHCOS metal image to your web server directory (e.g., /var/www/html/ocp4/).
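As a sketch, staging and verifying the files on the helper node could look like this (paths and the helper IP are illustrative):

```shell
# Copy the ignition files into the Apache document root
cp ~/ocp-install/*.ign /var/www/html/ocp4/
chmod 644 /var/www/html/ocp4/*.ign     # nodes fetch these anonymously over HTTP

# Confirm a node would be able to download them (expect HTTP 200)
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.22.1/ocp4/bootstrap.ign
```

A quick curl check like this catches permission and SELinux issues before you waste a boot cycle on a node.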
Phase 3: Bootstrapping the Cluster
With the services ready, you can begin booting the physical hardware.
Boot the Bootstrap Node: Boot this node using the RHCOS ISO or PXE. Pass the URL of the bootstrap.ign file via the kernel arguments.
Boot the Control Plane (Masters): Boot the three master nodes, pointing them to the master.ign file URL.
Monitor the Bootstrap Process: Once this completes, you can safely decommission the Bootstrap node.
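The bootstrap phase can be watched from the helper node with the installer itself:

```shell
# Blocks until the control plane has formed and it is safe to
# remove the bootstrap VM from the load balancer and power it off
./openshift-install wait-for bootstrap-complete --dir ~/ocp-install --log-level=info
```

Remember to also remove the bootstrap entries from the HAProxy 6443 and 22623 backends at this point.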
Phase 4: Finalizing the Cluster
After the control plane is up, the remaining nodes must be joined and certificates approved.
Join Worker Nodes: Boot the worker nodes using the worker.ign file URL.
Approve CSRs (Certificate Signing Requests): Workers will not join until their certificates are approved.

```shell
oc get csr
# Approve all pending requests
oc get csr -o name | xargs oc adm certificate approve
```

Verify Cluster Health:

```shell
oc get co   # Check Cluster Operators status
```
Phase 5: Post-Installation
Access the Console: Retrieve the kubeadmin password and the Web Console URL from the auth directory:

```shell
# URL
cat ~/ocp-install/auth/console-url
# Login
cat ~/ocp-install/auth/kubeadmin-password
```
Configure Storage: Define your StorageClasses (e.g., NFS, OCS, or local storage) to allow applications to persist data.
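For example, with an NFS export on the helper node you can register storage as a static PersistentVolume, which needs no dynamic provisioner (server and path below are placeholders):

```yaml
# A static NFS-backed PersistentVolume; pods claim it via a matching PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                 # NFS supports shared read-write access
  nfs:
    server: 192.168.22.1            # placeholder: the helper node's NFS export
    path: /exports/ocp4
  persistentVolumeReclaimPolicy: Retain
```

For dynamic provisioning you would instead deploy a provisioner (e.g., an NFS external provisioner or OCS) and define a StorageClass that references it.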
Set Up Identity Providers: Transition from the temporary kubeadmin user to a permanent solution like LDAP or OAuth.
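As a simple starting point, an htpasswd identity provider can be wired in like this (the file name, secret name, and credentials are placeholders):

```shell
# Create a local credentials file and store it as a secret in openshift-config
htpasswd -c -B -b users.htpasswd admin 'ChangeMe123'
oc create secret generic htpass-secret \
  --from-file=htpasswd=users.htpasswd -n openshift-config

# Point the cluster OAuth configuration at the secret
cat <<'EOF' | oc apply -f -
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF
```

Once a new user has been granted cluster-admin, the kubeadmin user can be removed.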