Configure Persistent Image Registry in OpenShift using NFS

In this article, we will see how to configure a persistent image registry in OpenShift using NFS, with PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources.

By default, the OpenShift installer configures a registry and sets up its volume by exporting an NFS share from the master node. This is not ideal for a production setup, so we usually need to configure dedicated persistent storage for the registry.

Verify that the OCP internal registry is running and includes a default PersistentVolumeClaim (PVC) named registry-claim.

Step 1: Log in to the master node as the system:admin user and select the default project.

[root@master ~]# oc login -u system:admin

Logged into "https://master.lab.example.com:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

* default

kube-system

logging

management-infra

openshift

openshift-infra

Using project "default".

 

Step 2: Verify that the docker-registry pod is running and find the pod name

[root@master ~]# oc get pods

NAME                       READY     STATUS    RESTARTS   AGE

docker-registry-6-d21wk    1/1       Running   1          21h

registry-console-1-ph7zv   1/1       Running   1          21h

router-1-vi46b             1/1       Running   1          21h

Step 3: Verify the default persistent volume and persistent volume claim created by the installer

[root@master ~]# oc get pv; oc get pvc

NAME              CAPACITY   ACCESSMODES   ..   STATUS    CLAIM

registry-volume   5Gi        RWX           ..   Bound     default/registry-claim

 

NAME             STATUS    VOLUME            CAPACITY   ACCESSMODES   AGE

registry-claim   Bound     registry-volume   5Gi        RWX           13h

 

Step 4: Use the oc volume pod command to determine whether the docker-registry pod identified in the previous step has the PVC registry-claim defined.

[root@master ~]# oc volume pod docker-registry-6-d21wk

pods/docker-registry-6-d21wk

pvc/registry-claim (allocated 5GiB) as registry-storage

mounted at /registry

secret/registry-certificates as volume-a579i

mounted at /etc/secrets

secret/registry-token-fnw7y as registry-token-fnw7y

mounted at /var/run/secrets/kubernetes.io/serviceaccount

 

Step 5: Find the registry DeploymentConfig name

[root@master ~]# oc status

In project default on server https://master.lab.example.com:8443

https://docker-registry-default.cloudapps.lab.example.com (passthrough) to pod port 5000-tcp (svc/docker-registry)

dc/docker-registry deploys docker.io/openshift3/ose-docker-registry:v3.4.0.39

deployment #6 deployed 13 hours ago - 1 pod

 

Step 6: Verify that the pod mounts the default PVC to /registry from the default registry DeploymentConfig

[root@master ~]# oc volume dc docker-registry

deploymentconfigs/docker-registry

pvc/registry-claim (allocated 5GiB) as registry-storage

mounted at /registry

secret/registry-certificates as volume-dad50

mounted at /etc/secrets

 

Step 7: Verify that the current registry DeploymentConfig shows volumes and volumeMounts attributes

[root@master ~]# oc get dc docker-registry -o json | less

"spec": {
    "volumes": [
        {
            "name": "registry-storage",
            "persistentVolumeClaim": {
                "claimName": "registry-claim"
            }
        },
...
    "volumeMounts": [
        {
            "name": "registry-storage",
            "mountPath": "/registry"
        },
...
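To extract just the claim name without paging through the full JSON, a jsonpath query can be used (a quick sketch, assuming jsonpath output is available in this oc release; in a DeploymentConfig the pod volumes live under spec.template.spec):

[root@master ~]# oc get dc docker-registry -o jsonpath='{.spec.template.spec.volumes[*].persistentVolumeClaim.claimName}'

This should print registry-claim at this point in the procedure.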

Step 8: Create an NFS share on the master host and export it squashed to the nfsnobody user. The reason for this is that each container runs with a random UID; without squashing, the NFS share would not be accessible inside the pod.

[root@master ~]# mkdir -p /var/export/registryvol

[root@master ~]# chown nfsnobody:nfsnobody /var/export/registryvol

[root@master ~]# chmod 700 /var/export/registryvol

Export the directory:

[root@master ~]# vi /etc/exports.d/training-registry.exports

/var/export/registryvol *(rw,async,all_squash)

Save and exit the file.

[root@master ~]# exportfs -a

[root@master ~]# showmount -e

Export list for master.lab.example.com:

/var/export/registryvol *
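Before wiring the share into OpenShift, it is worth sanity-checking the export from another host (a quick test, shown here with a generic node prompt, assuming nfs-utils is installed and /mnt is free as a temporary mount point; any file created should appear owned by nfsnobody because of all_squash):

[root@node ~]# mount -t nfs master.lab.example.com:/var/export/registryvol /mnt

[root@node ~]# touch /mnt/testfile && ls -l /mnt/testfile

[root@node ~]# rm -f /mnt/testfile && umount /mnt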

 

Step 9: On the master host, create a new PersistentVolume (PV) resource that uses the NFS share exported above. The PV resource definition in JSON format follows.

[root@master ~]# vi training-registry-volume.json

{
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
        "name": "training-registry-volume",
        "labels": {
            "deploymentconfig": "docker-registry"
        }
    },
    "spec": {
        "capacity": {
            "storage": "10Gi"
        },
        "accessModes": [ "ReadWriteMany" ],
        "nfs": {
            "path": "/var/export/registryvol",
            "server": "master.lab.example.com"
        }
    }
}

Step 10: Create the PV using the oc create command and check its status.

[root@master ~]# oc create -f training-registry-volume.json

persistentvolume "training-registry-volume" created

[root@master ~]# oc get pv

NAME                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM

registry-volume            5Gi        RWX           Retain          Bound       default/registry-claim

training-registry-volume   10Gi       RWX           Retain          Available
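To double-check the NFS server and path recorded in the new PV, describe it; the Source section of the output should show the NFS server and export path entered in the JSON definition:

[root@master ~]# oc describe pv training-registry-volume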

Step 11: On the master host, create the PersistentVolumeClaim (PVC) definition.

[root@master ~]#  vi /root/DO280/labs/deploy-registry/training-registry-pvclaim.json

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "training-registry-pvclaim",
        "labels": {
            "deploymentconfig": "docker-registry"
        }
    },
    "spec": {
        "accessModes": [ "ReadWriteMany" ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}

Step 12: Create the PVC using the oc create command and check its status.

[root@master ~]# oc create -f training-registry-pvclaim.json

persistentvolumeclaim "training-registry-pvclaim" created

[root@master ~]# oc get pvc

NAME                        STATUS    VOLUME                     CAPACITY   ACCESSMODES   AGE

registry-claim              Bound     registry-volume            5Gi        RWX           17h

training-registry-pvclaim   Bound     training-registry-volume   10Gi       RWX           55s
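Note that because the claim defines no selector, the PVC is bound to a PV by matching capacity and access modes; the deploymentconfig label here is informational only. To confirm which volume the claim was bound to:

[root@master ~]# oc describe pvc training-registry-pvclaim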

 

Step 13: Attach the new PVC to the docker-registry deployment configuration with the oc volume command, as shown below.

[root@master ~]# oc volume dc docker-registry \

--add --overwrite -t pvc \

--claim-name=training-registry-pvclaim --name=registry-storage

deploymentconfig "docker-registry" updated

 

Note: --claim-name specifies the PVC name and --name specifies the pod volume name.

 

Step 14: Verify that the DeploymentConfig of docker-registry was changed to use the new PVC

[root@master ~]# oc get dc docker-registry -o json  | less

"spec": {
    "volumes": [
        {
            "name": "registry-storage",
            "persistentVolumeClaim": {
                "claimName": "training-registry-pvclaim"
            }
        },
...
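As an aside, if the new deployment were to fail (for example, because the NFS export is unreachable from the node running the pod), the previous working configuration could be restored with oc rollback, which reverts the DeploymentConfig to its last successful deployment:

[root@master ~]# oc rollback docker-registry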

Step 15: Verify that the DeploymentConfig docker-registry started a new registry pod after detecting that the deployment configuration had been changed

[root@master ~]# watch oc status -v

In project default on server https://master.lab.example.com:8443

https://docker-registry-default.cloudapps.lab.example.com (passthrough) to pod port 5000-tcp (svc/docker-registry)

dc/docker-registry deploys docker.io/openshift3/ose-docker-registry:v3.4.0.39

deployment #7 deployed about a minute ago - 1 pod

deployment #6 deployed 17 hours ago

 

Step 16: Verify that the new docker-registry pod is running.

[root@master ~]# oc get pods

NAME                       READY     STATUS    RESTARTS   AGE

docker-registry-7-1gwd4    1/1       Running   0          9m

registry-console-1-zlrry   1/1       Running   2          17h

router-1-32toa             1/1       Running   2          17h
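To confirm that the registry is really writing to the NFS share, push or build an image and then list the export directory on the master. A Docker Registry v2 backend normally lays its data out under docker/registry/v2/repositories, though the exact paths may vary:

[root@master ~]# ls -lR /var/export/registryvol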

 

This completes the configuration of the OpenShift image registry with persistent storage using NFS.

Installation of Red Hat OpenShift Platform

In this article, we will see how to install the OpenShift platform step by step on Red Hat Enterprise Linux 7. This installation uses three machines: one works as the master, another hosts pods (collections of containers), and the third is a workstation that hosts a private image registry for OpenShift.

The master runs OpenShift core services such as authentication, the Kubernetes master services, the etcd daemon, the scheduler, and management/replication. The node runs applications inside containers, which are in turn grouped into pods; it also runs the Kubernetes kubelet and kube-proxy daemons.

The Kubernetes scheduling unit is the pod, which is a grouping of containers sharing a virtual network device, internal IP address, TCP/UDP ports, and persistent storage. A pod can be anything from a complete enterprise application, including each of its layers as a distinct container, to a single microservice inside a single container. For example, a pod with one container running PHP under Apache and another container running MySQL.

Kubernetes also manages replicas to scale pods. A replica is a set of pods sharing the same definition. For example, a replica consisting of many Apache+PHP pods running the same container image could be used for horizontally scaling a web application.

The following figure shows the typical workflow of the OpenShift cloud platform.

[Figure: OpenShift cloud platform workflow]

Prior to installation, make sure all systems are registered with Red Hat Subscription Management (not with RHN). The following subscriptions and repositories are required for the OpenShift installation:

an OpenShift Container Platform subscription (version 3.0 or 3.4), the RHEL channel (rhel-7-server-rpms), rhel-7-server-extras-rpms (required for the docker installation), and rhel-7-server-optional-rpms.

To enable the required channels, use the command subscription-manager repos --enable.

Prerequisites before installation:

  • Configure passwordless SSH between the master and the node.
  • Both master and node must have static IP addresses with resolvable DNS hostnames.
  • The NetworkManager service must be enabled and running on both master and node.
  • The firewalld service must be disabled.
  • Configure a wildcard DNS zone. This is needed by the OpenShift router (the router is basically a pod that runs on the node); see the example record after this list.
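For example, if the wildcard zone is served by BIND, a single wildcard A record pointing at the host that runs the router pod is enough (a sketch only; adjust the sub-domain and IP address to your environment):

; in the zone file for cloudapps.test.example.com
*.cloudapps.test.example.com.    IN    A    172.25.0.11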

Installation procedure:

Master Server: master.test.example.com 172.25.0.10

Node Server: node.test.example.com 172.25.0.11

Workstation Server: workstation.test.example.com 172.25.0.9

Sub-domain Name: cloudapps.test.example.com

Step 1: Configure passwordless SSH between the master and node servers.

[root@master ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
F5:8e:39:3d:a6:64:66:c7:3c:03:cb:fd:48:7a:26:e9
root@master.test.example.com
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|          .      |
|         . .     |
|        S . .    |
|         . @     |
|          @. &   |
|         =oBo*   |
|         .E+. .  |
+-----------------+

Copy the SSH key to the node server as well as to the master server itself; this is needed because the OpenShift installer copies installation files from the master server to the node server.

[root@master ~]# ssh-copy-id root@node.test.example.com

[root@master ~]# ssh-copy-id root@master.test.example.com
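A quick way to confirm that the passwordless setup works before running the installer; this should print the node's hostname without prompting for a password:

[root@master ~]# ssh root@node.test.example.com hostname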

Step 2: Stop and disable the firewalld service.

[root@master ~]# systemctl stop firewalld

[root@master ~]# systemctl disable firewalld

[root@node ~]# systemctl stop firewalld

[root@node ~]# systemctl disable firewalld

Step 3: Copy the SSL certificate from the workstation to the master and node servers. (Please see the earlier post on how to configure a private image registry on the workstation.)

[root@master ~]# scp root@workstation:/etc/pki/tls/certs/example.com.crt \
/etc/pki/ca-trust/source/anchors/

Add the certificate to the trusted sources:
[root@master ~]# update-ca-trust extract

Repeat the same on Node server.

[root@node~]# scp root@workstation:/etc/pki/tls/certs/example.com.crt \
/etc/pki/ca-trust/source/anchors/

Add the certificate to the trusted sources:
[root@node~]# update-ca-trust extract

Step 4: Install the docker package and edit the docker configuration to set up the internal private registry and block the public docker registries.

[root@master ~]# yum install -y docker

[root@master ~]# vi /etc/sysconfig/docker

#ADD_REGISTRY='--add-registry registry.access.redhat.com'
ADD_REGISTRY='--add-registry workstation.test.example.com:5000'
BLOCK_REGISTRY='--block-registry docker.io --block-registry registry.access.redhat.com'

Save and exit the file.

Repeat the same on Node server.

[root@node ~]# yum install -y docker

[root@node ~]# vi /etc/sysconfig/docker

#ADD_REGISTRY='--add-registry registry.access.redhat.com'
ADD_REGISTRY='--add-registry workstation.test.example.com:5000'
BLOCK_REGISTRY='--block-registry docker.io --block-registry registry.access.redhat.com'

Save and exit the file.

Step 5: Set up storage for docker. Create the docker-storage-setup configuration file inside the /etc/sysconfig directory, specify the device name and volume group name, and enable the LVM thin pool feature.

[root@master ~]# vi /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes

[root@master ~]# lvmconf --disable-cluster
[root@master ~]# docker-storage-setup

Repeat the same on Node server.

[root@node ~]# vi /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes

[root@node ~]# lvmconf --disable-cluster
[root@node ~]# docker-storage-setup

Examine the newly created docker thin pool; this will host the storage for docker container images.

[root@master ~]# lvs /dev/docker-vg/docker-pool
LV          VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
docker-pool docker-vg twi-a-t--- 10.45g             0.00   0.20

Start and enable the docker service on both the master and node servers.

[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker

[root@node~]# systemctl start docker
[root@node~]# systemctl enable docker
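Optionally, confirm that docker is now using the devicemapper storage driver backed by the new thin pool (the exact output fields vary by docker version):

[root@master ~]# docker info | grep -A3 'Storage Driver'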

Step 6: Install the packages and images required by the installer.

The following RPM packages are required:

wget
git
net-tools
bind-utils
iptables-services
bridge-utils
atomic-openshift-docker-excluder
atomic-openshift-excluder
atomic-openshift-utils

The following container images are required:

openshift3/ose-haproxy-router
openshift3/ose-deployer
openshift3/ose-sti-builder
openshift3/ose-pod
openshift3/ose-docker-registry
openshift3/ose-docker-builder
openshift3/registry-console

Additionally, the following application images are useful but optional.

openshift3/ruby-20-rhel7
openshift3/mysql-55-rhel7
openshift3/php-55-rhel7
jboss-eap-6/eap64-openshift
openshift3/nodejs-010-rhel7

[root@master ~]# yum -y install atomic-openshift-docker-excluder \
atomic-openshift-excluder atomic-openshift-utils \
bind-utils bridge-utils git \
iptables-services net-tools wget

[root@node~]# yum -y install atomic-openshift-docker-excluder \
atomic-openshift-excluder atomic-openshift-utils \
bind-utils bridge-utils git \
iptables-services net-tools wget

Create the following script to fetch the images from the workstation server; it will be run on both the master and node servers.

[root@master~]# vi fetch.sh

#!/bin/bash

# Core OCP infrastructure images (tagged with the OCP version)
for image in \
openshift3/ose-haproxy-router openshift3/ose-deployer openshift3/ose-sti-builder \
openshift3/ose-pod openshift3/ose-docker-registry openshift3/ose-docker-builder \
openshift3/registry-console
do docker pull $image:v3.4.1.0; done

# Runtime images
for image in \
openshift3/ruby-20-rhel7 openshift3/mysql-55-rhel7 openshift3/php-55-rhel7 \
jboss-eap-6/eap64-openshift openshift3/nodejs-010-rhel7
do docker pull $image; done

# Sample images
for image in \
openshift/hello-openshift php-quote
do docker pull $image; done

[root@master~]# bash fetch.sh

Check the images using:

[root@master ~]# docker images

Copy the script to the node server and run it:

[root@master~]# scp fetch.sh root@node.test.example.com:/tmp/fetch.sh

[root@node~]# bash /tmp/fetch.sh

[root@node~]# docker images

Step 7: Run the installer.

Remove OpenShift package exclusions. When the atomic-openshift-excluder package was installed, it added an exclude line to the /etc/yum.conf file. The package exclusions need to be removed in order for the installation to succeed. Remove the package exclusions from the master and node hosts:

[root@master~]# atomic-openshift-excluder unexclude

[root@node ~]# atomic-openshift-excluder unexclude

Make a copy of the docker configuration file on both the master and the node.

[root@master ~]# cp /etc/sysconfig/docker /etc/sysconfig/docker-backup

[root@node~]# cp /etc/sysconfig/docker /etc/sysconfig/docker-backup

Now run the OpenShift installer on the master server only.

[root@master ~]# atomic-openshift-installer install

The installer displays a list of prerequisites and asks for confirmation to continue.

  • The installer asks the user to connect to remote hosts. Press Enter to continue.
  • The installer asks whether you want to install OCP or a standalone registry. Press Enter to accept the default value of 1, which installs OCP.
  • The installer prompts for details about the master node. Enter master.test.example.com as the hostname of the master, enter y to confirm that this host will be the master, and press Enter to accept the default rpm option.
  • You have added details for the OCP master; you also need to add an OCP node. Enter y at the Do you want to add additional hosts? prompt, enter node.test.example.com as the hostname of the node, enter N to confirm that this host will not be the master, and press Enter to accept the default rpm option.
  • The OpenShift cluster will have only two hosts. Enter N at the Do you want to add additional hosts? prompt.
  • The installer asks if you want to override the cluster host name. Press Enter to accept the default value of None.
  • The installer prompts you for a host where the storage for the OCP registry will be configured. Press Enter to accept the default value of master.test.example.com.
  • Enter cloudapps.test.example.com as the DNS sub-domain for the OCP router.
  • Accept the default value of none for both the http and https proxy.
  • The installer prints a final summary based on your input and asks for confirmation. Ensure that the hostname and IP address details of the master and node hosts are correct, and then enter y to continue.
  • Finally, enter y to start the installation.

The installation takes 15 to 20 minutes to complete, depending on the CPU, memory, and network capacity of the servers. If the installation is successful, you should see a "The installation was successful!" message at the end.
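The quick installer also records the answers you gave, so an interrupted run can be repeated unattended from the saved configuration (the file is normally written to ~/.config/openshift/installer.cfg.yml):

[root@master ~]# atomic-openshift-installer -u install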

Verify node and pod status.

[root@master ~]# oc get nodes
NAME                      STATUS                     AGE
master.test.example.com   Ready,SchedulingDisabled   9m
node.test.example.com     Ready                      9m

The master shows SchedulingDisabled because, by default, the installer marks it unschedulable so that user pods run only on the node.

Check the status of the pods that were created during the OCP installation:

[root@master ~]# oc get pods
NAME                        READY     STATUS              RESTARTS   AGE
docker-registry-6-deploy    0/1       ContainerCreating   0          12m
registry-console-1-deploy   0/1       ContainerCreating   0          11m
router-1-deploy             0/1       ContainerCreating   0          12m

 

Step 8: Configure the OpenShift router and registry.

By default, the OpenShift installer sets up the router and registry automatically. The OpenShift router is the ingress point for all external traffic destined for applications inside the OCP cloud. It runs as a pod on schedulable nodes and may need some post-installation adjustments in environments that do not point to the Red Hat subscriber private registry.

Note: The OpenShift router runs as a pod with a privileged security context constraint so that it can bind to TCP ports on the host itself. This is already configured by the installer. The default router implementation provided by OCP is based on a container image running HAProxy.

When installing OCP in an offline environment, the base OCP platform docker images can be pulled from a private registry hosted on a server inside the network. If the docker configuration has been changed to point to the internal private docker registry, a bug in the OCP installer causes it to overwrite the registry location and point to the Red Hat subscribers registry at registry.access.redhat.com. This causes the router and docker-registry pods to fail to start after the OCP install process is complete.

To fix this issue, revert to the backup of the docker configuration file (/etc/sysconfig/docker-backup) made earlier.

[root@master ~]# cp /etc/sysconfig/docker-backup /etc/sysconfig/docker
cp: overwrite '/etc/sysconfig/docker'? yes
[root@master ~]# systemctl restart docker

[root@node~]# cp /etc/sysconfig/docker-backup /etc/sysconfig/docker
cp: overwrite '/etc/sysconfig/docker'? yes
[root@node~]# systemctl restart docker

Use watch oc get pods and wait until the docker-registry and router pods have moved to a status of Running and then press Ctrl+C to exit:

[root@master ~]# watch oc get pods
NAME                        READY     STATUS             RESTARTS   AGE
docker-registry-6-y84m8     1/1       Running            0          1m
registry-console-1-8bmr4    0/1       ImagePullBackOff   0          1m
registry-console-1-deploy   1/1       Running            0          20m
router-1-00nd2              1/1       Running            0          1m

From the above status you can see that the registry-console pod does not have a status of Running, because the default configuration of the OCP installer tries to pull the registry-console image from registry.access.redhat.com. It may show a status of ImagePullBackOff, ErrImagePull, or Error.

Modify the deployment configuration for the registry console to point to workstation.test.example.com:5000, and then verify that all pods are running:

[root@master ~]# oc edit dc registry-console

This opens a vi buffer. Change the public Red Hat registry address to the private workstation registry. Search for the line below:

image: registry.access.redhat.com/openshift3/registry-console:3.3

Replace it with the following:

image: workstation.test.example.com:5000/openshift3/registry-console:3.3

Wait a minute, and you will see all pods in Running status.

[root@master ~]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-6-oytdi    1/1       Running   0          1m
registry-console-2-wijvb   1/1       Running   0          20s
router-1-7n637             1/1       Running   0          1m

Reinstate OpenShift package exclusions on both the master and node hosts to ensure that future package updates do not impact OpenShift:

[root@master~]# atomic-openshift-excluder exclude

[root@node ~]# atomic-openshift-excluder exclude

Step 9: Verify that the default router pod accepts requests from the DNS wildcard domain:

[root@master ~]# curl http://myapp.cloudapps.test.example.com
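Because no application named myapp actually exists yet, the point of this test is simply that the wildcard DNS record resolves and the router answers. The default HAProxy router typically returns HTTP 503 for a host it has no route for, which can be checked explicitly:

[root@master ~]# curl -s -o /dev/null -w '%{http_code}\n' http://myapp.cloudapps.test.example.com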

Step 10: Modify the image streams to pull images from the internal registry.

[root@master ~]# oc edit is -n openshift

The above command opens up a vi buffer which can be edited.

Replace all occurrences of registry.access.redhat.com with workstation.test.example.com:5000:

:%s/registry.access.redhat.com/workstation.test.example.com:5000/g
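After saving, you can spot-check one of the image streams to confirm the registry reference was changed (the php image stream is assumed to be present in the openshift namespace, as it is in a default OCP 3.4 install):

[root@master ~]# oc describe is php -n openshift | grep workstation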

 

At this stage, the installation of the OpenShift platform is complete. In the next article, we will see how to create users, projects, and resources in the OpenShift cluster, and how to deploy a simple application on the platform.

 

Overview of Red Hat OpenShift Enterprise

 

OpenShift is a container platform developed by Red Hat to develop, deploy, and run applications.
It is designed around the upstream community project OpenShift Origin, which provides an open source application container platform. All source code for the Origin project is available under the Apache License (Version 2.0) on GitHub.

OpenShift Origin is used in OpenShift Online, OpenShift Dedicated, and OpenShift Container Platform, which are different Red Hat software products. Built around a core of Docker container packaging and Kubernetes container cluster management, Origin is augmented by application lifecycle management functionality and DevOps tooling.

OpenShift Online is Red Hat’s public cloud application development and hosting service.
OpenShift Dedicated is Red Hat’s managed private cluster offering, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux.
OpenShift Container Platform (formerly known as OpenShift Enterprise) is Red Hat’s on-premise private platform as a service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux.

If you do not want to manage your own data center, you can use OpenShift Online, a public cloud platform provided by Red Hat.


OpenShift Enterprise builds on other open source projects such as Atomic, Docker, and Kubernetes. OpenShift services add authentication, security, scheduling, networking, storage, and application life-cycle management on top of standard Kubernetes orchestration.

Applications run as containers inside OpenShift Enterprise, isolated from each other on a single operating system. Containers have some benefits over virtual machines: they are lightweight, carrying only minimal operating system packages and the application's dependencies, and each container has its own storage and network isolation. This makes it possible to deploy applications rapidly.

The following diagram shows the software stack included in the OpenShift Enterprise product.

[Figure: OpenShift v3 software stack]

Starting from the bottom of the diagram:
1) Base operating system (Red Hat Enterprise Linux).

2) Docker: A container platform service.

3) Kubernetes: an orchestration tool originally designed and developed by Google, written in the Go programming language. It is used to manage the deployment of containers using templates.

4) Containerized services: these fulfill many PaaS infrastructure functions such as networking and authorization. Some run all the time, while others are started on demand. Runtimes and xPaaS are base container images ready for use by developers, each preconfigured with a particular runtime language or database.

5) DevOps tools and user experience: OpenShift provides web and CLI management tools for developers and system administrators, allowing the configuration and monitoring of both applications and OpenShift services and resources.

In upcoming articles, I will demonstrate how to install and configure OpenShift Enterprise on Red Hat Enterprise Linux.