Role of Device Special File in HP-UX

Device Special File Overview

UNIX applications access peripheral devices such as tape drives, disk drives, printers,

terminals, and modems via special files in the /dev directory called Device Special Files

(DSFs). Every peripheral device typically has one or more DSFs.

DSF Attributes

A DSF's attributes determine which device it accesses, and how:

  • Type: Access the device in block or character mode?
  • Permissions: Who can access the device?
  • Major#: Which kernel driver does the DSF use?
  • Minor#: Which device does the DSF use? And how?
  • Name: What is the DSF name?

Use ll to view a device file's attributes:

# ll /dev/*disk/disk*

brw-r----- 1 bin sys 3 0x000004 Jun 23 00:34 /dev/disk/disk30

brw-r----- 1 bin sys 3 0x000005 Jun 23 00:34 /dev/disk/disk31

crw-r----- 1 bin sys 22 0x000004 Jun 23 00:34 /dev/rdisk/disk30

crw-r----- 1 bin sys 22 0x000005 Jun 23 00:34 /dev/rdisk/disk31

The lsdev command lists the drivers configured in the kernel, and their associated major

numbers.

 

# lsdev

Character   Block   Driver   Class

   22          3    esdisk   disk

   23         -1    estape   tape

DSF Types: Legacy vs. Persistent

  • 11i v1 and v2 only support “legacy” DSFs
  • 11i v3 still supports legacy device files, but introduces new “persistent” DSFs

DSF Directories

DSFs are stored in a directory structure under /dev

Persistent DSFs: /dev/disk, /dev/rdisk, /dev/rtape, /dev/rchgr

Legacy DSFs: /dev/dsk, /dev/rdsk, /dev/rmt, /dev/rac

Legacy DSF Names

Legacy DSF names are based on a device path’s controller instance, target, and LUN

# ioscan -kf

Class I H/W Path Description

=====================================================

ext_bus 5 1/0/2/1/0.6.1.0.0 FCP Array Interface

disk 3 1/0/2/1/0.6.1.0.0.0.1 HP HSV101  1st path

ext_bus 7 1/0/2/1/0.6.2.0.0 FCP Array Interface

disk 6 1/0/2/1/0.6.2.0.0.0.1 HP HSV101  2nd path

ext_bus 9 1/0/2/1/1.1.2.0.0 FCP Array Interface

disk 9 1/0/2/1/1.1.2.0.0.0.1 HP HSV101  3rd path

ext_bus 11 1/0/2/1/1.1.3.0.0 FCP Array Interface

disk 12 1/0/2/1/1.1.3.0.0.0.1 HP HSV101  4th path

/dev/dsk/c11t0d1[options]   (c<controller instance> t<target> d<LUN>[options])

 

Persistent DSF Names

# ioscan -kfNn

Class I H/W Path Driver S/W State H/W Type Description

=================================================================

disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE HP HSV101

/dev/disk/disk30[options]   (disk<instance number>[options])

LUN, Disk, and DVD DSF Names

Legacy DSFs (Block DSF)

/dev/dsk/c5t0d1

/dev/dsk/c7t0d1

/dev/dsk/c9t0d1

/dev/dsk/c11t0d1

Legacy DSFs (RAW DSF)

/dev/rdsk/c5t0d1

/dev/rdsk/c7t0d1

/dev/rdsk/c9t0d1

/dev/rdsk/c11t0d1

Persistent DSF (Block DSF)

/dev/disk/disk30

Persistent DSF (RAW DSF)

/dev/rdisk/disk30

 

Boot Disk DSF Names

Integrity boot disks are subdivided into three “EFI” disk partitions

  • Each EFI partition requires block and raw DSFs

− Legacy DSFs identify EFI partitions via suffixes s1, s2, s3

− Persistent DSFs identify EFI partitions via suffixes p1, p2, p3

  • Though not shown below, boot disks may be multi-pathed, too

Execute the lvlnboot -v command to determine your boot disk device file

# lvlnboot -v

Boot Definitions for Volume Group /dev/vg00:

Physical Volumes belonging in Root Volume Group:

/dev/disk/diska_p2 — Boot Disk

Boot: lvol1 on: /dev/disk/diska_p2

Root: lvol3 on: /dev/disk/diska_p2

Swap: lvol2 on: /dev/disk/diska_p2

Dump: lvol2 on: /dev/disk/diska_p2, 0

Legacy Block DSF for boot partition

/dev/dsk/c0t1d0

/dev/dsk/c0t1d0s1

/dev/dsk/c0t1d0s2

/dev/dsk/c0t1d0s3

Legacy RAW DSF for boot partition

/dev/rdsk/c0t1d0

/dev/rdsk/c0t1d0s1

/dev/rdsk/c0t1d0s2

/dev/rdsk/c0t1d0s3

Persistent Block DSF for boot partition

/dev/disk/disk27

/dev/disk/disk27_p1

/dev/disk/disk27_p2

/dev/disk/disk27_p3

Persistent RAW DSF for boot partition

/dev/rdisk/disk27

/dev/rdisk/disk27_p1

/dev/rdisk/disk27_p2

/dev/rdisk/disk27_p3

 

Tape Drive DSF Names

Feature                                        Legacy DSFs in /dev/rmt      Persistent DSF in /dev/rtape
Best density, autorewind, AT&T style           c0t0d0BEST and 0m            tape0_BEST
Best density, no autorewind, AT&T style        c0t0d0BESTn and 0mn          tape0_BESTn
Best density, autorewind, Berkeley style       c0t0d0BESTb and 0mb          tape0_BESTb
Best density, no autorewind, Berkeley style    c0t0d0BESTnb and 0mnb        tape0_BESTnb
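
For example (hypothetical archives, device names as in the table above), writing to a no-rewind DSF leaves the tape positioned after the archive so a second archive can be appended, whereas an autorewind DSF rewinds the tape when the device is closed:

# tar -cvf /dev/rtape/tape0_BESTn /etc    # first archive; tape stays positioned after it
# tar -cvf /dev/rtape/tape0_BESTn /home   # second archive appended behind the first
# tar -cvf /dev/rtape/tape0_BEST /etc     # autorewind DSF: tape rewinds on close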

 

Listing Legacy DSFs

# ioscan -kfn list all devices and their legacy DSFs

# ioscan -kfnC disk list all disk class devices and their legacy DSFs

# ioscan -kfnC tape list all tape class drives and their legacy DSFs

# ioscan -kfnH 0/0/1/0/0.0.0 list a specific device/path and its legacy DSFs

# ioscan -kfn /dev/rmt/0m list a specific device/path and its legacy DSFs

 

Listing Persistent DSFs

# ioscan -kfnN list all devices and their persistent DSFs

# ioscan -kfnNC disk list all disk class devices and their persistent DSFs

# ioscan -kfnNC tape list all tape class drives and their persistent DSFs

# ioscan -kfnNH 64000/0xfa00/0x0 list a specific device and its persistent DSFs

# ioscan -kfnN /dev/rtape/tape0 list a specific device and its persistent DSFs

Correlating Persistent DSFs with LUNs and lunpaths

# ioscan -m lun

Class I H/W Path Driver SW State H/W Type Health Description

====================================================================

disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE online HP HSV101

1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

1/0/2/1/0.0x50001fe150031128.0x4001000000000000

1/0/2/1/1.0x50001fe15003112d.0x4001000000000000

1/0/2/1/1.0x50001fe150031129.0x4001000000000000

/dev/disk/disk30 /dev/rdisk/disk30

# ioscan -m lun -H 64000/0xfa00/0x4

Class I H/W Path Driver SW State H/W Type Health Description

====================================================================

disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE online HP HSV101

1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

1/0/2/1/0.0x50001fe150031128.0x4001000000000000

1/0/2/1/1.0x50001fe15003112d.0x4001000000000000

1/0/2/1/1.0x50001fe150031129.0x4001000000000000

/dev/disk/disk30 /dev/rdisk/disk30

 

# ioscan -m lun -D /dev/disk/disk30

Class I H/W Path Driver SW State H/W Type Health Description

====================================================================

disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE online HP HSV101

1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

1/0/2/1/0.0x50001fe150031128.0x4001000000000000

1/0/2/1/1.0x50001fe15003112d.0x4001000000000000

1/0/2/1/1.0x50001fe150031129.0x4001000000000000

/dev/disk/disk30 /dev/rdisk/disk30

 

Correlating Persistent DSFs with WWIDs:

View the WWID for all LUNs, or a specific LUN hardware path or DSF

# scsimgr get_attr -a wwid all_lun

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30

name = wwid

current = 0x600508b400012fd20000a00000250000

default =

saved =

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk31

name = wwid

current = 0x600508b400012fd20000900001900000

default =

# scsimgr get_attr -a wwid -H 64000/0xfa00/0x4

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30

name = wwid

current = 0x600508b400012fd20000a00000250000

default =

saved =

# scsimgr get_attr -a wwid -D /dev/rdisk/disk30

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30

name = wwid

current = 0x600508b400012fd20000a00000250000

default =

saved =

Recall that you can also use the scsimgr command to obtain a LUN’s LUNID.

# ioscan -m lun -D /dev/disk/disk30

Class I H/W Path Driver SW State H/W Type Health Description

====================================================================

disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE online HP HSV101

1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

1/0/2/1/0.0x50001fe150031128.0x4001000000000000

1/0/2/1/1.0x50001fe15003112d.0x4001000000000000

1/0/2/1/1.0x50001fe150031129.0x4001000000000000

/dev/disk/disk30 /dev/rdisk/disk30

# scsimgr get_attr \

-a lunid \

-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

name = lunid

current =0x4001000000000000 (LUN # 1, Flat Space Addressing)

default =

saved =

 

Correlating Persistent DSFs with Legacy DSFs:

Map all persistent DSFs to corresponding legacy DSFs

# ioscan -m dsf

Map a specific legacy DSF to an associated persistent DSF

# ioscan -m dsf /dev/dsk/c5t0d1

Persistent DSF Legacy DSF(s)

========================================

/dev/disk/disk30 /dev/dsk/c5t0d1

 

Map a specific persistent DSF to associated legacy DSFs:

# ioscan -m dsf /dev/disk/disk30

Persistent DSF Legacy DSF(s)

========================================

/dev/disk/disk30 /dev/dsk/c5t0d1

/dev/dsk/c7t0d1

/dev/dsk/c9t0d1

/dev/dsk/c11t0d1

 

Decoding Persistent and Legacy DSF Attributes:

Decode a legacy DSF’s major and minor numbers

# lssf /dev/rmt/c0t0d0BESTnb

stape card instance 0 SCSI target 0 SCSI LUN 0

Berkeley No-Rewind BEST density

at address 0/0/1/0/0.0.0 /dev/rmt/c0t0d0BESTnb

 

Decode a persistent DSF’s major and minor numbers:

# lssf /dev/rtape/tape0_BESTnb

estape Berkeley No-Rewind BEST density

at address 64000/0xfa00/0x0 /dev/rtape/tape0_BESTnb

 

Managing Device Files

  • HP-UX automatically creates DSFs for most devices during system startup
  • HP-UX 11i v3 automatically creates persistent DSFs for dynamically added LUNs, too
  • HP-UX also provides tools for manually creating and managing device files
  • insf Create default DSFs for auto-configurable devices
  • mksf Create non-default DSFs for auto-configurable devices
  • mknod Create custom DSFs for non-auto-configurable devices
  • rmsf Remove devices and DSFs

Creating DSFs via insf

Scan for new hardware

# ioscan

Create DSFs for newly added devices

# insf -v

insf: Installing special files for stape instance 0 address 0/1/1/1.4.0

insf: Installing special files for estape instance 1 address

64000/0xfa00/0x0

making rtape/tape1_BEST c 23 0x000009

making rtape/tape1_BESTn c 23 0x00000b

making rtape/tape1_BESTb c 23 0x00000c

making rtape/tape1_BESTnb c 23 0x00000d

 

Create DSFs for new devices and re-create missing DSFs for existing devices

# insf -v -e

insf: Installing special files for stape instance 0 address 0/1/1/1.4.0

insf: Installing special files for estape instance 1 address

64000/0xfa00/0x9

making rtape/tape1_BEST c 23 0x000009

making rtape/tape1_BESTn c 23 0x00000b

making rtape/tape1_BESTb c 23 0x00000c

making rtape/tape1_BESTnb c 23 0x00000d

(creates DSFs for all other devices, too)

 

Create or recreate DSFs for a specific hardware path or class

# insf -v -e -H 64000/0xfa00/0x0

insf: Installing special files for estape instance 1 address

64000/0xfa00/0x0

making rtape/tape1_BEST c 23 0x000009

making rtape/tape1_BESTn c 23 0x00000b

making rtape/tape1_BESTb c 23 0x00000c

making rtape/tape1_BESTnb c 23 0x00000d

 

# insf -v -e -C estape

insf: Installing special files for stape instance 0 address 0/1/1/1.4.0

insf: Installing special files for estape instance 1 address

64000/0xfa00/0x0

making rtape/tape1_BEST c 23 0x000009

making rtape/tape1_BESTn c 23 0x00000b

making rtape/tape1_BESTb c 23 0x00000c

making rtape/tape1_BESTnb c 23 0x00000d

 

Creating DSFs via mksf:

Use mksf to configure device files with other, non-default combinations of options.

Configure a DDS2, no-rewind DSF for the tape drive at 64000/0xfa00/0x0:

# mksf -v -H 64000/0xfa00/0x0 -b DDS2 -n

Creating DSFs via mknod

mknod syntax: mknod <device file name> <b|c (block/character)> <major#> <minor#>

  • If a device isn’t configurable via insf or mksf,

manually create DSFs with custom major/minor numbers using mknod

  • mknod must be used to create LVM volume group DSFs,

and may be necessary to create DSFs for other vendors’ devices

# mknod /dev/vg01/group c 64 0x010000
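
A typical sequence for a hypothetical second volume group, sketched before running vgcreate (vg01 and the 0x01 minor are illustrative; major 64 is the LVM group driver, and the minor number must be unique across volume groups):

# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# ll /dev/*/group        # confirm no other group file already uses minor 0x010000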

Removing DSFs via rmsf

List DSFs associated with non-existent “stale” devices (11i v3 only)

# lssf -s

Remove DSFs associated with non-existent “stale” devices (11i v3 only)

# rmsf -v -x

Remove a specific DSF

# rmsf -v /dev/disk/disk1

Remove all of the device files associated with a device, and the device definition

# rmsf -v -a /dev/disk/disk1

Or … specify the device’s hardware path

# rmsf -v -H 64000/0xfa00/0x1

Disabling and Enabling Legacy Mode DSFs

Determine whether legacy mode is currently enabled

# insf -v -L

Disable legacy mode and remove legacy mode DSFs

# rmsf -v -L

Re-enable legacy mode and recreate legacy DSFs

# insf -L


Configure New Hardware in HP-UX 11i v3

HP-UX systems have several hardware components:

  • One or more Itanium single-, dual-, or quad-core CPUs for processing data
  • One or more Cell Boards or Blades hosting CPU and memory
  • One or more System/Local Bus Adapters that provide connectivity to expansion buses
  • One or more PCI I/O expansion buses with slots for add-on Host Bus Adapters
  • One or more Host Bus Adapter cards for connecting peripheral devices
  • One or more Core I/O cards with built-in LAN, console, and boot disk connectivity
  • An iLO / Management Processor to provide console access and system management

Determining your Processor Type

On 11i v1 and v2 systems, you can determine your processor type via the SAM system

properties screen.

# sam -> Performance Monitors -> System Properties -> Processor

On Integrity systems, you can determine your processor type and configuration via the

machinfo command.

# machinfo

Online Replacement, Addition, Deletion (Interface Card OL*)

Some of the entry-class servers, and all of the current mid-range and high-end servers, now

support HP’s Interface Card OL* functionality, which makes it possible to add and replace

(11i v1, v2, and v3), or remove (11i v3 only) interface cards without shutting down the

system.

# olrad -q

 

Integrity Server Overview

Rackmount & Cell-Based Integrity Servers

High-End Cell-Based Server:

HP Integrity Superdome (64p/128c)

Mid-Range Cell-Based Servers:

HP Integrity rx8640 (16p/32c)

HP Integrity rx7640 (8p/16c)

 

Entry-Class rackmount Servers:

HP Integrity rx2800 i2 (2p/8c) New!

HP Integrity rx6600 (4p/8c)

HP Integrity rx3600 (2p/4c)

HP Integrity rx2660 (2p/4c)

 

 

Blade-Based Integrity Servers

High-End Server:

HP Integrity Superdome 2 (32p/128c) New!

 

Blade Servers:

Integrity BL890c i2 Blades (8p/32c) New!

Integrity BL870c i2 Blades (4p/16c) New!

Integrity BL860c i2 Blades (2p/8c) New!

Integrity BL870c Blades (4p/8c)

Integrity BL860c Blades (2p/4c)

 

Viewing the System Configuration

View the system model string

# model

# uname -a

 

View processor, memory, and firmware configuration information

# machinfo

 

View cell boards, interface cards, peripheral devices, and other components

# ioscan                      :To view all components

# ioscan -C cell              :To view cell board class components

# ioscan -C lan               :To view LAN interface class components

# ioscan -C disk              :To view disk class devices

# ioscan -C fc                :To view fibre channel interfaces

# ioscan -C ext_bus           :To view external bus (SCSI/FC interface) components

# ioscan -C processor         :To view processors

# ioscan -C tty               :To view serial (teletype) class components

 

Hardware Addresses

Legacy vs. Agile View Hardware Addresses

  • 11i v1 and v2 implement a “legacy” mass storage stack and addressing scheme
  • 11i v3 implements a new mass storage stack, with many new enhancements
  • 11i v3 uses new “agile view” addresses, but still supports legacy addresses, too

 

Legacy HBA Hardware Addresses

1/0/0/2/0

Cell /SBA/LBA/device/function

 

Legacy Parallel SCSI Hardware Addresses

1/0/0/2/0.1.0

HBA hardware address . target . LUN ID

 

Legacy FC Hardware Addresses

1/0/2/1/0.6.1.0.0.0.1

HBA hardware address . SAN domain . area . port . array LUN ID

 

Viewing Legacy HP-UX Hardware Addresses

# ioscan      //short listing of all devices

# ioscan -f  //full listing of all devices

# ioscan -kf   // full listing, using cached information

# ioscan -kfH 0/0/0/3/0    //full listing of all devices below 0/0/0/3/0

# ioscan -kfC disk    //full listing of “disk” class devices

 

Agile View HBA Hardware Addresses

1/0/0/2/0

Cell/ SBA /LBA /device/function

Agile View Parallel SCSI Hardware Addresses

1/0/0/2/0.0xa.0x0

HBA hardware address . target . LUN ID

 

Agile View FC Lunpath Hardware Addresses

1/0/2/1/0.0x64bits.0x64bits

HBA hardware address . WW port name (64 bits) . LUN address (64 bits)

 

Agile View FC LUN Hardware Path Addresses

64000/0xfa00/0x4

virtual root node / virtual bus / virtual LUN ID

Viewing LUN Hardware Paths via Agile View

Search and list all devices using legacy hardware addresses.

# ioscan

 

Search and list all devices using Agile View addresses.

# ioscan -N

 

Display a kernel-cached full list of devices using Agile View addressing.

# ioscan -kfN

Display a kernel-cached listing of disk class devices using Agile View addressing.

# ioscan -kfNC disk

Display a kernel-cached listing of a device at a specific hardware path.

# ioscan -kfNH 64000/0xfa00/0x4

Viewing LUNs and their lunpaths via Agile View

# ioscan -m lun [-H 64000/0xfa00/0x4]

# ioscan -m lun

Viewing HBAs and their lunpaths via Agile View

# ioscan -kfNH 1/0/2/1/0

Viewing LUN Health via Agile View

Report the health status of all disks/LUNs.

# ioscan -P health -C disk

 

Report the health status of a specific disk/LUN, or fibre channel adapter.

# ioscan -P health -H 64000/0xfa00/0x4

 

Report the status of all fibre channel adapters.

# ioscan -P health -C fc

 

Report the health status of a specific fibre channel adapter and its lunpaths.

# ioscan -P health -H 1/0/2/1/0

Viewing LUN Attributes via Agile View

Use a LUN hardware path to determine a disk’s WWID

 

# scsimgr get_attr -a wwid [all_lun]|[-H 64000/0xfa00/0x4]

name = wwid

current = 0x600508b400012fd20000a00000250000

default =

saved =

 

Use one of the LUN’s lunpath hardware addresses to determine a disk’s LUNID

# scsimgr get_attr \

-a lunid \

-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

name = lunid

current =0x4001000000000000 (LUN # 1, Flat Space Addressing)

default =

saved =

Obtaining LUN IDs

# scsimgr get_attr \

-a lunid \

-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

name = lunid

current =0x4001000000000000 (LUN # 1, Flat Space Addressing)

default =

saved =

Enabling and Disabling lunpaths via Agile View

Disable a lunpath

# scsimgr -f disable \

-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

LUN path 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

disabled successfully

 

Determine lunpath status

# ioscan -P health -H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

Class I H/W Path health

===================================================================

lunpath 5 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000 disabled

 

Reenable a lunpath

# scsimgr enable \

-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

LUN path 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000

enabled successfully

 

Slot Address Overview

  • HP-UX hardware addresses are useful when managing devices, but …
  • HP-UX slot addresses identify an interface card’s physical location on the system
  • Interface card slot addresses provide the following information (a worked example follows the list):

− The slot’s cabinet

− The slot’s I/O bay

− The slot’s I/O chassis

− The slot number
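
For example, in the olrad -q output below, slot address 0-0-1-7 refers to cabinet 0, I/O bay 0, I/O chassis 1, slot 7, which hosts the interface card at hardware path 1/0/2/1.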

Viewing Slot Addresses

# olrad -q

Driver(s) Capable

Slot Path Bus Max Spd Pwr Occu Susp OLAR OLD Max Mode

Num Spd Mode

0-0-1-1 1/0/8/1 396 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-2 1/0/10/1 425 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-3 1/0/12/1 454 266 266 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-4 1/0/14/1 483 266 66 On Yes No Yes Yes PCI-X PCI

0-0-1-5 1/0/6/1 368 266 66 On Yes No Yes Yes PCI-X PCI

0-0-1-6 1/0/4/1 340 266 266 On Yes No Yes Yes PCI-X PCI-X

0-0-1-7 1/0/2/1 312 133 133 On Yes No Yes Yes PCI-X PCI-X

0-0-1-8 1/0/1/1 284 133 133 On Yes No Yes Yes PCI-X PCI-X

# rad -q

Slot Path Bus Speed Power Occupied Suspended Capable

0-0-0-1 0/0/8/0 64 66 On Yes No Yes

0-0-0-2 0/0/10/0 80 66 On Yes No Yes

0-0-0-3 0/0/12/0 96 66 On Yes No Yes

Managing Cards and Devices

Installing Interface Cards w/out OL* (11i v1, v2, v3)

1) Verify card compatibility

2) Verify that the required driver is configured in the kernel

3) Properly shut down and power off the system

4) Install the interface card

5) Power up

6) Run ioscan to verify that the card is recognized

 

Installing Interface Cards with OL* (11i v1)

Installing a new interface card with OL* in 11i v1:

  • Verify card compatibility
  • Verify that the required driver is configured in the kernel
  • Go to the SAM “Peripheral Devices -> Cards” screen
  • Select an empty slot from the object list
  • Select “Actions -> Light Slot LED” to identify the card slot
  • Select “Actions -> Add” to analyze the slot
  • Insert the card as directed
  • Wait for SAM to power on, bind, and configure the card
  • Check ioscan to verify that the card is recognized

Installing Interface Cards with OL* (11i v2, v3)

Installing a new interface card with OLAR:

1) Verify card compatibility

2) Verify that the required driver is configured in the kernel

3) Go to the SMH “Peripheral Device Tool -> OLRAD Cards” screen

4) Select an empty slot

5) Click “Turn On/Off Slot LED”

6) Click “Add Card Online”

7) Click “Run Critical Resource Analysis”

8) Click “Power Off” to power off the slot

9) Insert the new card

10) Click “Bring Card Online”

11) Check ioscan to verify that the card is recognized

 

Installing New Devices (11i v1, v2, v3)

 

Configuring a new LUN or hot-pluggable device

  • Verify device compatibility
  • Verify that the required driver is configured in the kernel
  • Connect or configure the device
  • Run ioscan to add the device to the kernel I/O tree (not necessary in 11i v3)
  • Run insf to create device files (not necessary in 11i v3)
  • Run ioscan -kfn or ioscan -kfnN to verify the configuration (see the short sequence sketched below)
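
A minimal sketch of that sequence for a hypothetical new disk LUN (DSF names will vary; on 11i v3 the first two steps happen automatically):

# ioscan -fnC disk        # scan for the new LUN
# insf -e -C disk         # create any missing disk class DSFs
# ioscan -kfnNC disk      # verify the new device and its persistent DSFs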

 

 

 

 

Configuring a new non-hot-pluggable device

  • Verify device compatibility
  • Verify that the required driver is configured in the kernel
  • Shut down and power off the system
  • Connect the device
  • Power on and boot the system
  • Run ioscan -kfn or ioscan -kfnN to verify the configuration

 

User Administration in HP-UX 11i v3

Use the id command to determine a user’s UID and primary group membership.

# id user1

uid=301(user1) gid=301(class)

Use the groups command to determine a user’s secondary group memberships.

# groups user1

class class2 users

Three Key Files in User Administration

/etc/passwd

/etc/group

/etc/shadow (disabled by default)

To edit the /etc/passwd file:

# /usr/sbin/vipw

Use /usr/sbin/pwck to check the /etc/passwd file syntax.

 

Enable Long Usernames in HP-UX 11.31

11i v3 supports usernames up to 255 characters in length. However, this functionality must be manually enabled by temporarily stopping the pwgrd password/group hashing and caching daemon, executing the lugadmin (long user/group name) command, and restarting pwgrd.

# /sbin/init.d/pwgr stop

pwgrd stopped

# lugadmin -e

Warning: Long user/group name once enabled cannot

be disabled in future.

Do you want to continue [yY]: y

lugadmin: Note: System is enabled for

long user/group name

# /sbin/init.d/pwgr start

pwgrd started

To determine whether long usernames are enabled, execute lugadmin -l. An output of 64 indicates that the maximum username length is 8 characters; 256 indicates that long usernames are enabled.

# lugadmin -l

256

 

To determine your system’s maximum UID, check the MAXUID

parameter in /usr/include/sys/param.h.
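
A quick way to check it, assuming the standard header location:

# grep MAXUID /usr/include/sys/param.h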

 

Configuring Shadow Passwords:

By default, the /etc/shadow file doesn’t exist. Use the cookbook below to convert to a

shadow password system:

  1. Shadow password support is included by default in 11i v2 and v3. HP-UX 11i v1

administrators, however, must download and install the ShadowPassword patch bundle

from http://software.hp.com/. Use the swlist command to determine if the

product has already been installed.

# swlist ShadowPassword

  2. Run pwck to verify that there aren’t any syntax errors in your existing /etc/passwd file.

# pwck

  3. Use the pwconv command to move your passwords to the /etc/shadow file.

# pwconv

  4. Verify that the conversion succeeded. The /etc/passwd file should remain world-readable,

but the /etc/shadow file should only be readable by root. The encrypted

passwords in /etc/passwd should have been replaced by “x”s.

# ll /etc/passwd /etc/shadow

-r--r--r-- 1 root sys 914 May 18 14:35 /etc/passwd

-r-------- 1 root sys 562 May 18 14:35 /etc/shadow

  5. You can revert to the traditional non-shadowed password functionality at any time via the

pwunconv command.

# pwunconv

 

Enabling SHA-512 Passwords in /etc/shadow:

Traditionally, HP-UX has used a variation of the DES encryption algorithm to encrypt user

passwords in /etc/passwd. HP-UX 11i v2 and v3 now support the more secure SHA-512

algorithm if you install the Password Hashing Infrastructure patch bundle from

http://software.hp.com. HP-UX 11i v3 also supports long passwords up to 255

characters if you add the LongPass11i3 patch bundle, too. Use the following commands to

determine if your system has these patch bundles:

In 11i v2:

# swlist SHA

In 11i v3:

# swlist PHI11i3 LongPass11i3

These patches are not available for 11i v1.

After installing the software, add the following two lines to /etc/default/security to

enable SHA-512 password hashing:

# vi /etc/default/security

CRYPT_DEFAULT=6

CRYPT_ALGORITHMS_DEPRECATE=__unix__

 

Enabling Long Passwords in /etc/shadow:

On 11i v3 systems, you can also enable long passwords up to 255 characters in length by

adding this line to /etc/default/security:

# vi /etc/default/security

CRYPT_DEFAULT=6

CRYPT_ALGORITHMS_DEPRECATE=__unix__

LONG_PASSWORD=1

 

Creating User Accounts:

# useradd -o \ # allow a duplicate UID

-u 101 \ # define the UID

-g users \ # define the primary group

-G class,training \ # define secondary groups

-c "student user" \ # define the comment field

-m -d /home/user1 \ # make a home directory for the user

-s /usr/bin/sh \ # define the default shell

-e 1/2/09 \ # define an account expiration date

-p fnnmD.DGyptLU \ # specify an encrypted password

-t /etc/default/useradd \ # specify a template

user1 # define the username

 

Interactively set a password for the new account:

# passwd user1 # interactively specify a password or…

# passwd -d user1 # set a null password

# passwd -f user1 # force a password change at first login

 

Creating useradd Templates in /etc/default/

Administrators who manage many user accounts often configure useradd template files in

the /etc/default/ directory.

# useradd -D \ # update defaults for a template

-t /etc/default/useradd.cusers \ # template file location

-b /home \ # base for home directories

-c "C programmer" \ # comment

-g developer \ # primary group

-s /usr/bin/csh # default shell

 

To verify that the template was created, execute useradd with just the -D and -t options,

or simply cat the file.

# useradd -D -t /etc/default/useradd.cusers

GROUPID 20

BASEDIR /home

SKEL /etc/skel

SHELL /usr/bin/csh

INACTIVE -1

EXPIRE

COMMENT programmer

CHOWN_HOMEDIR no

CREAT_HOMEDIR no

ALLOW_DUP_UIDS no

 

The example below uses the new template to create a user account. Recall that -m creates a

home directory for the new user.

# useradd -m -t /etc/default/useradd.cusers user1

# tail -1 /etc/passwd

user1:*:101:20:programmer:/home/user1:/usr/bin/csh

 

Modifying User Accounts:

Modify a user account (Administrators):

# usermod -l user01 user1 # change the user’s username

# usermod -o -u 101 user1 # change the user’s UID

# usermod -g users user1 # change the user’s primary group

# usermod -G class,training user1 # change the user’s secondary group(s)

# usermod -c "student" user1 # change the user’s comment field

# usermod -m -d /home/user01 user1 # move the user’s home directory

# usermod -s /usr/bin/ksh user1 # change the user’s default shell

# usermod -e 1/3/09 user1 # change the user’s account expiration

# usermod -p fnnmD.DGyptLU user1 # non-interactively change a password

 

Modify a user password (Administrators):

# passwd user1 # interactively change a password

Modify a user account or password (Users):

$ passwd # change the user’s password

$ chsh user1 /usr/bin/ksh # change the user’s shell

$ chfn user1 # change the user’s comment field

 

Deactivate a user account

# passwd -l user1

Reactivate a user account

# passwd user1

Remove a user’s home directory

# rm -rf /home/user1

Or… Remove the user’s files from every directory

# find / -user user1 -type f -exec rm -i {} +

# find / -user user1 -type d -exec rmdir {} +

Or… Transfer ownership to a different user

# find / -user user1 -exec chown user2 {} +

 

Delete a user account, but leave the user’s files untouched

# userdel user1

Delete a user account and remove the user’s home directory

# userdel -r user1

Or… Remove the user’s files from every directory

# find / -user user1 -type f -exec rm -i {} +

# find / -user user1 -type d -exec rmdir {} +

Or… Transfer ownership to a different user

# find / -user user1 -exec chown user2 {} +

 

Find files owned by non-existent users or groups

# find / -nouser -exec ll -d {} +

# find / -nogroup -exec ll -d {} +

 

Configuring Password Aging:

Password aging may be enabled via the /usr/bin/passwd command:

# passwd -n 7 -x 70 -w 14 user1

<min> argument rounded up to nearest week

<max> argument rounded up to nearest week

<warn> argument rounded up to nearest week

 

You can check the password status of a user’s account with the -s option.

# passwd -s user1

user1 PS 03/21/05 7 70 14

# passwd -sa

user1 PS 03/21/05 7 70 14

user2 PS

user3 PS

Configuring Password Policies:

 

# vi /etc/default/security

MIN_PASSWORD_LENGTH=

PASSWORD_MIN_UPPER_CASE_CHARS=

PASSWORD_MIN_LOWER_CASE_CHARS=

PASSWORD_MIN_DIGIT_CHARS=

PASSWORD_MIN_SPECIAL_CHARS=

PASSWORD_MAXDAYS=

PASSWORD_MINDAYS=

PASSWORD_WARNDAYS=
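
For example, the illustrative values below (chosen here only as a sample policy, not HP defaults) would require 8-character passwords containing at least one uppercase letter, one lowercase letter, one digit, and one special character, expiring every 90 days with a 14-day warning:

MIN_PASSWORD_LENGTH=8
PASSWORD_MIN_UPPER_CASE_CHARS=1
PASSWORD_MIN_LOWER_CASE_CHARS=1
PASSWORD_MIN_DIGIT_CHARS=1
PASSWORD_MIN_SPECIAL_CHARS=1
PASSWORD_MAXDAYS=90
PASSWORD_MINDAYS=7
PASSWORD_WARNDAYS=14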

 

Managing Groups:

Create a new group

# groupadd -g 200 accts

Change a group name

# groupmod -n accounts accts

 

Add, modify, or delete a list of users to or from a group:

# groupmod -a -l user1,user2 accounts add a list of users to a group

# groupmod -m -l user3,user4 accounts replace the list of users in a group

# groupmod -d -l user3,user4 accounts delete a list of users from a group

 

Delete a group:

# groupdel accounts

Change a specific user’s primary and secondary group membership:

# usermod -g users user1

# usermod -G class,training user1

 

View a user’s group memberships:

# groups user1

Automating User Account Creation:

Write a simple shell script to automatically create the user accounts. Initially, assign null passwords, but force the users to change their passwords after their first successful login. Assign /usr/bin/sh as the users’ startup shell.

Create a Shell script useradd_stud_accts.sh

#!/usr/bin/sh

n=1
while ((n<=50))
do
  echo stud$n
  useradd -m -s /usr/bin/sh stud$n
  passwd -d -f stud$n
  ((n=n+1))
done

 

Make script executable and run:

# chmod +x useradd_stud_accts.sh

# ./useradd_stud_accts.sh

 

To clean up the accounts, create script userdel_stud_accts.sh.

#!/usr/bin/sh

n=1
while ((n<=50))
do
  echo stud$n
  userdel stud$n
  rm -rf /home/stud$n
  ((n=n+1))
done

 

Managing Users and Groups via the SMH:

# smh -> Accounts for Users and Groups or…

# ugweb

 

Configure Persistent Image Registry in Openshift using NFS

In this article, we will see how to configure a persistent image registry in OpenShift by using NFS with PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources.

By default, the OpenShift installer configures a default registry. The installer sets up the volume for the registry by exporting an NFS volume from the master node. But this is not ideal for a production setup, so we usually need to configure persistent storage for the registry.

Verify that the OCP internal registry is running and includes a default PersistentVolumeClaim (PVC) named registry-claim.

Step 1: Login to master node with system user and select default project.

[root@master ~]# oc login -u system:admin

Logged into "https://master.lab.example.com:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

* default

kube-system

logging

management-infra

openshift

openshift-infra

Using project “default”.

 

Step 2: Verify that the docker-registry pod is running and find the pod name

[root@master ~]# oc get pods

docker-registry-6-d21wk    1/1       Running   1          21h

registry-console-1-ph7zv   1/1       Running   1          21h

router-1-vi46b             1/1       Running   1          21h

Step 3: Verify the default persistent volume and persistent volume claim created by the installer

[root@master ~]# oc get pv; oc get pvc

NAME              CAPACITY   ACCESSMODES   ..   STATUS    CLAIM

registry-volume   5Gi        RWX           ..   Bound     default/registry-claim

 

NAME             STATUS    VOLUME            CAPACITY   ACCESSMODES   AGE

registry-claim   Bound     registry-volume   5Gi        RWX           13h

 

Step 4: Use the oc volume pod command to determine if the docker-registry pod identified in the above step has a PVC defined as registry-claim

[root@master ~]# oc volume pod docker-registry-6-d21wk

pods/docker-registry-6-d21wk

pvc/registry-claim (allocated 5GiB) as registry-storage

mounted at /registry

secret/registry-certificates as volume-a579i

mounted at /etc/secrets

secret/registry-token-fnw7y as registry-token-fnw7y

mounted at /var/run/secrets/kubernetes.io/serviceaccount

 

Step 5: Find the registry DeploymentConfig name

[root@master ~]# oc status

In project default on server https://master.lab.example.com:8443

https://docker-registry-default.cloudapps.lab.example.com (passthrough) to pod port 5000-tcp (svc/docker-registry)

dc/docker-registry deploys docker.io/openshift3/ose-docker-registry:v3.4.0.39

deployment #6 deployed 13 hours ago – 1 pod

 

Step 6: Verify that the pod mounts the default PVC to /registry from the default registry DeploymentConfig

[root@master ~]# oc volume dc docker-registry

deploymentconfigs/docker-registry

pvc/registry-claim (allocated 5GiB) as registry-storage

mounted at /registry

secret/registry-certificates as volume-dad50

mounted at /etc/secrets

 

Step 7: Verify that the current registry DeploymentConfig shows volumes and volumeMounts attributes

[root@master ~]# oc get dc docker-registry -o json | less

"spec": {
    "volumes": [
        {
            "name": "registry-storage",
            "persistentVolumeClaim": {
                "claimName": "registry-claim"
            }
        },
    "volumeMounts": [
        {
            "name": "registry-storage",
            "mountPath": "/registry"
        },

Step 8: Create an NFS share on the master host and export it with the nfsnobody user. The reason behind this is that each container runs with a random UID; without squashing, the NFS share would not be accessible inside the pod.

[root@master ~]# mkdir -p /var/export/registryvol

[root@master ~]# chown nfsnobody:nfsnobody /var/export/registryvol

[root@master ~]# chmod 700 /var/export/registryvol

Export the folder

[root@master ~]# vi /etc/exports.d/training-registry.exports

/var/export/registryvol *(rw,async,all_squash)

Save and exit file.

[root@master ~]# exportfs -a

[root@master ~]# showmount -e

Export list for master.lab.example.com:

/var/export/registryvol *

 

Step 9: On the master host, create a new PersistentVolume (PV) resource which will use the NFS share from the master host. The following is the resource definition of the PV in JSON format.

[root@master ~]# vi training-registry-volume.json

{
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
        "name": "training-registry-volume",
        "labels": {
            "deploymentconfig": "docker-registry"
        }
    },
    "spec": {
        "capacity": {
            "storage": "10Gi"
        },
        "accessModes": [ "ReadWriteMany" ],
        "nfs": {
            "path": "/var/export/registryvol",
            "server": "master.lab.example.com"
        }
    }
}

Step 10: Create PV using oc create command and check PV status.

[root@master ~]# oc create -f training-registry-volume.json

persistentvolume "training-registry-volume" created

[root@master ~]# oc get pv

NAME                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM

registry-volume            5Gi        RWX           Retain          Bound       default/registry-claim

training-registry-volume   10Gi       RWX           Retain          Available

Step 11: On the master host, create the PersistentVolumeClaim (PVC) definition.

[root@master ~]#  vi /root/DO280/labs/deploy-registry/training-registry-pvclaim.json

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "training-registry-pvclaim",
        "labels": {
            "deploymentconfig": "docker-registry"
        }
    },
    "spec": {
        "accessModes": [ "ReadWriteMany" ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}

Step 12: Create PVC using oc create command and check PVC status.

[root@master ~]# oc create -f training-registry-pvclaim.json

persistentvolumeclaim “training-registry-pvclaim” created

[root@master ~]# oc get pvc

NAME                        STATUS    VOLUME                     CAPACITY   ACCESSMODES   AGE

registry-claim              Bound     registry-volume            5Gi        RWX           17h

training-registry-pvclaim   Bound     training-registry-volume   10Gi       RWX           55s

 

Step 13: Attach the PV to the docker registry's deployment configuration with the oc volume command, as below.

[root@master ~]# oc volume dc docker-registry \

--add --overwrite -t pvc \

--claim-name=training-registry-pvclaim --name=registry-storage

deploymentconfig "docker-registry" updated

 

Note: --claim-name specifies the PVC name and --name specifies the pod volume name.

 

Step 14: Verify that the DeploymentConfig of docker-registry was changed to use the new PVC

[root@master ~]# oc get dc docker-registry -o json  | less

"spec": {
    "volumes": [
        {
            "name": "registry-storage",
            "persistentVolumeClaim": {
                "claimName": "training-registry-pvclaim"
            }
        },

Step 15: Verify that the DeploymentConfig docker-registry started a new registry pod after detecting that the deployment configuration had been changed

[root@master ~]# watch oc status -v

In project default on server https://master.lab.example.com:8443

https://docker-registry-default.cloudapps.lab.example.com (passthrough) to pod port 5000-tcp (svc/docker-registry)

dc/docker-registry deploys docker.io/openshift3/ose-docker-registry:v3.4.0.39

deployment #7 deployed about a minute ago – 1 pod

deployment #6 deployed 17 hours ago

 

Step 16: Verify docker registry pod is running.

[root@master ~]# oc get pods

NAME                       READY     STATUS    RESTARTS   AGE

docker-registry-7-1gwd4    1/1       Running   0          9m

registry-console-1-zlrry   1/1       Running   2          17h

router-1-32toa             1/1       Running   2          17h

 

This completes the configuration of the OpenShift image registry with persistent storage using NFS.

Installation of Red Hat Openshift Platform

In this article, we will see how to install the OpenShift platform step by step on Red Hat Enterprise Linux 7. This installation includes three machines: one node works as the master, another hosts pods (collections of containers), and the third, the workstation, hosts a private image registry for OpenShift.

The master runs the OpenShift core services, such as authentication, the Kubernetes master services, the etcd daemon, the scheduler, and management/replication, while the node runs applications inside containers, which are in turn grouped into pods; the node also runs the Kubernetes kubelet and kube-proxy daemons.

The Kubernetes scheduling unit is the pod, which is a grouping of containers sharing a virtual network device, internal IP address, TCP/UDP ports, and persistent storage. A pod can be anything from a complete enterprise application, including each of its layers as a distinct container, to a single microservice inside a single container. For example, a pod with one container running PHP under Apache and another container running MySQL.

Kubernetes also manages replicas to scale pods. A replica is a set of pods sharing the same definition. For example, a replica consisting of many Apache+PHP pods running the same container image could be used for horizontally scaling a web application.
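
As a rough illustration of the pod concept described above (the pod name is hypothetical and this is not part of the installation procedure; the MySQL container would additionally need its MYSQL_* environment variables set to actually start), a two-container PHP + MySQL pod could be defined and created like this:

[root@master ~]# vi php-mysql-pod.json
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "php-mysql-pod"
    },
    "spec": {
        "containers": [
            { "name": "web", "image": "openshift3/php-55-rhel7" },
            { "name": "db",  "image": "openshift3/mysql-55-rhel7" }
        ]
    }
}
[root@master ~]# oc create -f php-mysql-pod.json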

The following figure shows the typical working of the OpenShift cloud platform.

[Figure: openshift_working — typical OpenShift cloud platform workflow]

Prior to installation, make sure all systems are subscribed and connected to Red Hat Subscription Management, not to RHN. The following subscriptions are required for the OpenShift installation:

an OpenShift Container Platform subscription (version 3.0 or 3.4), the RHEL channel (rhel-7-server-rpms), rhel-7-server-extras-rpms (required for the docker installation), and rhel-7-server-optional-rpms.

To enable the required channels, use the subscription-manager repos --enable command.
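
For example, using the repository IDs listed above (plus the OpenShift Container Platform repository for your version):

# subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-optional-rpms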

Prerequisites before installation:

  • Configure passwordless SSH between the master and the node.
  • The master and node must both have static IP addresses with resolvable DNS hostnames.
  • The NetworkManager service must be enabled and running on the master and node.
  • The firewalld service must be disabled.
  • Configure a wildcard DNS zone; this is needed by the OpenShift router (the OpenShift router is basically a pod that runs on the node). See the example record after this list.
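
A minimal example of such a wildcard record in a BIND zone file, assuming the router pod runs on the node (172.25.0.11) and the sub-domain used later in this article:

*.cloudapps.test.example.com.    IN    A    172.25.0.11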

Installation procedure:

Master Server: master.test.example.com 172.25.0.10

Node Server: node.test.example.com 172.25.0.11

Workstation Server: workstation.test.example.com 172.25.0.9

Sub-domain Name: cloudapps.test.example.com

Step 1: Configure passwordless SSH between the master and node servers.

[root@master ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
F5:8e:39:3d:a6:64:66:c7:3c:03:cb:fd:48:7a:26:e9
root@master.test.example.com
The key’s randomart image is:
+–[ RSA 2048]—-+
|                 |
|                 |
|          .      |
|         . .     |
|        S . .    |
|         . @     |
|          @. &   |
|         =oBo*   |
|         .E+. .  |
+—————–+

Copy the SSH key to the node server as well as to the master server itself; the reason is that the OpenShift installer copies installation files from the master server to the node server.

[root@master ~]# ssh-copy-id root@node.test.example.com

[root@master ~]# ssh-copy-id root@master.test.example.com

Step 2: Stop and Disable firewalld service.

[root@master ~]# systemctl stop firewalld

[root@master ~]# systemctl disable firewalld

[root@node ~]# systemctl stop firewalld

[root@node ~]# systemctl disable firewalld

Step 3: Copy the SSL certificate from the workstation to the master and node servers. (Please see the post on how to configure a private image registry on the workstation.)

[root@master ~]# scp root@workstation:/etc/pki/tls/certs/example.com.crt \
/etc/pki/ca-trust/source/anchors/

Add the certificate as a trusted source.
[root@master ~]# update-ca-trust extract

Repeat the same on Node server.

[root@node~]# scp root@workstation:/etc/pki/tls/certs/example.com.crt \
/etc/pki/ca-trust/source/anchors/

Add the certificate as a trusted source.
[root@node~]# update-ca-trust extract

Step 4: Install the docker package and edit the docker configuration to set up the internal private registry and block the public docker registries.

[root@master ~]# yum install -y docker

[root@master ~]# vi /etc/sysconfig/docker

#ADD_REGISTRY='--add-registry registry.access.redhat.com'
ADD_REGISTRY='--add-registry workstation.test.example.com:5000'
BLOCK_REGISTRY='--block-registry docker.io --block-registry registry.access.redhat.com'

Save and exit the file.

Repeat the same on Node server.

[root@node ~]# yum install -y docker

[root@node ~]# vi /etc/sysconfig/docker

#ADD_REGISTRY='--add-registry registry.access.redhat.com'
ADD_REGISTRY='--add-registry workstation.test.example.com:5000'
BLOCK_REGISTRY='--block-registry docker.io --block-registry registry.access.redhat.com'

Save and exit the file.

Step 5: Set up storage for docker. Create the docker-storage-setup file inside the /etc/sysconfig directory, specify the device name and volume group name, and enable the LVM thin pool feature.

[root@master ~]# vi /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes

[root@master ~]# lvmconf --disable-cluster
[root@master ~]# docker-storage-setup

Repeat the same on Node server.

[root@node ~]# vi /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes

[root@node ~]# lvmconf --disable-cluster
[root@node ~]# docker-storage-setup

Examine the newly created docker pool; this will host the storage for docker container images.

[root@master ~]# lvs /dev/docker-vg/docker-pool
LV          VG        Attr       LSize Pool Origin Data%  Meta%    Move  Log  Cpy%Sync  Convert
docker-pool docker-vg twi-a-t— 10.45g            0.00   0.20

Start and enable docker service on both master and node server.

[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker

[root@node~]# systemctl start docker
[root@node~]# systemctl enable docker

Step 6: Install packages and images required by installer.

The following rpm packages are required:

wget
git
net-tools
bind-utils
iptables-services
bridge-utils
atomic-openshift-docker-excluder
atomic-openshift-excluder
atomic-openshift-utils

The following container images are required:

openshift3/ose-haproxy-router
openshift3/ose-deployer
openshift3/ose-sti-builder
openshift3/ose-pod
openshift3/ose-docker-registry
openshift3/ose-docker-builder
openshift3/registry-console

Additionally, the following application images are useful but optional.

openshift3/ruby-20-rhel7
openshift3/mysql-55-rhel7
openshift3/php-55-rhel7
jboss-eap-6/eap64-openshift
openshift3/nodejs-010-rhel7

[root@master ~]# yum -y install atomic-openshift-docker-excluder \
atomic-openshift-excluder atomic-openshift-utils \
bind-utils bridge-utils git \
iptables-services net-tools wget

[root@node~]# yum -y install atomic-openshift-docker-excluder \
atomic-openshift-excluder atomic-openshift-utils \
bind-utils bridge-utils git \
iptables-services net-tools wget

Create the following script to fetch the images from the workstation server on both the master and node servers.

[root@master~]# vi fetch.sh

#!/bin/bash

for image in \
openshift3/ose-haproxy-router openshift3/ose-deployer openshift3/ose-sti-builder \
openshift3/ose-pod openshift3/ose-docker-registry openshift3/ose-docker-builder \
openshift3/registry-console
do docker pull $image:v3.4.1.0; done

#runtime images
for image in \
openshift3/ruby-20-rhel7 openshift3/mysql-55-rhel7 openshift3/php-55-rhel7 \
jboss-eap-6/eap64-openshift  openshift3/nodejs-010-rhel7
do docker pull $image; done

#sample image
for image in \
openshift/hello-openshift php-quote
do docker pull $image; done

[root@master~]# bash fetch.sh

Check the images using:

[root@master ~]# docker images

Copy the script to the node server:

[root@master~]# scp fetch.sh root@node.test.example.com:/tmp/fetch.sh

[root@node~]# bash /tmp/fetch.sh

[root@node~]# docker images

Step 7: Run the installer.

Remove OpenShift package exclusions. When the atomic-openshift-excluder package was installed, it added an exclude line to the /etc/yum.conf file. The package exclusions need to be removed in order for the installation to succeed. Remove the package exclusions from the master and node hosts:

[root@master~]# atomic-openshift-excluder unexclude

[root@node ~]# atomic-openshift-excluder unexclude

Make a copy of the docker configuration file on both the master and the node.

[root@master ~]# cp /etc/sysconfig/docker /etc/sysconfig/docker-backup

[root@node~]# cp /etc/sysconfig/docker /etc/sysconfig/docker-backup

Now run the OpenShift installer on the master server only.

[root@master ~]# atomic-openshift-installer install

The installer displays a list of pre-requisites and asks for confirmation to continue.

  • The installer asks the user to connect to remote hosts. Press Enter to continue.
  • The installer asks if you want to install OCP or a standalone registry. Press Enter to accept the default value of 1, which installs OCP.
  • The installer prompts for details about the master node. Enter master.test.example.com as the hostname of master, Enter y to confirm that this host will be the master, and press Enter to accept the default rpm option
  • You have added details for the OCP master. You also need to add an OCP node. Enter y in the Do you want to add additional hosts? prompt, enter node.test.example.com as the hostname of the node, Enter N to confirm that this host will not be the master, and press Enter to accept the default rpm option.
  • The OpenShift cluster will have only two hosts. Enter N at the Do you want to add additional hosts? prompt.
  • The installer asks if you want to override the cluster host name. Press Enter to accept the default value of None.
  • The installer prompts you for a host where the storage for the OCP registry will be configured. Press Enter to accept the default value of master.test.example.com.
  • Enter cloudapps.test.example.com as the DNS sub-domain for the OCP router.
  • Accept the default value of none for both the http and https proxy.
  • The installer prints a final summary based on your input and asks for confirmation. Ensure that the hostname and IP address details of master and node hosts are correct, and then enter y to continue.
  • Finally Enter y to start the installation.

The installation takes 15 to 20 minutes to complete, depending on the CPU, memory, and network capacity of the servers. If the installation is successful, you should see a “The installation was successful!” message at the end.

Verify node and pod status.

[root@master ~]# oc get nodes
NAME                    STATUS                   AGE
master.test.example.com  Ready,SchedulingDisabled 9m
node.test.example.com    Ready                    9m

Check the status of the pods that were created during the OCP installation:

[root@master ~]# oc get pods
NAME                        READY     STATUS              RESTARTS   AGE
docker-registry-6-deploy    0/1       ContainerCreating   0          12m
registry-console-1-deploy   0/1       ContainerCreating   0          11m
router-1-deploy             0/1       ContainerCreating   0          12m

 

Step 8: Configure Openshift router and registry.

By default, the OpenShift installer sets up the router and registry automatically. The OpenShift router is the ingress point for all external traffic destined for applications inside the OCP cloud. It runs as a pod on schedulable nodes and may need some post-installation adjustments for environments which don’t point to the Red Hat subscriber private registry.

Note: the OpenShift router runs as a pod, so it has a special security context constraint privilege that allows it to bind to TCP ports on the host itself. This provision is already configured by the installer. The default router implementation provided by OCP is based on a container image running HAProxy.

When installing OCP in an offline environment, the base OCP platform docker images can be pulled from a private registry hosted on a server inside the network. If the docker configuration has been changed to point to the internal private docker registry, a bug in the OCP installer causes it to overwrite the registry location and point to the Red Hat subscribers registry at registry.access.redhat.com. This causes the router and docker-registry pods to fail to start after the OCP install process is complete.

To fix this issue, revert to the backup copy of the docker configuration file (/etc/sysconfig/docker-backup).

[root@master ~]# cp /etc/sysconfig/docker-backup /etc/sysconfig/docker
cp: overwrite ‘/etc/sysconfig/docker’? yes
[root@master ~]# systemctl restart docker

[root@node~]# cp /etc/sysconfig/docker-backup /etc/sysconfig/docker
cp: overwrite ‘/etc/sysconfig/docker’? yes
[root@node~]# systemctl restart docker

Use watch oc get pods and wait until the docker-registry and router pods have moved to a status of Running and then press Ctrl+C to exit:

[root@master ~]# watch oc get pods
NAME                        READY     STATUS             RESTARTS   AGE
docker-registry-6-y84m8     1/1       Running            0          1m
registry-console-1-8bmr4    0/1       ImagePullBackOff   0          1m
registry-console-1-deploy   1/1       Running            0          20m
router-1-00nd2              1/1       Running            0          1m

From the above status you will see that the registry-console pod does not have a status of Running, because the default configuration of the OCP installer tries to pull the registry-console image from registry.access.redhat.com. It may have a status of ImagePullBackOff, ErrImagePull, or Error.

Modify the deployment configuration for the registry console to point to workstation.test.example.com:5000, and then verify that all pods are running:

[root@master ~]# oc edit dc registry-console

This opens a vi buffer; change the public Red Hat registry address to the private workstation registry. Search for the line below:

image: registry.access.redhat.com/openshift3/registry-console:3.3

Replace it with the line below:

image: workstation.test.example.com:5000/openshift3/registry-console:3.3

Now wait for a minute; you will see that all pods are in Running status.

[root@master ~]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-6-oytdi    1/1       Running   0          1m
registry-console-2-wijvb   1/1       Running   0          20s
router-1-7n637             1/1       Running   0          1m

Reinstate OpenShift package exclusions on both the master and node hosts to ensure that future package updates do not impact OpenShift:

[root@master~]# atomic-openshift-excluder exclude

[root@node ~]# atomic-openshift-excluder exclude

Step 9: Verify that the default router pod accepts requests from the DNS wildcard domain:

[root@master ~]# curl http://myapp.cloudapps.test.example.com

Step 10: Modify the image streams to store and pull images from the internal registry.

[root@master ~]# oc edit is -n openshift

The above command opens up a vi buffer which can be edited.

Replace all occurrences of registry.access.redhat.com with workstation.test.example.com:5000:

:%s/registry.access.redhat.com/workstation.test.example.com:5000

 

At this stage, the installation of the OpenShift platform is complete. In the next article we will see how to create users, projects, and resources in the OpenShift cluster, and also how to deploy a simple application on the OpenShift platform.