Given below are the steps to create a Serviceguard cluster on HP-UX. We will use this cluster to run the xclock application, which displays the server's clock on a Windows desktop. A display manager such as Xmanager is needed to view it.
On both nodes, add the host entries:
vi /etc/hosts
192.168.1.1 ux-memoirs01.ux-memoirs.com ux-memoirs01
192.168.1.2 ux-memoirs02.ux-memoirs.com ux-memoirs02
127.0.0.1 localhost loopback
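A quick consistency check helps catch typos before the cluster configuration stage. This is a sketch that greps for both node names; the variable stands in for /etc/hosts so it can be run anywhere (on a real node you would grep the file directly):

```shell
# Sketch: confirm both cluster nodes have host entries. The here-variable
# stands in for /etc/hosts; substitute the real file on a node.
hosts="192.168.1.1 ux-memoirs01.ux-memoirs.com ux-memoirs01
192.168.1.2 ux-memoirs02.ux-memoirs.com ux-memoirs02
127.0.0.1 localhost loopback"
for n in ux-memoirs01 ux-memoirs02; do
    if echo "$hosts" | grep -qw "$n"; then
        echo "$n: ok"
    else
        echo "$n: MISSING" >&2
    fi
done
```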
Create shared VG
Create physical volumes
# pvcreate /dev/rdsk/c5t0d1
Creating "/etc/lvmtab_p".
Physical volume "/dev/rdsk/c5t0d1" has been successfully created.
# pvcreate /dev/rdsk/c5t0d2
Physical volume "/dev/rdsk/c5t0d2" has been successfully created.
Check that the disk is under LVM
# pvdisplay -l /dev/dsk/c5t0d1
/dev/dsk/c5t0d1:LVM_Disk=yes
Create VG
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c5t0d1
Increased the number of physical extents per physical volume to 2303.
Volume group "/dev/vg01" has been successfully created.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
Configure alternate links
# vgextend /dev/vg01 /dev/dsk/c11t0d1 /dev/dsk/c5t0d1 /dev/dsk/c7t0d1 /dev/dsk/c9t0d1 /dev/dsk/c19t0d1 /dev/dsk/c13t0d1 /dev/dsk/c15t0d1 /dev/dsk/c17t0d1
vgextend: The physical volume "/dev/dsk/c5t0d1" is already recorded in the "/etc/lvmtab" file.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
# vgextend /dev/vg01 /dev/dsk/c11t0d2 /dev/dsk/c5t0d2 /dev/dsk/c7t0d2 /dev/dsk/c9t0d2 /dev/dsk/c19t0d2 /dev/dsk/c13t0d2 /dev/dsk/c15t0d2 /dev/dsk/c17t0d2
Current path "/dev/dsk/c11t0d1" is an alternate link, skip.
Current path "/dev/dsk/c7t0d1" is an alternate link, skip.
Current path "/dev/dsk/c9t0d1" is an alternate link, skip.
Current path "/dev/dsk/c19t0d1" is an alternate link, skip.
Current path "/dev/dsk/c13t0d1" is an alternate link, skip.
Current path "/dev/dsk/c15t0d1" is an alternate link, skip.
Current path "/dev/dsk/c17t0d1" is an alternate link, skip.
Volume group "/dev/vg01" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
Create LV
# lvcreate -L 100 -n mcsg /dev/vg01
Logical volume "/dev/vg01/mcsg" has been successfully created with character device "/dev/vg01/rmcsg".
Logical volume "/dev/vg01/mcsg" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
Mirror the LV
# lvextend -m 1 /dev/vg01/mcsg
The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
Logical volume "/dev/vg01/mcsg" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
Create FS
# newfs -F vxfs /dev/vg01/rmcsg
version 7 layout
102400 sectors, 102400 blocks of size 1024, log size 1024 blocks
largefiles supported
Mount FS and verify
# mount /dev/vg01/mcsg /mcsg
Then unmount the file system.
Export VG in preview mode
You need to use the -p (preview) and -s (scan) options here. Without -p the VG would actually be exported (removed from this node); with -s the VG ID is written to the map file, which makes importing the VG on the other node easier. Otherwise you would have to find the disks on the other host that match the ones on this host before importing.
# vgexport -p -v -s -m /tmp/v01.map /dev/vg01
Beginning the export process on Volume Group "/dev/vg01".
vgexport: Volume group "/dev/vg01" is still active.
/dev/dsk/c5t0d1
/dev/dsk/c11t0d1
/dev/dsk/c7t0d1
/dev/dsk/c9t0d1
/dev/dsk/c19t0d1
/dev/dsk/c13t0d1
/dev/dsk/c15t0d1
/dev/dsk/c17t0d1
/dev/dsk/c11t0d2
/dev/dsk/c5t0d2
/dev/dsk/c7t0d2
/dev/dsk/c9t0d2
/dev/dsk/c19t0d2
/dev/dsk/c13t0d2
/dev/dsk/c15t0d2
/dev/dsk/c17t0d2
vgexport: Preview of vgexport on volume group "/dev/vg01" succeeded.
Scp the map file to the other node
# scp /tmp/v01.map 192.168.1.2:/tmp
The authenticity of host '192.168.1.2 (192.168.1.2)' can't be established.
RSA key fingerprint is 77:2a:6a:60:42:a9:ef:18:ba:7a:ce:a1:48:f0:a8:1a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.2' (RSA) to the list of known hosts.
Password:
v01.map 100% 29 0.0KB/s 00:00
Import the VG on the second node
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000   <- this must be the same on both nodes
# vgimport -v -s -m /tmp/v01.map /dev/vg01
Beginning the import process on Volume Group "/dev/vg01".
Logical volume "/dev/vg01/mcsg" has been successfully created with lv number 1.
vgimport: Volume group "/dev/vg01" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
*If you are using persistent device files, you need to add the -N option to vgimport as well.
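The reason the mknod minor number matters: LVM identifies the VG by the group file's major/minor pair, and it must be identical on every node. A sketch of checking it; the sample line imitates `ls -l /dev/vg01/group` output, and on a real node you would compare the live output across both nodes:

```shell
# Sketch: pull the major/minor numbers out of 'ls -l' output for the VG
# group file. The sample line here stands in for real ls output.
line="crw-r--r--   1 root   sys   64 0x010000 Jan 27 02:56 /dev/vg01/group"
maj=$(echo "$line" | awk '{ print $5 }')
min=$(echo "$line" | awk '{ print $6 }')
echo "major=$maj minor=$min"
```

If the pair differs between nodes, remove and recreate the group file with mknod before importing.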
CLUSTER CONFIGURATION
Disable automatic VG activation
Change AUTO_VG_ACTIVATE in /etc/lvmrc to 0:
# grep AUTO_VG_ACTIVATE= /etc/lvmrc
AUTO_VG_ACTIVATE=0
Set up trusted hosts within the cluster systems
vi /etc/cmcluster/cmclnodelist
ux-memoirs01 root
ux-memoirs02 root
Do the same on the second node.
Create the cluster configuration file in /etc/cmcluster
# cmquerycl -v -C cmclconfig.ascii -n ux-memoirs01 -n ux-memoirs02
check_cdsf_group, no cdsf group specified.
Looking for other clusters ... Done
... <output truncated> ...
Writing cluster data to cmclconfig.ascii.
Edit the cluster details: CLUSTER_NAME, timeout parameters, etc.
Verify the configuration file
# cmcheckconf -v -C cmclconfig.ascii
Begin cluster verification...
Checking cluster file: cmclconfig.ascii
Checking nodes ... Done
... <output truncated> ...
Creating the cluster configuration for cluster TEST_CLUSTER
Adding node ux-memoirs01 to cluster TEST_CLUSTER
Adding node ux-memoirs02 to cluster TEST_CLUSTER
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration
Apply the configuration (cmapplyconf distributes the binary configuration to both nodes)
# cmapplyconf -v -C cmclconfig.ascii
Begin cluster verification...
Checking cluster file: cmclconfig.ascii
... <output truncated> ...
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
Run the cluster
# cmruncl -v
cmruncl: Validating network configuration...
Gathering network information
... <output truncated> ...
Waiting for cluster to form .... done
Cluster successfully formed.
Check the syslog files on all nodes in the cluster to verify that no warnings occurred during startup.
Verify the cluster is running
# cmviewcl
CLUSTER STATUS
TEST_CLUSTER up
NODE STATUS STATE
ux-memoirs01 up running
ux-memoirs02 up running
Package configuration
Create the package configuration files
# mkdir /etc/cmcluster/xclock
# cd /etc/cmcluster/xclock
# cmmakepkg -v -p xclock.pkg
The package template has been created.
This file must be edited before it can be used.
# cmmakepkg -v -s xclock.cntl
Done.
Package control script is created.
This file must be edited before it can be used.
Edit the package configuration file
*The values shown below are ours; the others are defaults (you may need to change them depending on your requirements).
PACKAGE_NAME XClock
PACKAGE_TYPE FAILOVER
NODE_NAME ux-memoirs01
NODE_NAME ux-memoirs02
# The default for "AUTO_RUN" is "YES", meaning that the package will be
# automatically started when the cluster is started, and that, in the
# event of a failure, the package will be started on an adoptive node.
AUTO_RUN YES
RUN_SCRIPT /etc/cmcluster/xclock/xclock.cntl
HALT_SCRIPT /etc/cmcluster/xclock/xclock.cntl
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
NODE_FAIL_FAST_ENABLED NO
FAILOVER_POLICY CONFIGURED_NODE
FAILBACK_POLICY MANUAL
LOCAL_LAN_FAILOVER_ALLOWED YES
MONITORED_SUBNET 192.168.0.0
Edit the package control script
VGCHANGE="vgchange -a e"
VG[0]="/dev/vg01"
LV[0]="/dev/vg01/mcsg"; FS[0]="/mcsg"; FS_MOUNT_OPT[0]=""; FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]=""
FS_TYPE[0]="vxfs"
IP[0]="192.168.10.91"
SUBNET[0]="192.168.0.0"
SERVICE_NAME[0]="xclock"
SERVICE_CMD[0]="/usr/bin/X11/xclock -display 16.191.121.3:0.0"   <- IP of the Windows workstation where Xmanager is running
SERVICE_RESTART[0]=""
You need to add the package IP to /etc/hosts on both nodes:
192.168.10.91 xclock.ux-memoirs.com xclock
Verify the configuration
# cmcheckconf -v -P xclock.pkg
Begin package verification...
Checking existing configuration ... Done
... <output truncated> ...
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration
Add the package to the cluster
# cmapplyconf -v -P xclock.pkg
Begin package verification...
Checking existing configuration ... Done
... <output truncated> ...
Adding the package configuration for package XClock.
Modify the package configuration ([y]/n)? y
Completed the cluster update
Copy the configuration files to the other node
# scp xclock* ux-memoirs02:/etc/cmcluster/xclock/
The authenticity of host 'ux-memoirs02 (16.118.112.92)' can't be established.
RSA key fingerprint is 77:2a:6a:60:42:a9:ef:18:ba:7a:ce:a1:48:f0:a8:1a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ux-memoirs02' (RSA) to the list of known hosts.
Password:
xclock.cntl 100% 73KB 73.4KB/s 00:00
xclock.pkg 100% 35KB 34.6KB/s 00:00
Verify cluster status
# cmviewcl
CLUSTER STATUS
TEST_CLUSTER up
NODE STATUS STATE
ux-memoirs01 up running
ux-memoirs02 up running
UNOWNED_PACKAGES
PACKAGE STATUS STATE AUTO_RUN NODE
XClock down halted disabled unowned
Run the package
# cmrunpkg XClock
Running package XClock on node ux-memoirs01
Successfully started package XClock on node ux-memoirs01
cmrunpkg: All specified packages are running
Verify cluster status
# cmviewcl
CLUSTER STATUS
TEST_CLUSTER up
NODE STATUS STATE
ux-memoirs01 up running
PACKAGE STATUS STATE AUTO_RUN NODE
XClock up running disabled ux-memoirs01
NODE STATUS STATE
ux-memoirs02 up running
Enable AUTO_RUN / failover
# cmmodpkg -e XClock
cmmodpkg: Completed successfully on all packages specified
Possible errors
# cmrunpkg XClock
Unable to run package XClock on node ux-memoirs01. Node is not eligible.
cmrunpkg: Unable to start some package or package instances.
# cmviewcl -v -p XClock
UNOWNED_PACKAGES
PACKAGE STATUS STATE AUTO_RUN NODE
XClock down failed enabled unowned
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS NODE_NAME NAME
Subnet up ux-memoirs01 192.168.0.0
Subnet up ux-memoirs02 192.168.0.0
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up disabled ux-memoirs01
Alternate up disabled ux-memoirs02
Other_Attributes:
ATTRIBUTE_NAME ATTRIBUTE_VALUE
Style legacy
Priority no_priority
Here we enable, per package, which nodes can run/receive the package.
# cmmodpkg -n ux-memoirs01 -n ux-memoirs02 -e XClock
cmmodpkg: Completed successfully on all packages specified
If AUTO_RUN is enabled, the above command will not work. Disable AUTO_RUN, run the command, then enable AUTO_RUN again.
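The required order of operations can be sketched as follows. The cmmodpkg flags are the standard ones, but the function at the top is a stub that just echoes its arguments, since the real binary exists only on a Serviceguard node; remove the stub when running on a cluster member.

```shell
# Stub standing in for Serviceguard's cmmodpkg so the sequence can be
# shown outside a cluster node. On a real node, delete this function.
cmmodpkg() { echo "cmmodpkg $*"; }

cmmodpkg -d XClock                                    # 1. disable AUTO_RUN (package switching)
cmmodpkg -n ux-memoirs01 -n ux-memoirs02 -e XClock    # 2. enable node switching for both nodes
cmmodpkg -e XClock                                    # 3. re-enable AUTO_RUN
```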
Once done, it will look like this:
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled ux-memoirs01 (current)
Alternate up enabled ux-memoirs02
How the package starts
########### Node "ux-memoirs01": Starting package at Fri Jan 27 02:56:35 EST 2012 ###########
Jan 27 02:56:35 - Node "ux-memoirs01": Activating volume group /dev/vg01 with exclusive option.
Activated volume group in Exclusive Mode.
Volume group "/dev/vg01" has been successfully changed.
Jan 27 02:56:35 - Node "ux-memoirs01": Checking filesystems:
/dev/vg01/mcsg
/dev/vg01/rmcsg: file system is clean - log replay is not required
Jan 27 02:56:36 - Node "ux-memoirs01": Mounting /dev/vg01/mcsg at /mcsg
Jan 27 02:56:36 - Node "ux-memoirs01": Adding IP address 192.168.10.91 to subnet 192.168.0.0
Jan 27 02:56:36 - Node "ux-memoirs01": Starting service xclock using "/usr/bin/X11/xclock -display 16.192.123.3:0.0"
########### Node "ux-memoirs01": Package start completed at Fri Jan 27 02:56:36 EST 2012 ###########
Adding a monitoring script and custom start/stop scripts
We now modify the xclock package to start and stop using custom commands.
Create the monitoring script
# vi /etc/cmcluster/xclock/xclock.mon
#!/usr/bin/sh
LOG="/etc/cmcluster/xclock/xclock.cntl.log"
echo "Now entering the xclock package monitor \c" >> $LOG
echo "script on $(hostname) at $(date)." >> $LOG
while true
do
  if ps -ef | grep -v grep | grep -q "/usr/bin/X11/xclock"
  then
    echo "Package xclock apparently ok \c"
    echo "on $(hostname) at $(date)."
    sleep 10
  else
    echo "Package xclock failed at $(date) \c"
    echo "from node $(hostname)."
    exit
  fi
done >> $LOG
# chmod 755 /etc/cmcluster/xclock/xclock.mon
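The heart of the monitor is the ps | grep test. Factored into a small function, the same logic can be exercised against captured ps output; the sample line below is illustrative, not from a real system:

```shell
# Sketch: the same process check the monitor loop performs, written as a
# function that takes the ps listing as text so it can be tested anywhere.
is_running() {
    # $1 = ps -ef style listing, $2 = command path to look for
    echo "$1" | grep -v grep | grep -q "$2"
}

sample="root 1234 1 0 02:56 ? 0:00 /usr/bin/X11/xclock -display 16.192.123.3:0.0"
if is_running "$sample" "/usr/bin/X11/xclock"; then
    echo "xclock running"
else
    echo "xclock not found"
fi
```

The `grep -v grep` step matters: without it, the grep process itself can match the pattern and make a dead service look alive.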
Modify the control script
SERVICE_CMD[0]="/etc/cmcluster/xclock/xclock.mon"
SERVICE_RESTART[0]="-r 0"
Start up
function customer_defined_run_cmds
{
  # ADD customer defined run commands.
  /usr/bin/X11/xclock -update 1 -bg blue -display 16.192.123.3:0.0 &
  test_return 51
}
Shut down
function customer_defined_halt_cmds
{
  # ADD customer defined halt commands.
  LOG="/etc/cmcluster/xclock/xclock.cntl.log"
  echo "Now entering the customer_defined_halt_cmds \c" >> $LOG
  echo "on $(hostname) at $(date)." >> $LOG
  if ps -ef | grep -v grep | grep -q "/usr/bin/X11/xclock"
  then
    echo "Found the xclock process at $(date)." >> $LOG
    echo "Killing xclock process at $(date)." >> $LOG
    kill -9 $(ps -ef | grep -v grep | grep "/usr/bin/X11/xclock" | cut -c10-14)
  else
    echo "Note: In customer_defined_halt_cmds \c" >> $LOG
    echo "on $(hostname), and could not find the \c" >> $LOG
    echo "xclock process at $(date)." >> $LOG
  fi
  test_return 52
}
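Note that `cut -c10-14` assumes the PID always lands in exactly those character columns of `ps -ef` output, which is fragile. A field-based extraction with awk is less position-sensitive; this is a sketch run against a sample line standing in for live ps output:

```shell
# Sketch: extract the PID by field rather than by character column.
# The sample line stands in for live 'ps -ef' output; field 2 is the PID.
psline="    root  1234     1  0 02:56:35 ?         0:00 /usr/bin/X11/xclock -display 16.192.123.3:0.0"
pid=$(echo "$psline" | grep -v grep | grep "/usr/bin/X11/xclock" | awk '{ print $2 }')
echo "$pid"
```

In the halt function this would replace the `cut -c10-14` pipeline while keeping the rest of the logic unchanged.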
Copy the control script and monitoring script to the other node
# scp xclock.cntl xclock.mon ux-memoirs02:/etc/cmcluster/xclock/
Password:
xclock.cntl 100% 74KB 74.1KB/s 00:00
xclock.mon
Then start the package as usual.