Monday, May 30, 2011

Build RPM from source file

Traditionally, installing from source involves several steps: ./configure; make; make install; make clean. RPM can automate the process with a SPEC file. Once a binary RPM package is generated, it can be easily distributed to other servers.
This article uses the hping3 source tarball as an example to demonstrate the basics of building an RPM.
Install rpmbuild
$yum install rpm-build
RPM Macros
#Various RPM Macros locations
/usr/lib/rpm/macros #Global default macros
/etc/rpm/macros   #Global user-defined macros
~/.rpmmacros  #Per-user defined macros
rpmbuild --define 'macro_name value'   #define at run time
#display a macro
$ rpm --eval %{_vendor}
#display all macros
rpm --showrc
Setup build environment 
#It is preferred to use a non-root user to control the build
$useradd builder
$ echo '%_topdir    /home/builder/redhat'  > ~/.rpmmacros
$ mkdir -p /home/builder/redhat/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
Building an RPM involves the following steps:
1. Preparing for building, including unpacking the sources
2. Building (compiling)
3. Installing the application or library
4. Cleaning up
5. Customized scripts for pre-install, post-install, pre-uninstall, post-uninstall
6. List files to be packaged into RPM
7. Add changelog
8. GPG sign package
The first seven steps are controlled by the SPEC file.
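These steps correspond to standard SPEC sections; a bare skeleton (section names only, filled in by the full example that follows) looks like this:

```spec
%prep       # step 1: unpack sources (%setup, %patch)
%build      # step 2: compile (configure; make)
%install    # step 3: copy files into $RPM_BUILD_ROOT
%clean      # step 4: remove the staging area
%pre  %post  %preun  %postun   # step 5: install/uninstall scriptlets
%files      # step 6: list of files packaged into the RPM
%changelog  # step 7: change history
```

Step 8 (GPG signing) happens outside the spec, via rpmbuild --sign or rpm --resign.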
##This spec file uses the hping3 source tarball as an example
[builder]$ cat  /home/builder/redhat/SPECS/hping3.spec
%define name hping3
%define version 3.0
Name: %{name}
Version: %{version}
Release: 0
License: GPL
##Pick a group name from /usr/share/doc/rpm-*/GROUPS
Group: Applications/System
##All source files should be packed under a dir named %{name}-%{version}, e.g. ./hping3-3.0/*
##The packed file name should be %{name}-%{version}.XX, e.g. hping3-3.0.tar.gz
Source: hping3-3.0.tar.gz
Patch0: hping3.patch
#Patch1: 2.patch
#PreReq: unzip
##libpcap is required for hping to work at run time
Requires: libpcap
##gcc and libpcap-devel are required during compiling
BuildRequires: gcc libpcap-devel
##BuildRoot is a staging area that looks like the final installation directory
##all final files are copied to BuildRoot
BuildRoot: %{_tmppath}/%{name}-root
Summary: hping3 is a network tool.
%description
hping3 is a network tool able to send custom TCP/IP
packets and to display target replies like ping does with
ICMP replies.
##1. Prepare
%prep
####%setup changes into ~/redhat/BUILD and unpacks the source files
%setup -q
##2. Build
%build
%configure --no-tcl
make
##3. Install
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT{/usr/sbin,/usr/share/man/man8}
install -m 755 hping3   $RPM_BUILD_ROOT/usr/sbin/
(cd $RPM_BUILD_ROOT/usr/sbin; ln -s hping3 hping2 ; ln -s hping3 hping )
%{__gzip}  ./docs/hping3.8&& \
install -m 644 ./docs/hping3.8.gz $RPM_BUILD_ROOT/usr/share/man/man8
##4. Clean up
%clean
rm -rf $RPM_BUILD_ROOT
##5. Customized scripts; view all scripts of an RPM file with "rpm -q --scripts file.rpm"
%pre
####user is not needed, demonstration purpose only
useradd hping
chage -M -1 hping
%preun
#### $1=0 remove; $1=1 first install; $1>=2 upgrade
if [ $1 = 0 ]; then
userdel -r hping
fi
##6. List files to be packed into the RPM
%files
%attr(755,root,root) /usr/sbin/hping*
%doc /usr/share/man/man8/hping3.8.gz
##7. Changelog
%changelog
#### date format:  date +'%a %b %d %Y'
* Mon May 30 2004   antirez <email@com>
- First public release of hping3
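The $1 convention used by the pre-uninstall script above trips up many spec authors, so here is a minimal sketch of it in plain shell. RPM calls each scriptlet with the number of package instances that will remain after the transaction: fresh install → 1, upgrade → 2 for the new package's %pre and 1 for the old package's %preun, erase → 0. preun_action below is a hypothetical stand-in for the %preun body, not part of the spec.

```shell
#!/bin/sh
# Sketch: how RPM scriptlet arguments behave. $1 is the count of package
# instances remaining after the transaction, so %preun only cleans up on
# a real erase ($1 = 0), not on an upgrade ($1 >= 1).
preun_action() {
    if [ "$1" = 0 ]; then
        echo "erase: remove the hping user"
    else
        echo "upgrade: keep the hping user"
    fi
}
preun_action 0    # prints "erase: remove the hping user"
preun_action 1    # prints "upgrade: keep the hping user"
```

Without the `if [ $1 = 0 ]` guard, upgrading the package would delete the user while the new version still needs it.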
Test each stage by rpmbuild
$rpmbuild --help
Build options with [ <specfile> | <tarball> | <source package> ]:
-bp                           build through %prep (unpack sources and apply
patches) from <specfile>
-bc                           build through %build (%prep, then compile)
from <specfile>
-bi                           build through %install (%prep, %build, then
install) from <specfile>
-bl                           verify %files section from <specfile>
-ba                           build source and binary packages from <specfile>
-bb                           build binary package only from <specfile>
GPG Sign RPM file
Sign a package to prove the source identity of the file.
#Create a gpg key pair; remember the passphrase for the private key, it will be asked for when signing packages
$gpg --gen-key
#Tell rpm which gpg key to use
$ cat ~/.rpmmacros
%_topdir    /home/builder/redhat
%_signature gpg
%_gpg_name rpm test <rpm.test@com>
#Sign RPM with GPG private key
#Before the RPM is created, use rpmbuild --sign spec-file
#After the RPM is created, use rpm --resign
$rpm --resign /home/builder/redhat/RPMS/x86_64/hping3-3.0-0.x86_64.rpm
#Export the GPG public key
$gpg --export -a > /tmp/
#Before import, signature "NOT OK"
$rpm --checksig hping3-3.0-0.x86_64.rpm
hping3-3.0-0.x86_64.rpm: (SHA1) DSA sha1 md5 (GPG) NOT OK (MISSING KEYS: GPG#31f8d18a)
#Import GPG pub key
$rpm --import /tmp/
#after import,  signature "OK"
$ rpm --checksig hping3-3.0-0.x86_64.rpm
hping3-3.0-0.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
#list all imported GPG keys
$ rpm -qa gpg*

Saturday, May 28, 2011

Passed 2/5 RHCA: EX436 Clustering and Storage Management

EX436 is easier than EX442 (System Monitoring and Performance Tuning) because fewer topics are tested and the testing method is straightforward setup and configuration, unlike EX442, which requires extensive analysis and calculation.
I didn't pay attention to the RHEL release during the exam, but RHEL 5.4 is shown in my exam result. Although GFS2 is the default starting from RHEL 5.3, GFS is the subject being tested. I think this won't change until the RHEL 6 courseware comes out.

My blog post for EX436 study notes

GFS(Global File System) quickstart

RHCS(Red Hat Cluster Suite) quorum disk

RHCS(Red Hat Cluster Suite) I/O fencing using SNMP IFMIB

Do we really need to set partition type to fd(Linux auto raid) for Linux software RAID?

Sunday, May 22, 2011

Subversion Quickstart

This short tutorial is intended to help new users grasp Subversion quickly.
Subversion is an open-source version control system based on the Copy-Modify-Merge model rather than the Lock-Modify-Unlock model.
It is primarily used for software development, allowing developers to modify files and directories concurrently (no locking) and switch between versions easily. In the system administration world, it can be used to track system changes and roll back changes.
Fundamental Concepts(don't skip):
 - The Repository
The repository is a central store for all versions of data; the Subversion server configuration files are also located in the repository.
Once the repository is created, you are NOT supposed to touch the repository directory other than to change the Subversion server configuration.
You should modify versioned data in a “working copy” of the repository data.
The repository can be accessed in a number of ways:
file:/// Direct repository access (on local disk)
http:// Access via WebDAV protocol to Subversion-aware Apache server
https:// Same as http://, but with SSL encryption.
svn:// Access via custom protocol to an svnserve server
svn+ssh:// Same as svn://, but through an SSH tunnel
To set up an svnserve server offering svn:// access over the network, enable authentication and authorization by modifying repository-path/conf/{svnserve.conf,passwd,authz}, then start “svnserve -d -r repository-path”
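As a sketch, the three auth files mentioned above look roughly like this. REPO here is a throwaway directory, and the user "alice" with password "secret" is a made-up example; a real repository would be created with "svnadmin create" and served with "svnserve -d -r".

```shell
#!/bin/sh
# Sketch of the svnserve auth files (example values, not a real repo).
REPO=$(mktemp -d)
mkdir -p "$REPO/conf"
# svnserve.conf: deny anonymous users, point at the other two files
cat > "$REPO/conf/svnserve.conf" <<'EOF'
[general]
anon-access = none
auth-access = write
password-db = passwd
authz-db = authz
EOF
# passwd: username = password pairs
cat > "$REPO/conf/passwd" <<'EOF'
[users]
alice = secret
EOF
# authz: per-path access rules (rw for alice on the whole tree)
cat > "$REPO/conf/authz" <<'EOF'
[/]
alice = rw
EOF
ls "$REPO/conf"
```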
- The Working copy directory
A working copy is a subset of the repository data. To create a working copy, use “svn checkout” on the root or a subdirectory of the repository.
You modify data in the “working copy”, NOT in the repository directory.
Install subversion 
Most Linux distros include Subversion by default. To install on CentOS:
$yum install subversion
$rpm -qa | grep subv
Create  a subversion repository
#This is where all data is saved; make sure you have enough space
$svnadmin create /var/svn
#svnadmin populates the directory with the following structure
#conf is the location of server configuration files
#db is the location of your versions of data
$ls /var/svn
conf  db  format  hooks  locks  README.txt
#It is ideal to create an individual directory for each project.
#-m gives a description of this operation; later it can be viewed with “svn log”
#This transaction is recorded as revision 1
#Note the command is svn, not svnadmin
#svnadmin and svnlook are server-side commands; they always act on a PATH, NOT a URL like “file:///”
$svn mkdir file:///var/svn/proj_1 -m "test mkdir"
Committed revision 1.
#Verify the sub dir is created
$svn list -v  file:///var/svn
3 root                  May 22 11:33 ./
3 root                  May 22 11:33 proj_1/
Import data  into the repository
#let's import /etc/sysconfig  into the repository
#import is used to populate the repository for the first time
#adding new files later requires the “svn add” command in a “working copy”
$ svn import /etc/sysconfig/  file:///var/svn/proj_1 -m "test import"
Adding         /etc/sysconfig/irda
Adding         /etc/sysconfig/kernel
Adding         /etc/sysconfig/syslog
Adding         /etc/sysconfig/snmpd.options
Committed revision 2.
#let's view the imported files in the repository
#something wrong? Where are those files? Even the dir “proj_1” doesn't exist
#let me repeat: you are NOT supposed to modify data in the repository directly. Do this in a “working copy”
$ls  /var/svn
conf  db  format  hooks  locks  README.txt
# if you are curious about where the data is stored: all data is “packed” into binary files
$ strings /var/svn/db/revs/0/2 | grep $(hostname)
# or view it with “svn ls” and “svn cat”
svn cat file:///var/svn/proj_1/network
Create a working copy
#create a working copy by checking out proj_1. The target dir proj_1 is created automatically; of course, you can name it differently
$cd /root/svn
$svn checkout file:///var/svn/proj_1 proj_1
A    proj_1/irda
A    proj_1/kernel
A    proj_1/syslog
#I want to add /etc/hosts to the repository
#any operation in the “working copy” should use subversion-aware commands, e.g. “svn mkdir, svn add, svn mv, svn cp”
$ cd  /root/svn/proj_1
$svn mkdir  ./etc
$ cp /etc/hosts ./etc
$ svn add ./etc/hosts
A         etc/hosts
#commit the changes to repository
$svn commit -m "added hosts file"
Adding         etc
Adding         etc/hosts
Transmitting file data .
Committed revision 3.
#svnlook shows the latest revision is 3
$svnlook  youngest  /var/svn
$svn log /root/svn/proj_1/
r3 | root | 2011-05-22 11:33:03 +1000 (Sun, 22 May 2011) | 1 line
added hosts file
r2 | root | 2011-05-22 11:29:17 +1000 (Sun, 22 May 2011) | 1 line
test import
r1 | root | 2011-05-22 11:29:05 +1000 (Sun, 22 May 2011) | 1 line
test mkdir
$svn diff -r 2:3  /root/svn/proj_1/
Index: /root/svn/proj_1/etc/hosts
--- /root/svn/proj_1/etc/hosts  (revision 0)
+++ /root/svn/proj_1/etc/hosts  (revision 3)
@@ -0,0 +1,8 @@
Rollback to previous versions
This is where Subversion shines: no matter how many changes you have made, one simple command switches between versions.
$ svn update -r 2 /root/svn/proj_1/
D    /root/svn/proj_1/etc
Updated to revision 2.
$ ls ./etc
ls: ./etc: No such file or directory
$svn update -r 3 /root/svn/proj_1/
A    /root/svn/proj_1/etc
A    /root/svn/proj_1/etc/hosts
Updated to revision 3.
$ ls ./etc

Tuesday, May 17, 2011

GFS (Global File System) quickstart

What is GFS?
GFS allows all nodes to have direct CONCURRENT write access to the same shared BLOCK storage.
With a local file system, e.g. ext3, shared BLOCK storage can be mounted on multiple nodes, but CONCURRENT write access is not allowed.
With NFS, CONCURRENT write access is allowed, but it is not a direct BLOCK device, which introduces delay and another layer of failure.
GFS requirements:
- A shared block storage device (iSCSI, FC SAN etc.)
- RHCS (Red Hat Cluster Suite) (although GFS can be mounted on a standalone server without a cluster, that is primarily for testing or for recovering data when the cluster fails)
- RHEL 3.x onwards (and RHEL derivatives: CentOS/Fedora); it should work on other Linux distributions, since GFS and RHCS have been open sourced
GFS specifications:
- RHEL 5.3 onwards uses GFS2
- RHEL 5/6.1 supports a maximum of 16 nodes
- RHEL 5/6.1 64-bit supports a maximum file system size of 100TB (8 EB in theory)
- Supports: data and metadata journaling, quota, ACL, direct I/O, growing the file system online, dynamic inodes (converting inode blocks to data blocks)
- LVM snapshots of CLVM volumes under GFS are NOT yet supported
GFS components:
RHCS components: OpenAIS, CCS, fenced, CMAN and CLVMD (Clustered LVM)
GFS specific component: Distributed Lock Manager (DLM)
Install RHCS and GFS  rpms
Luci (Conga project) is the easiest way to install and configure RHCS and GFS.
#GFS specific packages:
#RHEL 5.2 or lower versions 
$yum install gfs-utils    kmod-gfs 
#RHEL 5.3 onwards, gfs2 module is part of kernel 
$yum install gfs2-utils   
Create GFS on LVM
You can create GFS on a raw device, but LVM is recommended for consistent device names and the ability to extend the device.
#Assume you have setup and tested a working RHCS
#Edit the cluster lock type in /etc/lvm/lvm.conf on ALL nodes: set locking_type = 3 (clustered locking via clvmd)

#Create PV/VG/LV as if on a standalone system, ONCE on any ONE of the nodes

#Start the cluster and clvmd on ALL nodes
#It is better to use the luci GUI to start the whole cluster
$ service cman start
$ service rgmanager start
$ service clvmd start

#Create GFS ONCE on any ONE of the nodes
# -p lock_dlm is required in cluster mode; lock_nolock is for standalone systems
# -t cluster1:gfslv      ( real cluster name : arbitrary GFS name )
# The above information is stored in the GFS superblock and can be changed with “gfs_tool sb” without re-initializing GFS, e.g. change the lock type: "gfs_tool sb /device proto lock_nolock"
#-j 2: the number of journals, minimum 1 for each node. The default journal size is 128MiB and can be overridden with -J
#additional journal can be added with gfs_jadd
gfs_mkfs -p lock_dlm -t cluster1:gfslv -j 2 /dev/vg01/lv01
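The journal count has a direct space cost worth estimating before running gfs_mkfs: each of the -j journals consumes the default 128 MiB unless -J overrides it. A quick arithmetic sketch (plain shell, no GFS needed; the sizes are the defaults stated above, not measured values):

```shell
#!/bin/sh
# Rough journal-space overhead: journals x size (default 128 MiB each).
journal_overhead_mib() {  # usage: journal_overhead_mib <journals> [size_mib]
    echo $(( $1 * ${2:-128} ))
}
echo "-j 2 default journals: $(journal_overhead_mib 2) MiB"      # 256 MiB
echo "-j 2 with -J 64:       $(journal_overhead_mib 2 64) MiB"   # 128 MiB
```

So a two-node cluster loses about 256 MiB of the LV to journals before any data is written.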

#Mount GFS on cluster members via /etc/fstab
#put the GFS mount in /etc/fstab on ALL nodes
#A cluster service can mount GFS without /etc/fstab after adding GFS as a resource, but it only mounts it on one node (the active node). Since GFS is supposed to be mounted on all nodes at the same time, /etc/fstab is a must; the GFS resource is optional.
#The GFS mount options lockproto and locktable are optional; mount obtains the information from the superblock automatically
$cat /etc/fstab
/dev/vg01/lv01          /mnt/gfs                gfs     defaults 0 0

#Mount all GFS mounts
service gfs start
GFS command lines
####Check GFS super block 
#some values can be changed by “gfs_tool sb”
$gfs_tool sb /dev/vg01/lv01 all
sb_bsize = 4096
sb_lockproto = lock_dlm
sb_locktable = cluster1:gfslv01
####GFS tunable parameters 
#view parameters
gfs_tool gettune <mountpoint>
#set parameters
#The parameters don’t persist after a re-mount; you can customize /etc/init.d/gfs to set tunable parameters on mounting
gfs_tool settune <mountpoint> <parameter> <value>

####Performance related parameters
#like other file systems, you can disable access-time updates with the mount option “noatime”
#GFS also lets you control how often the access time is updated
$gfs_tool gettune /mnt/gfs | grep atime_quantum   
atime_quantum=3660          #in secs

#Disable quota if not needed
#GFS2 removes this parameter and implements it as the mount option “quota=off”
$gfs_tool settune /mnt/gfs quota_enforce 0

#GFS direct I/O
#Enable direct I/O for database files if the DB has its own buffering mechanism, to avoid “double” buffering
$gfs_tool setflag directio /mnt/gfs/test.1     #file attribute
$gfs_tool setflag inherit_directio /mnt/gfs/db/     #DIR attribute
$gfs_tool clearflag directio /mnt/gfs/test.1              #remove attribute
$gfs_tool stat  inherit_directio /mnt/gfs/file     # view attribute

#enable data journaling for very small files
#disable data journaling for large files
$gfs_tool setflag inherit_jdata /mnt/gfs/db/     #Enable data journaling (only metadata is journaled by default) on a dir (if operating on a file, the file must be zero size)

###GFS backup; CLVM doesn't support snapshots
$gfs_tool freeze /mnt/gfs          #make GFS read-only (done once on any one of the nodes)
$gfs_tool unfreeze /mnt/gfs

###GFS repair 
#after unmounting GFS on all nodes
$gfs_fsck -v /dev/vg01/lv01         # gfs_fsck -v -n /dev/vg01/lv01 : -n answers no to all questions, inspecting GFS without making changes

GFS implementation scenarios:
GFS’s strength is the ability to do concurrent writes to the same block device. This makes it possible for Active-Active cluster nodes to write to the same block device, but there are few such cases in real life.
In an Active-Active cluster (all nodes perform the same task), RHCS can’t do load balancing itself; it requires an external load balancer.
 - Database server cluster: In theory, all nodes can write to the same DB file concurrently. However, performance will degrade, because all nodes try to lock the file via the Distributed Lock Manager. You can assign different tasks to cluster nodes so they write to different DB files, e.g. node-A runs DB-A and node-B runs DB-B, but this can be done without GFS by mounting ext3 on individual iSCSI/FC disks.
GFS doesn’t lose to ext3 in the above scenario, but the lack of LVM snapshot support in GFS’s CLVMD kills my enthusiasm for running a DB on GFS.
 - Application server cluster: e.g. an Apache or JBoss server cluster. It is true that GFS can simplify application deployment because all nodes can share the same application binaries. But if you only run a two-node cluster, deploying the application twice is not a big hassle. Maintaining a single copy of the application binaries is convenient, but at the risk of a single point of failure.
 - NFS cluster: Because NFS is I/O bound, why would you run an Active-Active NFS cluster when the CPU/memory resources of the nodes are not fully utilized?

Tuesday, May 10, 2011

LVM2: device filter and LVM metadata restore

Customize LVM device filter to get rid of the annoying “/dev/cdrom: open failed” warning
##/dev/cdrom: open failed warning
$pvcreate /dev/sdb1
/dev/cdrom: open failed: Read-only file system
$ vgcreate vg01 /dev/sdb1
/dev/cdrom: open failed: Read-only file system
##The error occurs because LVM scans all device files by default; you can exclude device files with device filters
##File /etc/lvm/cache/.cache contains the device file names scanned by LVM
$ cat /etc/lvm/cache/.cache
persistent_filter_cache {
##Edit /etc/lvm/lvm.conf and change the default filter
filter = [ "a/.*/" ]                          #the default: accept all devices
filter = [ "r|/dev/cdrom|","r|/dev/ram*|" ]   #new filter: reject cdrom and ram devices
##You need to delete the cache file or run vgscan to regenerate it
$rm /etc/lvm/cache/.cache    #or run: vgscan
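To see why the filter above silences the warning: LVM tries each pattern in order, "a" accepts, "r" rejects, and the first match decides; a device matching no pattern is accepted. The sketch below emulates that first-match logic in plain shell (it is an illustration of the rule, not LVM itself; lvm_filter is a made-up helper name).

```shell
#!/bin/sh
# Emulate LVM filter evaluation: first matching pattern wins.
# Patterns are passed as "a|regex" or "r|regex" strings.
lvm_filter() {
    dev=$1; shift
    for pat in "$@"; do
        action=${pat%%|*}    # "a" (accept) or "r" (reject)
        regex=${pat#*|}      # the regex after the delimiter
        if echo "$dev" | grep -Eq "$regex"; then
            echo "$action"; return
        fi
    done
    echo a    # no pattern matched: accept by default
}
lvm_filter /dev/cdrom 'r|/dev/cdrom' 'r|/dev/ram'   # prints "r" (skipped, no warning)
lvm_filter /dev/sdb1  'r|/dev/cdrom' 'r|/dev/ram'   # prints "a" (still scanned)
```

With /dev/cdrom rejected, LVM never opens it, so the "open failed" warning disappears while real disks are still scanned.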
LVM metadata backup and restore 
LVM records every VG and LV metadata operation and saves it to /etc/lvm/backup automatically; older backup files are archived in /etc/lvm/archive.
The backup files can be used to roll back LVM metadata changes. For example, if you have removed a VG/PV or even re-initialized the disk with pvcreate, don't panic: as long as the file system was not re-created, you can use vgcfgrestore to restore all the data.
The following demonstrates how to recover an LV after it is completely destroyed at the PV level (pvremove).
1.Create test LV and write some data
$pvcreate  /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
$vgcreate vg01  /dev/sdb1 /dev/sdb2
Volume group "vg01" successfully created
$ lvcreate -L100M -n lv01 vg01
Logical volume "lv01" created
$ mkfs.ext3 /dev/vg01/lv01
$ mount /dev/vg01/lv01 /mnt/
$cp /etc/hosts /mnt/
$ ls /mnt/
hosts  lost+found
2.Destroy LV,VG,and PV
$vgremove vg01
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lv01? [y/n]: y
Logical volume "lv01" successfully removed
Volume group "vg01" successfully removed
#The VG is removed; re-running pvcreate below also wipes the old PV UUIDs
$ pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
3. Let's recover the LV and the data
##Find out the backup file to restore from
$vgcfgrestore -l vg01
file:         /etc/lvm/archive/
VG name:      vg01
Description:  Created *before* executing 'vgremove vg01'
Backup time:  Tue May 10 15:41:31 2011
##The first attempt fails because the PV UUIDs have changed
$ vgcfgrestore -f /etc/lvm/archive/ vg01
Couldn't find device with uuid 'pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB'.
Couldn't find device with uuid 'J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W'.
Cannot restore Volume Group vg01 with 2 PVs marked as missing.
Restore failed.
##Find the old UUIDs
$ grep -B 2 /dev/sdb /etc/lvm/archive/
pv0 {
id = "pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB"
device = "/dev/sdb1"    # Hint only
pv1 {
id = "J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W"
device = "/dev/sdb2"    # Hint only
##Recreate PV with the old UUID
$ pvcreate -u pVf1J2-rAsd-eWkD-mCJc-S0pc-47zc-ImjXSB /dev/sdb1
Physical volume "/dev/sdb1" successfully created
$ pvcreate -u J14aVl-mbuj-k9MM-63Ad-TBAa-S0xF-VElV2W  /dev/sdb2
Physical volume "/dev/sdb2" successfully created
##run vgcfgrestore again
$ vgcfgrestore -f /etc/lvm/archive/ vg01
Restored volume group vg01
##The restored LV must be activated before the data can be accessed
$ mount /dev/vg01/lv01 /mnt/
mount: special device /dev/vg01/lv01 does not exist
$ lvchange -a y vg01/lv01
$ mount /dev/vg01/lv01 /mnt/
$ cat /mnt/hosts       localhost

Saturday, May 7, 2011

RHCS(Red Hat Cluster Suite) quorum disk

The last post, "RHCS I/O fencing", is about dealing with the split-brain situation, in which cluster members lose heartbeat communication and each believes it is legitimate to write data to the shared storage.
Methods to deal with split-brain situation:
1. Redundant heartbeat paths
Network port communication plus serial port communication.
2. I/O fencing
The remaining nodes separate the failed node from its storage by shutting down/rebooting its power port or storage port.
3. Quorum disk
A quorum disk is a kind of I/O fencing, but the reboot action is executed by the failed node's own quorum daemon. It also has an additional feature: contributing votes to the cluster. If you want the last standing node to keep a multi-node cluster running, a quorum disk appears to be the only solution.
RHCS (Red Hat Cluster Suite) Quorum disk facts
- A shared block device (SCSI/iSCSI/FC..); the device size requirement is approximately 10MiB
- Supports a maximum of 16 nodes; node IDs must be sequentially ordered
- A quorum disk can contribute votes. In a multi-node cluster, together with the quorum vote, the last standing node can still keep the cluster running
- single node votes + 1 <= quorum disk votes < total node votes
- Failure of the shared quorum disk won’t result in cluster failure, as long as quorum disk votes < total node votes
- Each node writes its own health information to its own region; health is determined by an external checking program such as "ping"
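The vote inequality above is easy to check numerically. This sketch (plain shell, no cluster required; qdisk_vote_ok is a made-up helper) uses the same numbers as the cluster.conf excerpt later in this post: 3 nodes with 2 votes each and a quorum disk worth 3 votes.

```shell
#!/bin/sh
# Check the rule: single node votes + 1 <= qdisk votes < total node votes
qdisk_vote_ok() {  # usage: qdisk_vote_ok <node_votes> <nodes> <qdisk_votes>
    total=$(( $1 * $2 ))
    if [ $(( $1 + 1 )) -le "$3" ] && [ "$3" -lt "$total" ]; then
        echo ok
    else
        echo bad
    fi
}
qdisk_vote_ok 2 3 3   # prints "ok"  (expected_votes = 6 + 3 = 9)
qdisk_vote_ok 2 3 6   # prints "bad" (qdisk vote must stay below total node votes)
```

The lower bound lets one surviving node plus the quorum disk outvote the failed majority; the upper bound stops the quorum disk alone from keeping a dead cluster "quorate".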
Setup Quorum disk
#initialize the quorum disk once on any node
mkqdisk -c /dev/sdx -l myqdisk 
Add quorum disk to cluster 
Use luci or system-config-cluster to add the quorum disk; the following is the resulting xml file
<clusternode name="" nodeid="1" votes="2">
<clusternode name="" nodeid="2" votes="2">
<clusternode name="" nodeid="3" votes="2">
#expected votes =9=(nodes total votes + quorum disk votes) = (2+2+2+3)       
<cman expected_votes="9"/> 
#The health check result is written to the quorum disk every 2 secs
#if health checks fail for more than 5 tko intervals, i.e. 10 (2*5) secs, the node is rebooted by the quorum daemon
#Each heuristic check runs every 2 secs and earns 1 score if the program's exit status is 0
<quorumd interval="2" label="myqdisk" min_score="2" tko="5" votes="3">
<heuristic interval="2" program="ping -c1 -t1" score="1"/>
<heuristic interval="2" program="ping -c1 -t1" score="1"/>
Start quorum disk daemon
The daemon is one of the daemons automatically started by cman.
service qdiskd start
Check quorum disk information
$ mkqdisk -L -d
mkqdisk v0.6.0
Magic:                eb7a62c2
Label:                myqdisk
Created:              Sat May  7 05:56:35 2011
Kernel Sector Size:   512
Recorded Sector Size: 512
Status block for node 1
Last updated by node 1
Last updated on Sat May  7 15:09:37 2011
State: Master
Flags: 0000
Score: 0/0
Average Cycle speed: 0.001500 seconds
Last Cycle speed: 0.000000 seconds
Incarnation: 4dc4d1764dc4d176
Status block for node 2
Last updated by node 2
Last updated on Sun May  8 01:09:38 2011
State: Running
Flags: 0000
Score: 0/0
Average Cycle speed: 0.001000 seconds
Last Cycle speed: 0.000000 seconds
Incarnation: 4dc55e164dc55e16
Status block for node 3
Last updated by node 3
Last updated on Sat May  7 15:09:38 2011
State: Running
Flags: 0000
Score: 0/0
Average Cycle speed: 0.001500 seconds
Last Cycle speed: 0.000000 seconds
Incarnation: 4dc4d2f04dc4d2f0
The cluster is still running with the last node standing.
Please note Total votes = quorum votes = 5 = 2+3; if the quorum disk vote were less than (node votes + 1), the cluster wouldn’t have survived.
$cman_tool status
Nodes: 1
Expected votes: 9
Quorum device votes: 3
Total votes: 5
Quorum: 5