Sunday, June 26, 2011

Provisioning Red Hat Linux by PXE kickstart with Spacewalk

Spacewalk is an open source (GPLv2) Linux systems management solution. It is the upstream community project from which the Red Hat Network Satellite product is derived.
What Spacewalk can do:
- YUM repository server, to which clients connect via yum-rhn-plugin
- Provision (kickstart) physical or virtual systems (using cobbler)
- Manage and deploy configuration files and software to groups of servers
- Monitor your systems (CPU, disk space etc.) and inventory them (hardware and software information)
OS supported by Spacewalk:
- Red Hat Linux derivatives (CentOS, Fedora, Scientific Linux) and Solaris
- Limited support for SUSE Linux (AutoYaST support is planned in v1.5, https://fedorahosted.org/spacewalk/roadmap )
- Experimental support for Debian (https://fedorahosted.org/spacewalk/wiki/Deb_support_in_spacewalk )
This post is not a complete guide to Spacewalk installation and administration; its goal is to PXE kickstart a physical server and have it registered to the Spacewalk server when the kickstart completes. Kickstarting a server is easy, but having it registered to the Spacewalk server needs tweaking.
Useful documents for Spacewalk:
http://wiki.centos.org/HowTos/PackageManagement/Spacewalk
https://fedorahosted.org/spacewalk/wiki/UserDocs
http://docs.redhat.com/docs/en-US/Red_Hat_Network_Satellite/index.html
This post demonstrates Spacewalk 1.4 kickstarting CentOS 5.5 i386.
Steps Summary:
- Setup PXE boot server environment
- Create OS base channel
- Create child channel (tools channel)
- Create distribution tree
- Create activation key
- Create kickstart profile
Setup PXE boot server environment
Set up tftp and dhcpd; refer to http://honglus.blogspot.com/2011/06/setup-pxe-boot-server-for-linux-server.html
However, you don't need to edit pxelinux.cfg/default, which will be managed by Spacewalk.
Create OS base channel
Navigate to: Channels | Manage software channels | Create new channel
The channel label is significant: it is the channel label, not the channel name, that is referenced in channel operations.
For GPG key section, refer to “GPG Sign RPM file” http://honglus.blogspot.com/2011/05/build-rpm-from-source-file.html
The GPG public key needs to be copied to “/var/www/html/pub”, so it can be downloaded from http://Server/pub/MY-GPG-FILE-NAME
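For example, publishing the key on the server and importing it on a client might look like this (the key file name and host name are placeholders):
#on the Spacewalk server: publish the public key
$cp MY-GPG-FILE-NAME /var/www/html/pub/
#on each client: import the key so signed packages can be verified
$rpm --import http://spacewalk.example.com/pub/MY-GPG-FILE-NAME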
#Import OS rpms to OS Base channel
#Before importing rpms, it is recommended to resign all rpms with your own GPG key (rpm --resign *.rpm); otherwise you need to import the rpms' original GPG key on all clients.
$spacewalk-repo-sync  -c channel-label --url http://mirror.centos.org/centos/5/os/i386/
# You can also import rpms from installation media with --url file:///media/cdrom
Create child channel (tools channel)
Create a child channel under the base channel created in the last step, using the same GPG information.
#import spacewalk client rpms to the child channel
$spacewalk-repo-sync  -c child-channel-label --url http://spacewalk.redhat.com/yum/1.4-client/RHEL/5/i386/
#python-ethtool doesn't exist at the above site; you need to download it manually from the EPEL repository http://fedoraproject.org/wiki/EPEL
#import a single rpm  to the child channel
$rhnpush  -c  child-channel-label  -u satadmin python-ethtool*.rpm 
Create distribution tree
The distribution tree holds installation files, e.g. “images/stage2.img”, which can't be imported into a Spacewalk channel.
#Create distribution tree path
$mkdir -p /var/distro-trees/centos-32-5.5
#Copy everything in installation media except for rpm files to the dir
#rpm files will be retrieved from channels
$cd /media/cdrom; find . ! -path "./CentOS/*"   | cpio -pvd /var/distro-trees/centos-32-5.5
Navigate to: Systems | Kickstart | Distributions | Create new distribution

Create activation key
The activation key is bound to a base channel and entitlements; it is used by clients to register to Spacewalk without password authentication.
Navigate to: Systems | Activation Keys | Create new Key
Select the base channel and enable provisioning add-on entitlements
In child channels, select the child channel.
Optionally, if you want to pull configuration files, e.g. /etc/ntp.conf, during kickstart, you need to create a configuration channel and bind it to the activation key.
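Once a client is registered with such a key, the files in the bound configuration channel can also be pulled manually with rhncfg-client, for example:
#on a registered client: list and pull files from bound configuration channels
$rhncfg-client list
$rhncfg-client get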
Create kickstart profile
Navigate to: Systems | Kickstart | Create new kickstart profile

In Operating Systems, select the base channel and child channel
In Software, enter the following packages in addition to @ Base:
rhn-check
rhn-setup
yum-rhn-plugin
python-ethtool
python-dmidecode
rhncfg-client
rhncfg-actions
#The above packages provide rhnreg_ks and rhn_check, which register the system to Spacewalk during kickstart; without them the kickstart post script encounters errors:
/tmp/ks-script-KOlpXy: line 128: rhnreg_ks: command not found
/tmp/ks-script-KOlpXy: line 134: rhn_check: command not found
#You can also write your own snippets in “/var/lib/cobbler/snippets” to add packages dynamically.
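For reference, the registration performed by the generated post script boils down to something like the following (the server URL and activation key here are placeholders):
#register to spacewalk using the activation key, then run a first check-in
$rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC --activationkey=1-mykey
$rhn_check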

In Activation Keys, select the activation key
Once the kickstart profile is created, entries are added to the PXE configuration file
/tftpboot/pxelinux.cfg/default
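The generated entry looks roughly like the following sketch (label, paths and URL are illustrative; the exact form depends on the cobbler version and profile name):
label centos-32-5.5
kernel /images/centos-32-5.5/vmlinuz
append initrd=/images/centos-32-5.5/initrd.img ksdevice=link ks=http://spacewalk.example.com/cblr/svc/op/ks/profile/centos-32-5.5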
Power on the server to be provisioned; when the kickstart completes, it should be registered and appear in Spacewalk.

Saturday, June 18, 2011

Setup PXE Boot Server for Linux Server Provisioning

PXE stands for Preboot eXecution Environment; for PXE to work, the NIC and BIOS must both support it (the VirtualBox PCnet adapter type supports PXE boot)
PXE boot server components
- DHCP server    #assigns an IP address and redirects to the tftp server
- tftp server    #serves boot loaders and configuration files
- syslinux       #provides the stage-1 boot loader pxelinux.0, which is installed on the boot server, independent of the OS to be provisioned
The PXE boot process
1. NIC requests DHCP information (DHCPDISCOVER to port 67/UDP)
2. DHCP server provides bootloader name and IP of tftp server
#relevant DHCP config
next-server 172.16.1.10;
filename "pxelinux.0";
3. NIC uses tftp to fetch the bootloader into RAM (tftp <tftp-server> -c get pxelinux.0)
4. BIOS executes bootloader
5. Bootloader uses tftp to find and retrieve a configuration file, in the following order:
        [5.1] MAC address in hex with dashes, prefaced with the ARP type code
        [5.2] IP address expressed in hex
        #Convert dotted decimal to hex with the gethostip command
        $gethostip 192.0.2.91
        192.0.2.91 192.0.2.91 C000025B
        [5.3] Strips one hex digit of the IP at a time from the right-hand side until a file is found
        [5.4] Last attempt is "default"
As an example, if the boot file name is /tftpboot/pxelinux.0, the Ethernet MAC address is 88:99:AA:BB:CC:DD and the IP address is 192.0.2.91, it will try:
/tftpboot/pxelinux.cfg/01-88-99-aa-bb-cc-dd
/tftpboot/pxelinux.cfg/C000025B
/tftpboot/pxelinux.cfg/C000025
... 
/tftpboot/pxelinux.cfg/C
/tftpboot/pxelinux.cfg/default
6. Bootloader loads the kernel: vmlinuz and initrd.img, as defined in the retrieved configuration file.
Install PXE Boot Server components
The setup procedure is demonstrated on CentOS 5.
$yum install tftp-server dhcp syslinux
tftp is configured in /etc/xinetd.d/tftp and controlled by /etc/init.d/xinetd
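The tftp service ships disabled under xinetd on CentOS 5; a minimal way to enable it:
$chkconfig tftp on          #sets "disable = no" in /etc/xinetd.d/tftp
$service xinetd restart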
Prepare  tftp directory structure  and populate initial files
$mkdir -p /tftpboot/{pxelinux.cfg,centos-i686-5.5}
pxelinux.cfg                #The directory for client OS configuration files
centos-i686-5.5           #An optional directory to hold vmlinuz, initrd.img specific to a Linux release 

#find pxelinux.0 on PXE boot Server and copy it to tftpboot
$rpm -ql syslinux | grep pxelinux.0
/usr/lib/syslinux/pxelinux.0
$cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
$cp /usr/lib/syslinux/menu.c32   /tftpboot/

#Copy vmlinuz and initrd.img from the installation media of the client OS to be provisioned
$cp /media/cdrom/images/pxeboot/{initrd.img,vmlinuz} /tftpboot/centos-i686-5.5
Create PXE configuration file for client OS
#Derive the configuration file name from the IP to be assigned to the client OS
$gethostip 172.16.1.128
172.16.1.128 172.16.1.128 AC100180

#Edit config file
#reference: /usr/share/doc/syslinux*/syslinux.doc
#sample config: /media/cdrom/isolinux/isolinux.cfg
$vi /tftpboot/pxelinux.cfg/AC100180
default linux
prompt 1
#timeout in units of 1/10 s.
timeout 20
#display boot.msg
label linux
kernel centos-i686-5.5/vmlinuz
append initrd=centos-i686-5.5/initrd.img ks=http://172.16.1.10/pxe/centos.ks ksdevice=link

#If no config file is defined for the host, default to booting from local (non-PXE) media
$vi /tftpboot/pxelinux.cfg/default
default normal
prompt 0
label normal
localboot 0
##Instead of the above method (loading a specific kernel based on a per-host config), you can have a single default config and let the user choose which kernel to load.
$ cat /tftpboot/pxelinux.cfg/default
DEFAULT menu.c32
PROMPT 0
MENU TITLE Select a boot option
TIMEOUT 200
TOTALTIMEOUT 6000
ONTIMEOUT local

LABEL local
MENU LABEL (local)
MENU DEFAULT
LOCALBOOT 0

LABEL centos-i686-5.5
kernel /centos-i686-5.5/vmlinuz
MENU LABEL centos-i686-5.5
append initrd=/centos-i686-5.5/initrd.img ks=http://172.16.1.10/pxe/centos.ks ksdevice=link

LABEL centos-x86_64-5.5
kernel /centos-x86_64-5.5/vmlinuz
MENU LABEL centos-x86_64-5.5
append initrd=/centos-x86_64-5.5/initrd.img ks=http://172.16.1.10/pxe/centos.ks ksdevice=link

Setup DHCP Server
#Activate dhcpd  on specific NIC only.
$vi /etc/sysconfig/dhcpd
DHCPDARGS=eth1

#Edit dhcpd configuration file
#The client OS is assigned a fixed IP “172.16.1.128” based on its MAC address, which can be retrieved from /var/log/messages when the client first boots from PXE.
$cat /etc/dhcpd.conf
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.sample
ddns-update-style interim;
ignore client-updates;
subnet 172.16.1.0 netmask 255.255.255.0 {
        # --- default gateway
        option routers                  172.16.1.254;
        option subnet-mask              255.255.255.0;
        option domain-name              "example.com";
        option domain-name-servers      172.16.1.10;
        range dynamic-bootp 172.16.1.128 172.16.1.200;
        #lease times are in seconds
        default-lease-time 21600;
        max-lease-time 43200;
        next-server 172.16.1.10;
        filename "pxelinux.0";
        host host1 {
                hardware ethernet 08:00:27:9b:ac:9b;
                fixed-address 172.16.1.128;
        }
}
Start the DHCP server
$service dhcpd start
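Before powering on a client, you can sanity-check the tftp side from any Linux host with the tftp client (172.16.1.10 is this example's boot server):
$tftp 172.16.1.10 -c get pxelinux.0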
Boot Client
Change the boot order in the BIOS to prefer network boot, then power on the server to be provisioned
- The client boots up automatically after finding its configuration file “AC100180”

 - A client whose configuration file is not found waits for the user's input


Wednesday, June 15, 2011

Red Hat Enterprise Virtualization(RHEV) Notes

This post only highlights some useful notes; for step-by-step instructions, refer to the Red Hat RHEV documentation.

RHEV has two components: Red Hat Enterprise Virtualization Manager (RHEV-M) and the managed hypervisor, which can be either RHEV-H (RHEV Hypervisor, a trimmed-down version of RHEL) or full-blown RHEL 5.5 (64-bit) or newer.

Download RHEV
Red Hat doesn’t publish a publicly available evaluation copy; contact sales to get an evaluation copy of RHEV.
RHEV-M notes
- RHEV-M 2.2 supports Windows 2003 SP2 or Windows 2008 R2, although the RHEV 2.2 documentation only mentions Windows 2008 R2. Windows 2003 SP2 needs some hotfixes; just run Windows Update after installing .NET 3.5.1/IIS/PowerShell 2.0.
Windows 2008 (non-R2) is NOT supported.
- RHEV-M can use the hosts file instead of DNS, but the “Do not validate fully qualified computer name” checkbox needs to be selected when installing RHEV-M.
- RHEV-M login relies on a Windows account, which can be a generic local account or an AD account.
- RHEV-M's backend DB is SQL Server 2005; by default it installs “SQL Server 2005 Express” locally, with an option to connect to an external DB.
- If the RHEV Manager login URL is not redirected after installing the trusted certificate and adding the trusted website, point the browser directly to https://FQDN/RHEVmanager/WPFclient.xbap
RHEV-H notes
#RHEV-H boot prompt options
:     #Just press enter to start installation.
:linux rescue     #same as RHEL rescue mode
:linux firstboot   #invoke interactive installation menu
:linux upgrade   #upgrade hypervisor
:linux nocheck   #disable installation media check
#Hypervisor Configuration Menu
Red Hat Enterprise Virtualization Hypervisor release 5.5-2.2
Hypervisor Configuration Menu
1) Configure storage partitions    6) Configure the host for Red Hat Enterprise Virtualization
2) Configure authentication        7) View logs
3) Set the hostname                8) Install locally and reboot
4) Networking setup                9) Support Menu
5) Register Host to RHN
#notes on the options
“5) Register Host to RHN” is optional; just configure options 1, 2, 3, 4 and 6, then choose 8
“9) Support Menu” has an option to uninstall an existing RHEV-H
Troubleshooting after RHEV-H has been installed
If RHEV-H successfully connects to RHEV-M, it should appear in the RHEV-M Hosts tab with status “Pending Approval”; clicking the “Approve” button finalizes the installation. (The “Add host” option only works for a RHEL host used as a hypervisor; RHEV-H, being a trimmed-down version of RHEL, has to use the registration flow.)
If for some reason RHEV-H doesn't appear in RHEV-M, check the following first:
 - RHEV-M's Windows 2003 SP2 has all the latest updates
 - The RHEV-M host name is resolvable, and telnet to the host on ports 80 and 443 works
 - Date and time match on RHEV-H and RHEV-M, and /etc/init.d/ntpd is working
then try to re-register RHEV-H to RHEV-M:
#re-invoke the Hypervisor Configuration Menu
$setup                      #select option 6 to re-configure hostname for RHEV-M
#restart registration process
/etc/init.d/vdsm-reg restart
#check registration log
/var/log/vdsm-reg/vdsm-reg.log

#Configuration files in RHEV-H
#vdsm registration script
#registers the host to RHEV-M; it seems it doesn't need to be running once registration is successful
/etc/init.d/vdsm-reg              #start-up script
/etc/vdsm-reg/vdsm-reg.conf       #configuration file
/var/log/vdsm-reg/vdsm-reg.log    #log file
#Management agent
#by default, listening on port 54321 to communicate with RHEV-M
/etc/init.d/vdsmd
/etc/vdsm/vdsm.conf
/var/log/vdsm/vdsm.log
You are not supposed to create new configuration files in RHEV-H; any new files in /etc/ will be lost after reboot. To survive a reboot, copy your customized files, e.g. /etc/hosts and /etc/resolv.conf, to “/config/etc/” once. The next time RHEV-H boots up, it synchronizes all files in /config/etc/* to /etc.
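For example, to persist a customized hosts file and resolver configuration across reboots:
$cp /etc/hosts /config/etc/hosts
$cp /etc/resolv.conf /config/etc/resolv.conf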
NFS store
- The NFS export must be writable by vdsm:kvm (uid:gid 36:36)
- RHEV-M ships a Windows tool to upload ISO files to the ISO domain. The tool goes through two steps: first upload to the SPM (Storage Pool Manager) host, then move from the SPM host to NFS. You can actually WinSCP files to the NFS server directly, then change the file ownership to vdsm:kvm.
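A minimal sketch of preparing such an export on a Linux NFS server (the export path here is an assumption):
$mkdir -p /exports/iso
$chown 36:36 /exports/iso                 #vdsm:kvm as seen by RHEV hosts
$echo '/exports/iso *(rw,sync)' >> /etc/exports
$exportfs -r
#after copying an ISO file in manually, fix its ownership too
$chown 36:36 /exports/iso/path/to/file.iso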
Guest OS notes
- RHEV 2.2 doesn't support auto-starting guest OSes, which means that if RHEV-M and RHEV-H are rebooted, someone has to log in to RHEV-M and click “run” for each VM.
- RHEL 5.x has built-in VirtIO drivers for hard disk and network.
- Windows guests need the virtual floppy file virtio*.vfd copied to the ISO domain and mounted as a floppy (select “Run once” and attach the file as the floppy drive) in order for Windows to recognize the VirtIO hard disk. Once Windows boots up, install the “Guest tools” for the VirtIO NIC driver.

Saturday, June 11, 2011

Red Hat RHEV vs Vmware ESX

In 2009, Red Hat launched Red Hat Enterprise Virtualization (RHEV) to compete in the commercial virtualization market dominated by VMware. RHEV has two components: Red Hat Enterprise Virtualization Manager (RHEV-M) and the managed hypervisor, which can be either RHEV-H (RHEV Hypervisor, a trimmed-down version of RHEL) or full-blown RHEL 5.5 (64-bit) or newer.
Feature-wise, on paper, RHEV doesn't look too bad. But what is revealed when we dig further into the technical details and compare it with VMware?
Feature                     RHEV 2.2                         ESX 4
Manager
  Name                      RHEV-M                           vCenter
  Compatible OS             Windows 2003,                    Windows XP, Windows 2003,
                            Windows 2008 R2                  Windows 2008, Windows 2008 R2
  Backend DB                Microsoft SQL Server             Microsoft SQL Server, Oracle
  Application type          Web application (WPF .xbap)      Windows native application
  User interface            Web UI                           Web UI, Windows native application
  CLI [1]                   PowerShell                       PowerShell (PowerCLI), vCLI
  SDK & API                 PowerShell                       PowerShell, Perl, C#, Java
Hypervisor
  Type                      Linux kernel (KVM)               Proprietary
  Manager agent             Python script                    Binary daemon
  HA/Migration [2]          Yes                              Yes
  Manager independent [3]   No                               Yes
  CLI [4]                   No                               esxcfg-*/vimsh commands
  SDK & API                 No                               PowerShell, Perl, C#, Java
  Storage type [5]          NFS/iSCSI/FC                     Local disk/NFS/iSCSI/FC
Guest OS
  Supported OS [6]          Red Hat Enterprise Linux,        All major Linux distributions,
                            Windows                          Windows, Solaris, Mac OS/BSD
  Clone [7]                 Supported                        Supported
  Snapshot [8]              Limited support                  Supported
  Supported hard disk [9]   IDE, VirtIO                      IDE, SCSI
Cost                        ~2/3 of VMware cost              Expensive


NOTES:
[1] Manager CLI: RHEV-M PowerShell has far fewer cmdlets than PowerCLI.

[2] Hypervisor HA/Migration: RHEV requires a fencing method for HA, e.g. a smart power switch or a LOM card, to shoot the failed hypervisor in the head.

[3] Manager independent: In my opinion, this is RHEV's biggest design mistake. RHEV-M is the central brain and the hypervisor is a dumb host, which means you are NOT supposed to log in to the hypervisor to do configuration or VM operations, e.g. add a virtual network or start/stop VMs; all must be done in RHEV-M. On the other hand, each VMware ESX host is intelligent by design: you can perform almost anything with esxcfg-*/vimsh commands, and an ESX host relies on the manager only for HA and Distributed Resource Scheduling. (If RHEV-M fails, VMs on RHEV-H will not be interrupted, but don't touch them, because you can't restart them without RHEV-M.)

[4] Hypervisor CLI:  libvirt CLI tools are supported in KVM, but RHEV doesn’t use libvirt.

[5] Storage type: You can't utilize RHEV-H local storage; it is not visible in the manager. A RHEV datacenter has a "storage type" (NFS/iSCSI/FC) attribute, and only storage domains of that single type can be attached to the datacenter.

[6] Supported guest OS: On paper, RHEL and Windows are the only supported OSes, but you can install almost any x86 OS, because RHEV-H is based on KVM (full virtualization), not para-virtualization.

[7] Clone: RHEV doesn't call it cloning; you have to choose a template when creating a new VM. VMware supports cloning from a template or from a VM.

[8] Snapshot: You have to shut down a RHEV VM to snapshot it.

[9] VirtIO: RHEL 5.x has built-in VirtIO drivers, and other Linux distributions should also have VirtIO drivers. For Windows, RHEV provides a virtual floppy file, virtio*.vfd, to be used during installation. Any other OS without VirtIO has to use IDE (SCSI is not supported; VirtIO is supposed to deliver better performance than SCSI).

Conclusion:
In my opinion, RHEV Server is not yet enterprise ready, due to limitations [3], [4] and [8]. RHEV Server loses to VMware ESX in almost every feature compared. However, RHEV does a better job in desktop virtualization thanks to Qumranet, whose roots were in desktop virtualization. (In 2008, Red Hat acquired Qumranet, from which RHEV-M originated.)

It is reported that Red Hat is developing RHEV 3, which will be based on JBoss (Java) on Linux with a PostgreSQL DB backend. Hopefully RHEV 3 will redesign RHEV-H to make it “intelligent” by integrating libvirt for CLI capability on the hypervisor.

Wednesday, June 8, 2011

Create a shell script to display a progress meter in wget's style

The following shell script displays a progress meter and percentage similar to wget's.
The shell script output
$./meter.sh 
27 % |======================>                                                                   |
The shell script source code
$cat ./meter.sh
#!/bin/ksh
#Given a start and end number, display a progress meter and percentage in wget's style
#Requires ksh93 (floating-point arithmetic is used for scaling)
integer m1 m2      #meter segment widths, truncated to whole characters
start=1
#end=333
end=33
scale=$(($end/100.0))      #value range covered by one meter character
for (( i=start; i<=end; i++ ))
do
        m1=$(($i / $scale)); m2=$(( ($end - $i) / $scale ))
        #fill 2 segments of variable length with zeros
        str=$( printf "%0${m1}d %s %0${m2}d\n" 0 ">" 0 )
        str="|$str|"
        #replace first-segment zeros with '=', second-segment zeros with spaces, then re-join
        str1=$(echo $str | awk -F' > ' '{ print $1 }'); str1=${str1//0/=}
        str2=$(echo $str | awk -F' > ' '{ print $2 }'); str2=${str2//0/' '}
        str="${str1}>${str2}"
        pct=$(($i * 100 / $end ))
        #beautify the final iteration: fill the whole meter with '='
        [ $i -eq $end ] && str=$(echo $str | sed -e 's/ /=/g' -e 's/>/=/g' -e 's/0/=/g' )
        print -n "\r ${pct} % $str "
        sleep 1
done
printf "\n"