
Thursday, June 20, 2013

VMware PowerCLI: Map datastore name to LUN devicename.

Mapping a datastore name to its LUN device name is not as straightforward in native PowerCLI as you might expect, but the esxcli interface exposed through PowerCLI makes it easy (tested on ESXi 5.0):
PowerCLI>$esxcli=get-esxcli -vmhost esx01
PowerCLI>$esxcli.storage.vmfs.extent.list() | ft devicename,volumename -autosize

DeviceName                           VolumeName
----------                           ----------
naa.600601605bc02e00007fb97cacbee211 datastore01
naa.600601605bc02e00fac8cd88acbee211 datastore02

Wednesday, November 21, 2012

Enable Windows Active Directory Authentication in vSphere 5.1.

vSphere Single Sign On (SSO) is a new feature in vSphere 5.1. Because SSO now controls the authentication service, you can no longer add a new authentication provider to vCenter with the standard vSphere Client; it has to be done in the vSphere Web Client, which can talk to the vSphere SSO service.

Steps to add a Windows Active Directory provider:
1. Create a generic user in AD for LDAP searches, and define the user and group base DNs.
2. Install the Web Client from the vCenter installation media; like the vSphere Client, it doesn't need to be installed on the vCenter server itself.
3. Launch the Web Client at https://client-ip:9443/vsphere-client and log in.
The account used for login matters: if you installed the SSO service while logged in with a local account, that local account can log in to the Web Client but doesn't have permission to configure SSO. You have to log in with the default SSO account "admin@System-Domain" created during installation.
4. Navigate to Administration/Sign-on and Discovery/Configuration (the Configuration node won't be shown if you log in with a local Windows account) and click the "+" sign to add identity sources.
The login credentials are sent in clear text over LDAP; if that is a concern, enable LDAPS by creating a certificate.
The username should be in LDAP syntax; find the exact string with the ADSI Edit tool in AD.
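For example, a bind username in LDAP syntax is a full distinguished name along these lines (the names below are hypothetical; confirm the exact string with ADSI Edit):

```
CN=svc-vsphere-ldap,OU=Service Accounts,DC=example,DC=com
```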


Friday, July 6, 2012

Setup VMware vCenter 5 to use Oracle 11g R2 database

VMware vCenter supports DB2, Oracle, or MS SQL Server as its backend database. The built-in database for vCenter on Windows is SQL Server 2008 Express, which has limits on disk space and memory and is not suitable for enterprise use. For enterprise deployments, it is recommended to use a proper database engine, such as Oracle 11g R2.
Install Oracle Database

Select a compatible version of Oracle listed on the VMware website:
http://www.vmware.com/resources/compatibility/sim/interop_matrix.php

The versions used in this test:
   - vCenter 5.0.0 build 623373
   - Oracle 11g R2 11.2.0.3.0

Setup Oracle database for vCenter

- Create an Oracle SQL login account for vCenter.
- Estimate the vCenter database tablespace size requirement.
vCenter has a tool to estimate the size:
vCenter->administration->Server settings->statistics
For example, keeping data for 500 VMs for 1 year needs ~5GB of storage.

Old data can be purged automatically by setting up a retention policy:
vCenter->administration->Server settings->database retention policy
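As a quick sanity check, the figure above can be scaled to other inventory sizes, assuming (a simplification) that statistics storage grows roughly linearly with VM count and retention period; the built-in estimator in vCenter remains authoritative:

```python
# Back-of-envelope vCenter statistics DB sizing.
# Assumption: linear scaling from the ~5 GB per 500 VMs per year
# figure quoted above.
GB_PER_VM_YEAR = 5.0 / 500  # ~0.01 GB per VM per year

def estimate_tablespace_gb(num_vms: int, retention_years: float) -> float:
    """Rough tablespace requirement in GB for the statistics data."""
    return num_vms * retention_years * GB_PER_VM_YEAR

print(estimate_tablespace_gb(500, 1))   # roughly 5 GB
print(estimate_tablespace_gb(1000, 2))  # roughly 20 GB
```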

An extract from the “vSphere Installation and Setup” document:

#1 Log in to a SQL*Plus session with the system account.
#2 Run the following SQL command to create a vCenter Server database user with the correct permissions.
#The script is located in the vCenter Server {installation media}/vcenter/dbschema/DB_and_schema_creation_scripts_oracle.txt file.
#In this example, the user name is VPXADMIN.
CREATE USER "VPXADMIN" PROFILE "DEFAULT" IDENTIFIED BY "oracle" DEFAULT TABLESPACE
"VPX" ACCOUNT UNLOCK;
grant connect to VPXADMIN;
grant resource to VPXADMIN;
grant create view to VPXADMIN;
grant create sequence to VPXADMIN;
grant create table to VPXADMIN;
grant create materialized view to VPXADMIN;
grant execute on dbms_lock to VPXADMIN;
grant execute on dbms_job to VPXADMIN;
grant select on dba_tablespaces to VPXADMIN;
grant select on dba_temp_files to VPXADMIN;
grant select on dba_data_files to VPXADMIN;
grant unlimited tablespace to VPXADMIN;
#By default, the RESOURCE role has the CREATE PROCEDURE, CREATE TABLE, and CREATE
#SEQUENCE privileges assigned. If the RESOURCE role lacks these privileges, grant them to the vCenter
#Server database user.
#NOTE Instead of granting unlimited tablespace, you can set a specific tablespace quota. The
#recommended quota is unlimited with a minimum of at least 500MB. To set an unlimited quota, use the
#following command.
#alter user "VPXADMIN" quota unlimited on "VPX";
#If you set a limited quota, monitor the remaining available tablespace to avoid the following error.
#ORA-01536: space quota exceeded for tablespace '<tablespace>'
#3 (Optional) After you have successfully installed vCenter Server with the Oracle database, you can revoke
#the following privileges.
revoke select on dba_tablespaces from VPXADMIN;
revoke select on dba_temp_files from VPXADMIN;
revoke select on dba_data_files from VPXADMIN;


Prepare Windows server for vCenter



- Install Oracle ODBC client

Download both basic and ODBC client from Oracle website.


instantclient-basic-windows.x64-11.2.0.3.0.zip


instantclient-odbc-windows.x64-11.2.0.3.0.zip



#unzip instantclient-basic-windows.x64-11.2.0.3.0.zip to:
C:\Program Files\Oracle\instantclient_11_2
#unzip instantclient-odbc-windows.x64-11.2.0.3.0.zip to the same directory as the basic instant client
#run odbc_install.exe on the command line
C:\Program Files\Oracle\instantclient_11_2>odbc_install.exe
Oracle ODBC Driver is installed successfully.
mkdir C:\Program Files\Oracle\instantclient_11_2\network\admin
#copy tnsnames.ora  on Oracle server to the directory 
#Add a new Windows system variable "ORACLE_HOME=C:\Program Files\Oracle\instantclient_11_2"
#the variable only applies to newly started processes, so open a new command prompt to check it:
C:\ >echo %ORACLE_HOME%
C:\Program Files\Oracle\instantclient_11_2 


Open "ODBC Data Sources" in Administrative Tools and create a "System DSN": select the Oracle driver and type in the service name defined in tnsnames.ora, the username, etc.

Make sure the "Test Connection" result is OK.
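If you prefer to script the connectivity check, here is a minimal sketch of building the connection string the DSN test uses (the DSN name and credentials below are hypothetical; with the pyodbc package installed, the resulting string could be passed to pyodbc.connect):

```python
def build_odbc_conn_str(dsn: str, user: str, password: str) -> str:
    """Build an ODBC connection string for a system DSN."""
    return f"DSN={dsn};UID={user};PWD={password}"

# Hypothetical DSN name, with the VPXADMIN account created earlier.
print(build_odbc_conn_str("vcenter_oracle", "VPXADMIN", "oracle"))
# DSN=vcenter_oracle;UID=VPXADMIN;PWD=oracle
```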


Install vCenter

Follow the installation wizard to install vCenter. You may receive a warning that the Oracle client needs to be updated; select OK to continue.

Update  vCenter ojdbc client


The default JDBC client shipped with vCenter may be old (you can check its version by renaming ojdbc5.jar to ojdbc5.zip and opening it to inspect the META-INF manifest).



cd /d "C:\Program Files\VMware\Infrastructure\tomcat\lib\"
#backup original file
copy ojdbc5.jar ojdbc5.jar.orig
#overwrite with new ojdbc5.jar from instant client
copy C:\Program Files\Oracle\instantclient_11_2\ojdbc5.jar   ojdbc5.jar
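The manual version check described above can also be scripted, since a .jar file is just a zip archive; a small sketch:

```python
import zipfile

def jdbc_driver_version(jar_path: str) -> str:
    """Read Implementation-Version from a jar's META-INF/MANIFEST.MF."""
    with zipfile.ZipFile(jar_path) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
    for line in manifest.splitlines():
        if line.startswith("Implementation-Version:"):
            return line.split(":", 1)[1].strip()
    return "unknown"
```

For example, run it against the vCenter copy, e.g. jdbc_driver_version(r"C:\Program Files\VMware\Infrastructure\tomcat\lib\ojdbc5.jar").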

Thursday, May 17, 2012

VMware vDS alternative, Cisco Nexus 1000V quickstart

Cisco Nexus 1000V is a virtual switch running Cisco NX-OS Software; it is similar to the vSphere Distributed Switch.

The Cisco Nexus 1000V has two main components:
- Virtual supervisor module (VSM)
A VM running on a vSphere ESXi server (either a standalone ESXi server or a shared ESXi server hosting both VSM and VEM).
Provides the CLI interface for managing the Nexus 1000V switch.
Controls multiple VEMs as a single network device.
- Virtual Ethernet module (VEM)
An add-on module installed on the ESXi hypervisor, which controls the VEM daemons.
A kind of vSphere distributed switch.
Operationally independent of the VSM: if the VSM fails, the VEM continues to forward traffic, and parts of its configuration can even be managed with vemcmd.
Cisco N1000V specific traffic types:
- Management traffic: Traffic for the VSM management interface and for VMware vCenter Server falls into
this category. VMware vCenter Server requires access to the VMware ESX management interface to
monitor and configure the VMware ESX host. Management traffic usually has low bandwidth requirements,
but it should be treated as high-priority traffic
- Control traffic: Control traffic is generated by the Cisco Nexus 1000V Series and exchanged between the
primary and secondary VSMs as well as between the VSMs and VEMs. It requires little bandwidth (less
than 7 MB) but demands absolute priority.
- Packet traffic: Packet traffic transports selected packets to the VSM for processing. The bandwidth required
for the packet interface is extremely low, and its use is intermittent. If the Cisco Discovery Protocol and
IGMP features are turned off, there is no packet traffic at all.
- System VLAN: a system VLAN enables the VEM to forward traffic even when communication with the VSM is lost. The system VLAN is mandatory for the above three traffic types and for the VMware management interface, and it is also recommended for other vmkernel traffic, e.g. vMotion and iSCSI/NFS.
Cisco N1000V requirements:
- vSphere ESX/ESXi 4 or higher (check the compatibility guide on the Cisco website for details)
- vSphere ESX/ESXi must have an Enterprise Plus license (the N1000V is a kind of distributed switch)
- vSphere vCenter (the VSM needs to install a plugin into vCenter)
- The ESX/ESXi host must have at least 2 NICs if you plan to install both VSM and VEM on the same host
- The N1000V VM must use the thick disk type, and its network interfaces must use E1000
Cisco N1000V pros and cons
- Pros:
Because the Cisco N1000V runs Cisco NX-OS, it offers additional features over the vSphere Distributed Switch.
Central configuration through the NX-OS CLI, just like a physical switch: e.g. all live VMs' interfaces can be seen in "show run", and an access list can be applied on an interface.
True end-to-end QoS: apart from being allocated specific bandwidth by a policy-map, traffic leaving the Cisco N1000V is marked with a DSCP value, which the upstream Cisco switch understands. VMware NetIOC offers bandwidth allocation by "shares", but it is local to the VMware hypervisor only. (Update: vSphere 5 supports the IEEE 802.1p CoS value, which makes end-to-end QoS possible.)
True LACP port-channels; VMware doesn't support LACP bonding without the Cisco N1000V.
- Cons:
Additional license cost.
Unlike the distributed switch, which is built into VMware, the VSM is a third-party VM. Even though the VSM supports HA and a VSM failure doesn't stop the VEM from functioning, without the VSM it is impossible to make configuration changes.

Cisco n1000v deployment procedures

1) Download software
Download free trials of the ESXi host and vCenter from the VMware site, and a free trial of the N1000V from the Cisco website (check the compatibility guide to determine the exact versions).
In my test, I used:
  - VMware ESXi 5.0.0 build-474610
  - VMware vCenter Server 5.0.0 build-623373
  - Cisco N1000V version 4.2(1)SV1(5.1)
NOTE: The N1000V installation procedure may vary between versions; the following is for version 4.2(1)SV1(5.1).
2) Install ESXi and vCenter
Create 3 port-groups for the management, control, and packet traffic of the N1000V. (You can create a separate VLAN for each type, but sharing the same VLAN as the management interface of the ESXi host is sufficient, so all three port-groups here use the same VLAN ID.)
3) Install N1000V VSM
Unzip the downloaded Cisco N1000V, connect to vCenter, and deploy the OVA file located in “Nexus1000v.4.2.1.SV1.5.1\VSM\Install”. Follow the wizard to deploy the OVA. (NOTE: use the default settings; the disk type must use thick provisioning and the NIC must use E1000.)
Start up the N1000V VM, log in to the console with the credentials supplied earlier, and run “setup” to start the wizard to configure the N1000V. Most options are self-explanatory; the following are worth noting:
Configure Advanced IP options (yes/no)? [n]: no
Enable the ssh service? (yes/no) [y]: yes
Configure svs domain parameters? (yes/no) [y]: yes
Enter SVS Control mode (L2 / L3) : L3
#(VEM-VSM communication can operate in Layer 2 or Layer 3 mode; L3 is recommended.
#Note the N1000V itself is a Layer 2 device; IP routing is not supported.)

4) Establish connection between VSM and vCenter

#Launch the JAVA installer
C:\Nexus1000v.4.2.1.SV1.5.1\VSM\Installer_App>java -jar Nexus1000V-install.jar VC
#Follow the wizard to establish the connection between VSM and vCenter; as a result, a new distributed switch will be created in the “Networking” view in vCenter.
Verify the connection in N1000V CLI:
nv1> show svs connections
….
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 5.0.0 build-623373
vc-uuid: F1D0CEBA-C365-4F55-830D-A0B9BB6F8520
#Don’t continue to the next step until a successful connection is seen.


5) Install N1000V VEM


C:\Nexus1000v.4.2.1.SV1.5.1\VSM\Installer_App>java -jar Nexus1000V-install.jar VEM
#Launch the Java installer, which will connect to vCenter and VSM to push VEM module to ESX host.
#You can also install the VEM module manually.
C:\Nexus1000v.4.2.1.SV1.5.1\VEM\cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib
#Transfer the file to ESXi 5 host and run 
ESXi>esxcli software vib install -v /tmp/cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib

6) Establish connection between VEM and VSM

#Because VEM and VSM are on the same host, you need to put the VEM and VSM on separate physical NICs in order to migrate from the standard switch to the N1000V switch.
#Create port profile on N1000V
#create an uplink profile to be linked to ESXi host pNIC
# note the type is Ethernet and switchport is trunk
port-profile type ethernet vm-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 3-10
no shutdown
system vlan 3,5-6
state enabled
#Create port-group for vmkernel management interface.
#note the type is venthernet and “capability l3control” and system vlan are mandatory
port-profile type vethernet L3vmkernel
capability l3control
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
# the following  3 profiles for N1000V port-groups, system vlan is mandatory
port-profile type vethernet ds_ctl
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
port-profile type vethernet ds_mgt
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
port-profile type vethernet ds_pkt
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
#the following is for generic VM traffic; the vCenter VM will be migrated to this in the first stage.
#system vlan is optional
port-profile type vethernet vmdata
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled

After successful execution of the above commands, 5 objects will be created under the switch in the “Networking” view.
Right-click the switch and select “add host”; under select physical adapter, choose “vmnic1”, and in the uplink group select “vm-uplink”.
Don’t migrate vmk0 yet.
In migrate virtual machine networking, migrate the N1000V VM from the standard switch to its own N1000V switch, and migrate the vCenter VM to the “vmdata” group.
Make sure the N1000V VM and vCenter VM were migrated successfully by checking “show svs connections”.

The next step is to migrate vmk0.

Click the N1000V switch; in the configuration tab, select “manage hosts” and migrate vmk0 to the “L3vmkernel” port-group. Now everything has been migrated from vmnic0 to vmnic1 and vmnic0 is spare; you can create a port-channel on the N1000V and then migrate vmnic0 as well.

Only after a successful migration of vmk0 is the VEM-to-VSM connection established and the VEM module seen in the switch:

nv1> show module
Mod Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
3    248    Virtual Ethernet Module           NA                  ok


Other notes:

Unlike the VMware standard switch, where port-groups “grow” under a switch which is, in turn, linked to a pNIC (so there is a clear one-to-one mapping between port-group and pNIC), in the N1000V switch the relationship between port-group and pNIC follows the VLAN relationship.

Unlike the VMware standard switch, where a port-group can be tagged for “vMotion” traffic directly in the GUI, for the N1000V you have to do this in the following steps:
Switch to the “hosts and clusters” view, click the host, and in the management tab click the “vSphere Distributed Switch” view; click “manage virtual adapters”, add a vmkernel adapter, and in the connection settings select “use this adapter for vMotion”.


#Sample QOS configuration bound for a port-group
policy-map type qos po-vmdata
class class-default
police cir 1500 mbps bc 200 ms conform transmit violate set dscp dscp table pir-markdown-map
port-profile type vethernet vmdata
service-policy input po-vmdata
service-policy output po-vmdata
#Control VEM in ESXi host directly when VSM is not available.
#unblock a port by defining correct system vlan
ESXi > vemcmd show port
LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
49                DOWN   UP   BLK       0        testlinux.eth0
ESXi > vemset system-vlan 3 ltl 49
ESXi > vemcmd show port
LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
49                UP   UP    FWD       0        testlinux.eth0

Cisco Nexus 1000V Series Switches download and document links
http://www.cisco.com/en/US/products/ps9902/index.html

ESXi 4 kickstart and ESXi 5 kickstart examples

What has changed in ESXi 5 kickstart:
1. ESXi 5 supports EFI PXE boot (the file is under install media\EFI\boot).
2. ESXi 5 loads all needed packages into memory, so there is no need to specify the installation media over http/nfs etc.
3. ESXi 5 supports VLAN tagging via the --vlanid parameter; previously the PXE boot NIC had to be on the native VLAN (VLAN 1).
4. ESXi 5 no longer supports setting license data in the kickstart file.

ESXi 4 kickstart file example

vmaccepteula
#create encrypted password with command “openssl passwd -1”
rootpw --iscrypted $1$XXXXXXXXXXXXXXXX
autopart --firstdisk=mpx.vmhba32:C0:T0:L0 --overwritevmfs
install url http://172.16.1.18/boot/XXXX
network --device=vmnic0   --hostname=esxi1  --bootproto=static  --ip=172.16.1.4 --netmask=255.255.255.224 --gateway=172.16.1.30 --nameserver="172.16.1.18,172.16.1.19"
serialnum --esx=4H42M-XXXX-XXXXX-XXXXX-XXXXX
%post --interpreter=busybox --ignorefailure=true
reboot
#enable local console and remote ssh login
%firstboot --interpreter=busybox
/etc/init.d/TSM start
/etc/init.d/TSM-SSH  start
/sbin/chkconfig TSM on
/sbin/chkconfig TSM-SSH on

ESXi 5 kickstart file example

vmaccepteula
#create encrypted password with command “openssl passwd -1”
rootpw --iscrypted $1$XXXXXXXXXXXXXXXX
clearpart --firstdisk=local --overwritevmfs
install --firstdisk --overwritevmfs
#ESXi 5 retrieves software from the image loaded into memory by PXE, so there is no need to specify a software repository location
network --device=vmnic0  --vlanid=3 --hostname=esxi2  --bootproto=static  --ip=172.16.1.5 --netmask=255.255.255.224 --gateway=172.16.1.30 --nameserver="172.16.1.18,172.16.1.19"
#loading license data in the kickstart file is no longer supported in ESXi 5
#serialnum --esx=4H42M-XXXX-XXXXX-XXXXX-XXXXX
%post --interpreter=busybox --ignorefailure=true
reboot
#enable local console and remote ssh login
%firstboot --interpreter=busybox
/etc/init.d/SSH start
/etc/init.d/ESXShell start
/sbin/chkconfig SSH on
/sbin/chkconfig ESXShell  on

Monday, May 7, 2012

PXE kickstart VMware ESXi 5 on a trunked interface (Tagged VLAN)

PXE kickstart of a VMware ESX host was usually performed on the native VLAN; it was not possible on a trunked interface (tagged VLAN) due to two obstacles, which have been overcome by the following technologies:
1. Multiple Boot Agent (MBA): the NIC supports VLAN tagging in its BIOS, such as the MBA of Broadcom NICs (otherwise the PXE client couldn't get an IP address from DHCP in the first place).
2. VMware ESXi 5: a new parameter, vlanid, is introduced as a boot option (not the same as the vlanid in the kickstart configuration file).

Configure MBA for the Broadcom NetXtreme II BCM5709 in an IBM x3850 server:
Press F1 to enter BIOS -> System Settings -> Network -> select the network adapter in the device list -> Configure Multiple Boot Agent (MBA) and iSCSI parameters -> MBA Configuration Menu -> VLAN mode / VLAN ID

Add VLANID in ESXi 5 boot option

A sample boot.cfg for gPXE

bootstate=0
title=Loading ESXi installer
kernel=http://pxe-server.example.com/boot/os/esxi-5.0-x64/pxeboot/tboot.b00
#BOOTIF is mandatory for gPXE. It seems ESXi 5 couldn't get the name server and domain name from DHCP, so the name server is assigned manually.
kernelopt=BOOTIF=01-5c-f3-fc-94-e4-18 vlanid=3 nameserver=172.16.1.1  ks=http://pxe-server.example.com/boot/hosts/linux-ks/5c-f3-fc-94-e4-18.txt
modules=http://pxe-server.example.com//boot/os/esxi-5.0-x64/pxeboot/b.b00  ... < ..omitted ..>
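Both the BOOTIF value and the per-host kickstart filename above derive from the NIC's MAC address; a small helper sketch (the ".txt" naming convention is an assumption matching the ks= URL above):

```python
def mac_to_bootif(mac: str) -> str:
    """gPXE/pxelinux BOOTIF value: hardware type '01' (Ethernet)
    plus the dash-separated, lower-case MAC."""
    return "01-" + mac.lower().replace(":", "-")

def mac_to_ks_filename(mac: str) -> str:
    """Per-host kickstart filename, as used in the ks= URL above."""
    return mac.lower().replace(":", "-") + ".txt"

print(mac_to_bootif("5c:f3:fc:94:e4:18"))      # 01-5c-f3-fc-94-e4-18
print(mac_to_ks_filename("5c:f3:fc:94:e4:18")) # 5c-f3-fc-94-e4-18.txt
```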

Thursday, January 5, 2012

vSphere PowerCLI script to clone and customize Windows guest OS


VMware can also customize a Windows guest OS with the Windows sysprep tool, though the process is more complex than for a Linux guest OS.

There are two options to clone and “sysprep” a VMware Windows guest OS:
1. Install the sysprep tools in the Windows guest OS and run sysprep.exe on the guest OS command line, then clone it with VMware.
2. Install the sysprep tools in Virtual Center and let VMware Tools in the Windows guest control the sysprep process, either through the GUI or by script. (Sysprep seems to rely on VMware Tools, so VMware Tools must be installed in the guest OS.)

Option #2 is the preferred method, because you can use a script to easily customize unique information, e.g. computer name, IP addresses, etc.

Install sysprep tools in Virtual Center
See VMware KB: Sysprep file locations and versions.
Windows Vista onwards (Vista/2008/7/2008 R2) doesn't need this step, because sysprep is built in.
Create guest customization by GUI
Virtual Center -> View -> Management -> “Customization Specification Manager”. Create a new customization and select it when asked for guest customization information in the GUI clone action.
Create guest customization by script
Customization by GUI is not flexible; customization by script can be created on the fly. The following is an enhancement to clonevm.ps1 from “vSphere PowerCLI script to clone and customize Linux guest OS”: just replace the “Identity for Linux” part with the following code block.

$s_ostype=Retrieve-values $lines "ostype"
if ( $s_ostype -eq "linux" ) { 
## Identity for Linux 
$vmclonespec_os.identity= New-Object VMware.Vim.CustomizationLinuxPrep
$vmclonespec_os.identity.hostname= New-Object VMware.Vim.CustomizationFixedName
$vmclonespec_os.identity.hostname.name= $s_dstname
$vmclonespec_os.identity.domain=$s_domain
}
elseif ( $s_ostype -eq "windows" ) {
# WinOptions
$vmclonespec_os.Options = New-Object  VMware.Vim.CustomizationWinOptions
$vmclonespec_os.Options.ChangeSID = 1
#sysprep
$vmclonespec_os.identity = New-Object VMware.Vim.CustomizationSysprep
# GUIUnattended
$vmclonespec_os.Identity.GuiUnattended = New-Object VMware.Vim.CustomizationGuiUnattended
$vmclonespec_os.Identity.GuiUnattended.AutoLogon = 0
#timezone codes: http://msdn.microsoft.com/en-us/library/ms145276(v=sql.90).aspx
$vmclonespec_os.Identity.GuiUnattended.TimeZone  = 255
$vmclonespec_os.Identity.GuiUnattended.Password = New-Object VMware.Vim.CustomizationPassword
$vmclonespec_os.Identity.GuiUnattended.Password.PlainText = 1
$vmclonespec_os.Identity.GuiUnattended.Password.Value = "Secret01"
# Identification
$vmclonespec_os.Identity.Identification = New-Object VMware.Vim.CustomizationIdentification
$vmclonespec_os.Identity.Identification.joinWorkgroup = "workgroup2"
## Userdata
$vmclonespec_os.identity.userData = New-Object VMware.Vim.CustomizationUserData
$vmclonespec_os.identity.userData.computerName = New-Object VMware.Vim.CustomizationFixedName
$vmclonespec_os.identity.userData.computerName.name = $s_dstname
$vmclonespec_os.Identity.UserData.FullName = "Administrator"
$vmclonespec_os.Identity.UserData.OrgName="myOrg"
$vmclonespec_os.Identity.UserData.Productid=""
}
else {
write-host "Unknown ostype: $s_ostype. Please set it to linux or windows"
exit
}
}

NOTES: 
 - The guest OS (Windows/Linux) will be forcefully rebooted by VMware Tools again ~1 minute after you power it on, so don't rush to log in to check the result.
 - The script only works in a live session to Virtual Center; it doesn't work in a direct login session to an ESX host.

Wednesday, June 15, 2011

Red Hat Enterprise Virtualization(RHEV) Notes

This post only highlights some useful notes; for step-by-step instructions, refer to the Red Hat RHEV documentation.

RHEV has two components: Red Hat Enterprise Virtualization Manager (RHEV-M) and the managed hypervisor, which can be RHEV-H (the RHEV hypervisor, a trimmed-down version of RHEL) or full-blown RHEL 5.5 (64-bit) or newer.

Download RHEV
Red Hat doesn't publish a publicly available evaluation copy; contact sales to get an evaluation copy of RHEV.
RHEV-M notes
- RHEV-M 2.2 supports Windows 2003 SP2 or Windows 2008 R2, although the RHEV 2.2 documentation only mentions Windows 2008 R2. Windows 2003 SP2 needs some hotfixes; just run Windows Update after installing .NET 3.5.1, IIS, and PowerShell 2.0.
Windows 2008 is NOT supported.
- RHEV-M can use a hosts file instead of DNS, but the “Do not validate fully qualified computer name” checkbox needs to be selected when installing RHEV-M.
- RHEV-M login relies on a Windows account, which can be a generic local account or an AD account.
- RHEV-M's backend DB is SQL Server 2005; by default it installs “SQL Server 2005 Express” locally, with an option to connect to an external DB.
- If the RHEV Manager login URL is not redirected after installing the trusted certificate and adding the trusted website, point the URL directly to https://FQDN/RHEVmanager/WPFclient.xbap
RHEV-H notes
#RHEV-H boot prompt options
:     #Just press enter to start installation.
:linux rescue     #same as RHEL rescue mode
:linux firstboot   #invoke interactive installation menu
:linux upgrade   #upgrade hypervisor
:linux nocheck   #disable installation media check
#Hypervisor Configuration Menu
Red Hat Enterprise Virtualization Hypervisor release 5.5-2.2
Hypervisor Configuration Menu
1) Configure storage partitions    6) Configure the host for Red Hat Enterprise Virtualization
2) Configure authentication        7) View logs
3) Set the hostname                8) Install locally and reboot
4) Networking setup                9) Support Menu
5) Register Host to RHN
#options notes
“5) Register Host to RHN” is optional; just configure 1, 2, 3, 4, 6, then choose 8.
“9) Support Menu” has an option to uninstall an existing RHEV-H.
Troubleshooting after RHEV-H has been installed
If RHEV-H successfully connects to RHEV-M, it should appear in the RHEV-M hosts tab with status “Pending Approval”; clicking the “approve” button will finalize the installation. (The “Add host” option only works for a RHEL host used as the hypervisor; RHEV-H, a trimmed-down version of RHEL, has to use the registration flow.)
If for some reason RHEV-H doesn't appear in RHEV-M, check the following first:
 - The RHEV-M Windows 2003 SP2 host has all the latest updates
 - The RHEV-M host name is resolvable, and telnet to the host on ports 80 and 443 works
 - Date and time match on RHEV-H and RHEV-M, and /etc/init.d/ntpd is working
Then try to re-register RHEV-H to RHEV-M:
#re-invoke the Hypervisor Configuration Menu
$setup                      #select option 6 to re-configure hostname for RHEV-M
#restart registration process
/etc/init.d/vdsm-reg restart
#check registration log
/var/log/vdsm-reg/vdsm-reg.log

#Configuration files in RHEV-H
#vdsm registration script
#registers the host to RHEV-M; it seems it doesn't need to keep running once registration is successful
/etc/init.d/vdsm-reg                 #start-up script
/etc/vdsm-reg/vdsm-reg.conf     #configuration file
/var/log/vdsm-reg/vdsm-reg.log    #log file
#Management agent
#by default, listening on port 54321 to communicate with RHEV-M
/etc/init.d/vdsmd
/etc/vdsm/vdsm.conf
/var/log/vdsm/vdsm.log
You are not supposed to create new configuration files in RHEV-H; any new files in /etc/ will be lost after a reboot. To survive a reboot, copy your customized files, e.g. /etc/hosts and /etc/resolv.conf, to /config/etc/ once. The next time RHEV-H boots up, it will synchronize all files in /config/etc/* to /etc.
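The one-time copy step can be sketched as a small helper (the /config layout is taken from the description above; RHEV-H's own sync happens at boot):

```python
import os
import shutil

def persist_config(path: str, config_root: str = "/config") -> str:
    """Copy a file such as /etc/hosts under /config/etc/ so that
    RHEV-H re-syncs it into /etc on the next boot."""
    dest = os.path.join(config_root, path.lstrip("/"))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(path, dest)
    return dest
```

For example, persist_config("/etc/hosts") copies the file to /config/etc/hosts.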
NFS store
- The NFS export must be writable by vdsm:kvm (uid:gid 36:36).
- RHEV-M has a Windows tool to upload ISO files to the ISO domain. The tool goes through 2 steps: it first uploads to the SPM (Storage Pool Manager) host, then moves the files from the SPM host to NFS. You can actually WinSCP files to the NFS export directly, then change the file ownership to vdsm:kvm.
Guest OS notes
- RHEV 2.2 doesn't support auto-starting guest OSes, which means that if RHEV-M and RHEV-H are rebooted, someone has to log in to RHEV-M and click “run” for each VM.
- RHEL 5.x has built-in VirtIO drivers for hard disk and network.
- Windows guests need the virtual floppy file virtio*.vfd copied to the ISO domain and mounted as a floppy (select “run once” and select the file as the floppy drive) in order for Windows to recognize the VirtIO hard disk. Once Windows boots up, install the “Guest tools” for the VirtIO NIC driver.

Saturday, June 11, 2011

Red Hat RHEV vs Vmware ESX

In 2009, Red Hat launched Red Hat Enterprise Virtualization (RHEV) to compete in a commercial virtualization market dominated by VMware. RHEV has two components: Red Hat Enterprise Virtualization Manager (RHEV-M) and the managed hypervisor, which can be RHEV-H (the RHEV hypervisor, a trimmed-down version of RHEL) or full-blown RHEL 5.5 (64-bit) or newer.
Feature-wise, on paper, RHEV doesn't look too bad. But what is revealed if we dig further into the technical details and compare with VMware?
                             RHEV 2.2                      ESX 4
Manager
  Name                       RHEV-M                        vCenter
  Compatible OS              Windows 2003                  Windows XP
                             Windows 2008 R2               Windows 2003
                                                           Windows 2008
                                                           Windows 2008 R2
  Backend DB                 Microsoft SQL Server          Microsoft SQL Server
                                                           Oracle
  Application type           Web application               Windows native application
                             (WPF .xbap application)
  User interface             Web UI                        Web UI
                                                           Windows native application
  CLI [1]                    PowerShell                    PowerShell (PowerCLI)
                                                           vCLI
  SDK & API                  PowerShell                    PowerShell, Perl, C#, Java
Hypervisor
  Type                       Linux kernel (KVM)            Proprietary
  Manager agent              Python script                 Binary daemon
  HA/Migration [3]           YES                           YES
  Manager independent [2]    NO                            YES
  CLI [4]                    NO                            esxcfg-*/vimsh commands
  SDK & API                  NO                            PowerShell, Perl, C#, Java
  Storage type [5]           NFS/iSCSI/FC                  local disk/NFS/iSCSI/FC
Guest OS
  Supported OS [6]           Red Hat Enterprise Linux      All major Linux distributions
                             Windows                       Windows
                                                           Solaris
                                                           Mac OS/BSD
  Clone [7]                  Supported                     Supported
  Snapshot [8]               Limited support               Supported
  Supported hard disk [9]    IDE, VirtIO                   IDE, SCSI
Cost                         ~2/3 of VMware cost           Expensive


NOTES:
[1]  Manager CLI:  RHEV-M PowerShell has fewer number of cmdlets compared to PowerCLI

[2] Manager independent: In my opinion, it is RHEV’s  biggest mistake in design. RHEV-M is the central brain, the hypervisor is dummy host, which means you are NOT supposed to login to hypervisor to do configuration or VM operation,  e.g. add virtual network or start/stop vms. All must be done in RHEV-M. On the other hand, each VMware  ESX host is intelligent by design,  you can perform almost anything by esxcfg*/vimsh commands. ESX host just rely manager for HA and Distributed Resource Scheduling.(if RHEV-M fails, VMs in RHEV-H will not be interrupted, but don’t touch them, because you can’t restart them without RHEV-M)

[3] Hypervisor  HA: RHEV requires a form of fencing method for HA, e.g smart power switch or LOM card to shoot hypervisor in the head.

[4] Hypervisor CLI:  libvirt CLI tools are supported in KVM, but RHEV doesn’t use libvirt.

[5] Storage Type: You can’t utilize RHEV-H local storage, it is not visible in manager.RHEV datacenter  has a "storage type" (NFS/iSCSI/FC)  attribute, only single storage domain with the same type can be attached to datacenter.

[6] Supported guest OS: In paper, RHEL and Windows are the only supported OS, but you can  install almost any x86 OS, because RHEV-H is based on KVM not para-virtualization

[7] Clone: RHEV doesn't call it cloning; you have to choose a template when creating a new VM. VMware supports cloning from either a template or a VM.

[8] Snapshot: You have to shut down a RHEV VM to snapshot it.

[9] VirtIO: RHEL 5.x has a built-in VirtIO driver, and other Linux distributions should ship one as well. For Windows, RHEV provides a virtual floppy file, virtio*.vfd, to be used during installation. Any other OS without a VirtIO driver has to use IDE (SCSI is not supported; VirtIO is supposed to deliver better performance than SCSI anyway).
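To give a feel for the host-side CLI gap noted in [4], here is a hedged sketch of an ESX service-console session. The vmid 16 is a made-up example, and the block checks for the tools first so it is harmless on a non-ESX machine:

```shell
# Sketch of host-side administration on a classic ESX service console.
# vmid 16 is a hypothetical id taken from getallvms output.
if command -v vim-cmd >/dev/null 2>&1; then
    vim-cmd vmsvc/getallvms      # list registered VMs with their vmid
    vim-cmd vmsvc/power.on 16    # power on the VM whose vmid is 16
    esxcfg-vswitch -l            # list vSwitches and port groups
    esxcfg-nics -l               # list physical NICs and link state
else
    echo "Not an ESX host; commands shown for illustration only."
fi
```

Nothing comparable exists on RHEV-H, which is exactly why [3] and [4] matter.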

Conclusion:
In my opinion, RHEV Server is not yet enterprise ready because of limitations [3], [4], and [8]. RHEV Server loses to VMware ESX in almost every feature compared here. However, RHEV does a better job in desktop virtualization thanks to Qumranet, whose roots are in desktop virtualization. (Red Hat acquired Qumranet, from which RHEV-M originated, in 2008.)

It is reported that Red Hat is developing RHEV 3, which will be based on JBoss (Java) on Linux with a PostgreSQL database back end. Hopefully RHEV 3 will redesign RHEV-H to make it "intelligent" by integrating libvirt for CLI ability on the hypervisor.

Wednesday, February 2, 2011

Manage Xen by libvirt tools

libvirt is an open-source API, daemon, and management tool for managing platform virtualization.
It is much easier to use than the Xen native tools for VM creation, network management, and storage management.
Pros:
- Standard, easy, and neat commands for VM creation, network management, and storage management.
- Supports all well-known hypervisors (Linux KVM, Xen, VMware ESX, OpenVZ, ...), so the knowledge is transferable.
- Remote management with TLS encryption and Kerberos authentication.
- API bindings for multiple languages: Python, Perl, Ruby, Java, OCaml, C#, and PHP.
- Operation isolation: stopping the libvirt daemon (version > 0.6.0) won't affect running VMs.
Cons:
- libvirt can't always keep up with development of the underlying hypervisor, so it may not understand new hypervisor features.
- An additional management layer introduces availability and security concerns. Although stopping the libvirt daemon won't affect VMs, if the libvirt daemon fails on hypervisor reboot, the network bridge managed by libvirt won't be created. This can be quickly remedied with a simple command:
$brctl addbr br-name; ifconfig br-name IP up
Where does libvirt save the VM configuration file?
It depends on the hypervisor. For Xen, libvirt uses the Xen API to save it to xenstore (/var/lib/xenstored). Because xenstore is a Xen component, the Xen native tools can start VMs without the libvirt daemon.
The following script can be used to examine the VM configuration:
#!/bin/bash
# Recursively dump every xenstore key under the given path.
dumpkey() {
    local param=${1}
    local key
    local result
    result=$(xenstore-list "${param}")
    if [ "${result}" != "" ]; then
        for key in ${result}; do dumpkey "${param}/${key}"; done
    else
        echo -n "${param}="
        xenstore-read "${param}"
    fi
}

for key in /vm /local/domain /tool; do dumpkey "${key}"; done
Install libvirt on Debian
$apt-get install libvirt-bin virtinst
Enable xend-unix-server so that xend can talk to libvirt
$ grep xend-unix-server /etc/xen/xend-config.sxp
(xend-unix-server yes)

$/etc/init.d/xend restart
Define new network bridge
root@xen4:/etc/xen# cat /tmp/net.xml
<network>
<name>private</name>
<bridge name="virbr2" />
<ip address="192.168.152.1" netmask="255.255.255.0">
</ip>
</network>
Type "virsh" to enter a virsh interactive prompt
virsh # net-define /tmp/net.xml
Network private defined from /tmp/net.xml
virsh # net-autostart  private
Network private marked as autostarted
virsh # net-start  private
Network private started
virsh # net-list
Name                 State      Autostart
-----------------------------------------
private              active     yes
root@xen4:/# ifconfig  virbr2
virbr2    Link encap:Ethernet  HWaddr de:49:4e:43:c5:5d
inet addr:192.168.152.1  Bcast:192.168.152.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Install a CentOS para-virtualized guest.
#Prepare sparse disk file with qemu-img tool
$qemu-img create -f raw /data/pv2.raw 2G
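The raw file above is created sparse, so it consumes almost no disk space until the guest writes to it. A quick way to verify sparseness, sketched with a throwaway file and coreutils only (in case qemu-img isn't installed):

```shell
# Create a sparse file and compare apparent size with actual block usage.
truncate -s 2G /tmp/sparse-demo.raw
ls -lh /tmp/sparse-demo.raw    # apparent size: 2.0G
du -h /tmp/sparse-demo.raw     # actual usage: close to zero
rm -f /tmp/sparse-demo.raw
```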
A para-virtualized guest can't use a CD-ROM as the install source, so in this example I mount the ISO file to a web server directory and install over HTTP.
$virt-install \
--paravirt \
--name pv2 \
--ram 256 \
--disk path=/data/pv2.raw,size=2,format=raw \
--os-type=linux --os-variant=rhel5.4 \
--nographics \
--network network=private \
--location http://192.168.152.1/pkgs/
After the VM has been created, you can use the Xen native tool /usr/sbin/xm or the libvirt virsh command to start/stop the VM. But any configuration change requires the virsh edit commands (edit, net-edit, pool-edit, vol-edit, iface-edit).
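A hedged sketch of the day-2 workflow for the pv2 guest created above. The query is guarded so it only runs where virsh exists and can actually reach a hypervisor; the state-changing commands are left as comments to run deliberately:

```shell
# Query libvirt only when virsh is present and can connect.
if command -v virsh >/dev/null 2>&1 && virsh version >/dev/null 2>&1; then
    virsh list --all      # show running and shut-off domains
else
    echo "virsh unavailable; commands shown for illustration only."
fi
# State-changing operations for the pv2 domain (run deliberately):
#   virsh start pv2          # boot the guest
#   virsh shutdown pv2       # graceful shutdown via the guest OS
#   virsh edit pv2           # edit the domain XML in $EDITOR
#   virsh net-edit private   # edit the "private" network defined earlier
```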