Thursday, November 29, 2012

Authenticate RHEL 6 Linux users by Windows 2008 R2 AD

The nss_ldap package used for LDAP authentication in RHEL 5 is obsolete in RHEL 6; its replacement is nss-pam-ldapd. However, the preferred method for LDAP authentication in RHEL 6 is the System Security Services Daemon (SSSD) (in fact, RHEL 5.6 and later also support SSSD).
SSSD's key features:
- Credential caching: users can still log in when the LDAP server is offline.
- Persistent connections, reducing the overhead of opening a new socket for each request.
- Support for multiple LDAP/NIS domains.

Install SSSD packages

$yum install sssd sssd-client


Run the following command, which will make the necessary changes to /etc/krb5.conf, /etc/sssd/sssd.conf, /etc/nsswitch.conf, and /etc/pam.d/:



$authconfig --enablesssd --ldapserver=ldap://adc.ad.example.com --ldapbasedn="OU=_USERS,DC=ad,DC=example,DC=com" --enablerfc2307bis --enablesssdauth --krb5kdc=adc.ad.example.com --krb5realm=AD.EXAMPLE.COM --disableforcelegacy --enablelocauthorize --enablemkhomedir   --updateall


All files should be updated automatically; only /etc/sssd/sssd.conf needs to be customized. The following is an example file with the minimum parameters needed.



#cat /etc/sssd/sssd.conf
[sssd]
config_file_version = 2
services = nss, pam
domains = default
[nss]
#debug_level=7
[pam]
[domain/default]
ldap_id_use_start_tls = False
cache_credentials = True
#Without enumerate = True, users won't be shown in 'getent passwd' output.
enumerate = True
id_provider = ldap
auth_provider = krb5
chpass_provider = krb5
ldap_schema = rfc2307bis
ldap_force_upper_case_realm = True
ldap_user_object_class = user
ldap_group_object_class = group
ldap_user_gecos = displayName
ldap_user_home_directory = unixHomeDirectory
ldap_uri = ldap://adc.ad.example.com
ldap_search_base = OU=_USERS,DC=ad,DC=example,DC=com
ldap_user_search_base = OU=_USERS,DC=ad,DC=example,DC=com
ldap_group_search_base = OU=_GROUPS,DC=ad,DC=example,DC=com
ldap_default_bind_dn = CN=svc_ldap_client,OU=MGT,OU=_USERS,DC=ad,DC=example,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = P@ss123
ldap_tls_cacertdir = /etc/openldap/cacerts
krb5_server = adc.ad.example.com
krb5_kpasswd = adc.ad.example.com
krb5_realm = AD.EXAMPLE.COM
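
Once sssd.conf is in place, restart the service and verify that AD users resolve; a minimal sketch (testuser is an example AD account; sssd refuses to start unless the config file is readable by root only):

$chmod 600 /etc/sssd/sssd.conf
$chkconfig sssd on
$service sssd restart
$getent passwd testuser
$id testuser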

Authenticate RHEL 5 Linux users by Windows 2008 R2 AD

My previous post was tested against Windows 2003 AD, which uses a non-RFC-compliant schema; Windows 2003 R2 and later are RFC2307bis compliant. The following was tested on Windows 2008 R2, but it should work for Windows 2003 R2 and Windows 2008 as well.
The following uses nss_ldap to do AD authentication via Kerberos; if you use RHEL 5.6 or later, you may consider the System Security Services Daemon (SSSD) instead, which offers many great features.
Setup Windows AD
Windows 2008 R2 AD has a built-in component that performs the same function as "Windows Services for UNIX" in Windows 2003; it is named "Identity Management for UNIX".
Install "Identity Management for UNIX" by clicking "Add Role Services" under the "Active Directory Domain Services" role. Choose all three sub-components of Identity Management for UNIX. (Note: QLogic SANsurfer software conflicts with RPC services; remove it before installing Identity Management for UNIX.)
Set up the ldapbind user, create a test user and test group, and set the Unix attributes as in the previous post.
Setup configuration files
1. Configure /etc/ldap.conf
The nss_map_attribute entries for Windows 2008 R2 are different from Windows 2003; the following is a sample file.
#cat /etc/ldap.conf
base OU=_USERS,DC=AD,DC=example,DC=com
BINDDN CN=svc_ldap_client,OU=MGT,OU=_USERS,DC=AD,DC=example,DC=com
BINDPW Pass123

timelimit 60
bind_timelimit 10
#By default, if the LDAP server is not reachable, it will retry for a long time before giving up;
#nss_reconnect_tries 1 limits this to less than a minute.
nss_reconnect_tries 1
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup  group
nss_map_attribute uid sAMAccountName
#nss_map_attribute uidNumber uidNumber
#nss_map_attribute gidNumber gidNumber
nss_map_attribute gecos         name
nss_map_attribute homeDirectory unixHomeDirectory
#nss_map_attribute loginShell loginShell
nss_map_attribute shadowLastChange pwdLastSet
nss_base_password OU=_USERS,DC=AD,DC=example,DC=com
nss_base_shadow OU=_USERS,DC=AD,DC=example,DC=com
nss_base_group OU=_GROUPS,DC=AD,DC=example,DC=com
pam_login_attribute sAMAccountName
pam_filter objectclass=User
pam_password ad
nss_initgroups_ignoreusers root,ldap,named,avahi,haldaemon,dbus,radvd,tomcat,radiusd,news,mailman,nscd,gdm
#adc.ad.example.com is alias DNS name load balanced to DCs
uri ldap://adc.ad.example.com
ssl no
tls_cacertdir /etc/openldap/cacerts
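
Before wiring up PAM, it is worth confirming that the bind DN and search base actually work; a quick check with ldapsearch (assumes the openldap-clients package is installed and testuser is an example AD account):

#ldapsearch -x -H ldap://adc.ad.example.com -D "CN=svc_ldap_client,OU=MGT,OU=_USERS,DC=AD,DC=example,DC=com" -w Pass123 -b "OU=_USERS,DC=AD,DC=example,DC=com" "(sAMAccountName=testuser)" sAMAccountName uidNumber unixHomeDirectory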

2. Configure /etc/krb5.conf; there is no difference from Windows 2003 AD.

#cat /etc/krb5.conf 
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
[libdefaults]
 default_realm =   AD.EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 forwardable = yes
[realms]
   AD.EXAMPLE.COM = {
  kdc = adc.ad.example.com
  admin_server=adc.ad.example.com
 }
[domain_realm]
 example.com = AD.EXAMPLE.COM
 .example.com = AD.EXAMPLE.COM
[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }

3. You also need to modify the files in /etc/pam.d/ and /etc/nsswitch.conf. Once you have copied the ldap.conf and krb5.conf files, run the following command to automate those changes.

$authconfig --enablecache  --enableldap --usemd5 --useshadow  --enablelocauthorize --enablekrb5  --enablemkhomedir  --update
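
Once authconfig completes, verify that AD users resolve and that Kerberos authentication works; a minimal sketch (testuser is an example AD account):

$getent passwd testuser
$kinit testuser@AD.EXAMPLE.COM
$su - testuser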

Wednesday, November 21, 2012

Enable Windows Active Directory Authentication in vSphere 5.1.

vSphere Single Sign On (SSO) is a new feature in vSphere 5.1. Because SSO controls the authentication service, you can no longer add a new authentication provider to vCenter with the standard vSphere Client; it has to be done in the vSphere Web Client, which can talk to the SSO service.

Steps to add a Windows Active Directory provider:
1. Create a generic user in AD for LDAP searches, and decide on the user and group base DNs.
2. Install the Web Client from the vCenter installation media; like the vSphere Client, it doesn't need to be installed on the vCenter server.
3. Launch the Web Client at https://client-ip:9443/vsphere-client and log in.
The account used for login is important: if you installed the SSO service while logged in with a local account, that local account can log in to the Web Client, but it doesn't have permission to configure SSO. You have to log in with the default SSO account "admin@System-Domain" created during installation.
4. Navigate to Administration / Sign-On and Discovery / Configuration (the Configuration node won't be shown if you are logged in with a local Windows account), and click the "+" sign to add identity sources.
The login credentials will be sent in clear text with ldap; if this is a concern, enable ldaps by creating a certificate.
The username should be in LDAP DN syntax; find the exact string with the ADSI Edit tool in AD.


Wednesday, September 19, 2012

Detect increased new size of existing LUN in RHEL by rescanning FC port without reboot

If the usual command:

echo "- - -" > /sys/class/scsi_host/hostX/scan

doesn't work for an FC target, you can try this:

echo 1 > /sys/devices/{PCI-DEVICE-ID}/rescan

The path is the device path to the FC target, which may have multiple paths. The scsi-rescan (rescan-scsi-bus.sh) tool in sg3_utils works great for new LUNs, but it couldn't detect the new size of an existing LUN either. The following procedure was tested on RHEL 6.3.



#Find the PCI ID of your device
>lspci | grep -i qlogic
15:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
1a:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
#Find the file rescan in /sys/devices by filtering PCI ids found above
>find /sys/devices  -name rescan  | egrep '15:00|1a:00'
/sys/devices/pci0000:00/0000:00:03.0/0000:15:00.0/rescan
/sys/devices/pci0000:00/0000:00:03.0/0000:15:00.0/host1/rport-1:0-0/target1:0:0/1:0:0:0/rescan
/sys/devices/pci0000:00/0000:00:03.0/0000:15:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:0/rescan
/sys/devices/pci0000:00/0000:00:07.0/0000:1a:00.0/rescan
/sys/devices/pci0000:00/0000:00:07.0/0000:1a:00.0/host2/rport-2:0-0/target2:0:0/2:0:0:0/rescan
/sys/devices/pci0000:00/0000:00:07.0/0000:1a:00.0/host2/rport-2:0-1/target2:0:1/2:0:1:0/rescan
#kick off rescan by updating rescan file in each port
echo 1 > /sys/devices/pci0000:00/0000:00:03.0/0000:15:00.0/host1/rport-1:0-0/target1:0:0/1:0:0:0/rescan
echo 1 > /sys/devices/pci0000:00/0000:00:03.0/0000:15:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:0/rescan
echo 1 > /sys/devices/pci0000:00/0000:00:07.0/0000:1a:00.0/host2/rport-2:0-0/target2:0:0/2:0:0:0/rescan
echo 1 > /sys/devices/pci0000:00/0000:00:07.0/0000:1a:00.0/host2/rport-2:0-1/target2:0:1/2:0:1:0/rescan
# The messages log file shows that the new size was detected.
>tail -f /var/log/messages
Sep 19 09:56:10 server1 kernel: sd 1:0:1:0: [sdc] 12884901888 512-byte logical blocks: (6.59 TB/6.00 TiB)
Sep 19 09:56:10 server1 kernel: sdc: detected capacity change from 5497558138880 to 6597069766656
Sep 19 10:05:57 server1 kernel: sd 1:0:0:0: [sdb] 15032385536 512-byte logical blocks: (7.69 TB/7.00 TiB)
Sep 19 10:05:57 server1 kernel: sdb: detected capacity change from 6597069766656 to 7696581394432
Sep 19 10:05:57 server1 kernel: sd 1:0:1:0: [sdc] 15032385536 512-byte logical blocks: (7.69 TB/7.00 TiB)
Sep 19 10:05:57 server1 kernel: sdc: detected capacity change from 6597069766656 to 7696581394432
Sep 19 10:05:57 server1 kernel: sd 2:0:0:0: [sdd] 15032385536 512-byte logical blocks: (7.69 TB/7.00 TiB)
Sep 19 10:05:57 server1 kernel: sdd: detected capacity change from 6597069766656 to 7696581394432
Sep 19 10:05:58 server1 kernel: sd 2:0:1:0: [sde] 15032385536 512-byte logical blocks: (7.69 TB/7.00 TiB)
Sep 19 10:05:58 server1 kernel: sde: detected capacity change from 6597069766656 to 7696581394432
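
The per-path echo commands above can also be scripted; a small sketch assuming the same QLogic PCI IDs found by lspci:

#rescan every FC target path of the two HBAs in one loop
>for f in $(find /sys/devices -name rescan | egrep '15:00|1a:00' | grep target); do echo 1 > "$f"; done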

Monday, September 3, 2012

Create GPT partition for LVM using parted tool

The traditional MBR (msdos) disk label is limited to 2^32 512-byte sectors (2 TiB) of capacity and 15 partitions (including logical partitions), while the GUID Partition Table (GPT) supports 2^64 sectors (8 ZiB with 512-byte sectors) and 128 partitions by default.

In Linux, fdisk doesn't support GPT; parted is the common built-in tool for GPT.

#mpathb is the multipath device name of an FC SAN LUN in my test environment
>parted  /dev/mapper/mpathb
(parted) mklabel gpt
(parted) mkpart primary ext4 1024kb 2tb
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel?
#This warning indicates the start position of the partition may not be aligned with the physical sectors of the
#hard disk. This is very important for hardware RAID: the start position must be n*stripe size.
#See also: http://honglus.blogspot.com.au/2009/08/align-partitions-on-stripe-boundary-for.html
#It may also hold true for a single hard disk, because even single hard disks have 2K/4K sector sizes nowadays.
#To fix the issue, just change the unit from SI to the IEC 60027-2 standard:
# k- stands for kilo, meaning 1,000 in the metric (SI) prefix system
# ki- stands for kilobinary ("kibi-"), meaning 1,024 in the IEC 60027-2 standard
(parted) help unit
  unit UNIT                                set the default unit to UNIT
        UNIT is one of: s, B, kB, MB, GB, TB, compact, cyl, chs, %, kiB, MiB, GiB, TiB
(parted) mkpart primary ext4 1024KiB 8TiB
#the values are accepted without any warning
(parted) print
..
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  8796GB  8796GB               primary
#1049kB is shown because the default unit is kB; change it to KiB
(parted) unit KiB
(parted) print
..
Number  Start    End            Size           File system  Name     Flags
 1      1024kiB  8589934592kiB  8589933568kiB               primary 
#Set the LVM flag
#GPT has reserved GUIDs for different partition types, e.g. LVM = E6D6D379-F507-44C2-A23C-238F2A3DF928
(parted) set 1 lvm on
(parted) p
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/mpathb: 19527106560kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start    End            Size           File system  Name     Flags
 1      1024kiB  8589934592kiB  8589933568kiB               primary  lvm
#create LVM physical volume as usual.
>pvcreate /dev/mapper/mpathb1
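
From here the usual LVM steps apply; a minimal sketch (the volume group, logical volume and size are example values):

>vgcreate vg_san /dev/mapper/mpathb1
>lvcreate -L 4T -n lv_data vg_san
>mkfs.ext4 /dev/vg_san/lv_data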

Saturday, August 11, 2012

Clone Windows 2008 R2 on UEFI based Servers

Unified Extensible Firmware Interface (UEFI) has been widely adopted by x86 server manufacturers such as IBM and Dell to supersede BIOS, but it presents a challenge for operating system cloning: the boot code lives in EFI files rather than the MBR, and even if the EFI files are replicated by file copy or sector copy, they need to be updated for the new hardware. The following example demonstrates cloning Windows 2008 R2 with the imagex tool on IBM System x servers.

On the source computer, capture the two partitions after booting from WinPE

#capture the Windows system partition after it has been sysprepped
$imagex /compress fast  /capture c:  $networkshare:\boot.wim "w2k8 R2 64bit"
#capture the EFI system partition
#drive letter S: was assigned with diskpart commands: diskpart; select disk 0; list volume; select volume 2; assign letter=s
$imagex /compress fast  /append  s:  $networkshare:\boot.wim "w2k8 R2 64bit eftsys"


On the target computer, create the 3 mandatory partitions after booting from WinPE



#The MSR partition is required for partition operations like converting to a dynamic disk or encrypting a partition.
#Run the script with "diskpart /s diskpart.txt"
$type diskpart.txt
select disk 0
clean
convert gpt
rem == 1. System partition =========================
create partition efi size=100
format quick fs=fat32 label="System"
select partition 1
assign letter="s"
rem == 2. Microsoft Reserved (MSR) partition =======
create partition msr size=128
rem == 3. Microsoft Windows partition =======
create partition primary size=102400
select partition 3
format quick fs=ntfs label="Windows"
assign letter="c"


# On the target computer, apply the images

imagex /apply  $networkshare\boot.wim 1 c: && imagex /apply $networkshare\boot.wim 2 s:


On the target computer, update the BCD store in the EFI system partition



bcdedit -store S:\EFI\Microsoft\Boot\BCD /set {bootmgr} device partition=s:
bcdedit -store S:\EFI\Microsoft\Boot\BCD /set {memdiag} device partition=s:
bcdedit -store S:\EFI\Microsoft\Boot\BCD /set {default} device partition=c:
bcdedit -store S:\EFI\Microsoft\Boot\BCD /set {default} osdevice partition=c:
bcdedit -store S:\EFI\Microsoft\Boot\BCD /set {fwbootmgr} displayorder {bootmgr} /addfirst 
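
To confirm the updated entries before rebooting, the offline store can be listed (a quick sanity check):

bcdedit -store S:\EFI\Microsoft\Boot\BCD /enum all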


Reboot the target computer and press F1 to enter setup

Select the boot file by navigating to


boot manager -> boot from file ->EFI->boot->bootx64.efi


(This file is actually S:\EFI\Microsoft\Boot\bootx64.efi)


After the file is selected, it will call s:\EFI\Microsoft\Boot\bootmgfw.efi, which in turn calls c:\Windows\system32\winload.efi to boot Windows. Additionally, a new boot entry named "Windows Boot Manager" will be added as a new boot option at the top of the boot list in the UEFI firmware, so next time the server will boot into Windows automatically without manual intervention.

Friday, July 6, 2012

Setup VMware vCenter 5 to use Oracle 11g R2 database

VMware vCenter supports DB2, Oracle, or MS SQL Server as the backend database. The built-in database for vCenter on Windows is SQL Server 2008 Express, which has limits on disk space and memory and is not suitable for the enterprise. For enterprise deployments it is recommended to use a full database engine such as Oracle 11g R2.
Install Oracle Database

Select a compatible version of Oracle as listed on the VMware website:
http://www.vmware.com/resources/compatibility/sim/interop_matrix.php

The versions used in this test:
   - vCenter 5.0.0 build 623373
   - Oracle 11g R2 11.2.0.3.0

Setup Oracle database for vCenter

- Create an Oracle SQL login account for vCenter
- Estimate the vCenter database tablespace size requirement
vCenter has a tool to estimate the size:
vCenter->Administration->Server Settings->Statistics
For example, keeping data for 500 VMs for 1 year needs ~5 GB of storage.

Old data can be purged automatically by setting up a retention policy:
vCenter->Administration->Server Settings->Database Retention Policy

Extract from “vSphere Installation and Setup” document

#1 Log in to a SQL*Plus session with the system account.
#2 Run the following SQL command to create a vCenter Server database user with the correct permissions.
#The script is located in the vCenter Server {installation media}/vcenter/dbschema/DB_and_schema_creation_scripts_oracle.txt file.
#In this example, the user name is VPXADMIN.
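#Note: the CREATE USER statement below assumes a tablespace named VPX already exists.
#A minimal sketch to create it first (the datafile path is an example; adjust it to your layout):
CREATE SMALLFILE TABLESPACE "VPX" DATAFILE '/u01/app/oracle/oradata/orcl/vpx01.dbf'
SIZE 1G AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;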
CREATE USER "VPXADMIN" PROFILE "DEFAULT" IDENTIFIED BY "oracle" DEFAULT TABLESPACE
"VPX" ACCOUNT UNLOCK;
grant connect to VPXADMIN;
grant resource to VPXADMIN;
grant create view to VPXADMIN;
grant create sequence to VPXADMIN;
grant create table to VPXADMIN;
grant create materialized view to VPXADMIN;
grant execute on dbms_lock to VPXADMIN;
grant execute on dbms_job to VPXADMIN;
grant select on dba_tablespaces to VPXADMIN;
grant select on dba_temp_files to VPXADMIN;
grant select on dba_data_files to VPXADMIN;
grant unlimited tablespace to VPXADMIN;
#By default, the RESOURCE role has the CREATE PROCEDURE, CREATE TABLE, and CREATE
#SEQUENCE privileges assigned. If the RESOURCE role lacks these privileges, grant them to the vCenter
#Server database user.
#NOTE Instead of granting unlimited tablespace, you can set a specific tablespace quota. The
#recommended quota is unlimited with a minimum of at least 500MB. To set an unlimited quota, use the
#following command.
#alter user "VPXADMIN" quota unlimited on "VPX";
#If you set a limited quota, monitor the remaining available tablespace to avoid the following error.
#ORA-01536: space quota exceeded for tablespace '<tablespace>'
#3 (Optional) After you have successfully installed vCenter Server with the Oracle database, you can revoke
#the following privileges.
revoke select on dba_tablespaces from VPXADMIN;
revoke select on dba_temp_files from VPXADMIN;
revoke select on dba_data_files from VPXADMIN;


Prepare Windows server for vCenter



- Install Oracle ODBC client

Download both the basic and the ODBC instant client packages from the Oracle website.


instantclient-basic-windows.x64-11.2.0.3.0.zip


instantclient-odbc-windows.x64-11.2.0.3.0.zip



#unzip instantclient-basic-windows.x64-11.2.0.3.0.zip  to: 
C:\Program Files\Oracle\instantclient_11_2
#unzip instantclient-odbc-windows.x64-11.2.0.3.0.zip to the same directory as the basic instant client
#run odbc_install.exe from the command line
C:\Program Files\Oracle\instantclient_11_2>odbc_install.exe
Oracle ODBC Driver is installed successfully. 
mkdir C:\Program Files\Oracle\instantclient_11_2\network\admin
#copy tnsnames.ora from the Oracle server to that directory
#Add a new Windows system variable "ORACLE_HOME=C:\Program Files\Oracle\instantclient_11_2"
#The system variable takes effect immediately for new processes; open a new command prompt to check:
C:\ >echo %ORACLE_HOME%
C:\Program Files\Oracle\instantclient_11_2 
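
A sample tnsnames.ora entry for reference (the alias, host name and service name are examples; use the values from your Oracle server):

VPXDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradb.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = vpxdb.example.com)
    )
  )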


Open "ODBC Data Sources" in Administrative Tools, create a "System DSN", select the Oracle driver, and type in the service name defined in tnsnames.ora, the username, etc.

Make sure the "Test Connection" result is OK.


Install vCenter

Follow the installation wizard to install vCenter; you may receive a warning that the Oracle client needs to be updated, select OK to continue.

Update the vCenter ojdbc client


The default JDBC driver shipped with vCenter may be old (you can check its version by renaming ojdbc5.jar to ojdbc5.zip and opening it to inspect the META-INF manifest).



cd /d "C:\Program Files\VMware\Infrastructure\tomcat\lib\"
#back up the original file
copy ojdbc5.jar ojdbc5.jar.orig
#overwrite it with the new ojdbc5.jar from the instant client
copy "C:\Program Files\Oracle\instantclient_11_2\ojdbc5.jar" ojdbc5.jar
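
Restart the vCenter services so the new driver is picked up; a sketch assuming the default vCenter 5.0 service names (adjust if yours differ):

net stop vctomcat
net stop vpxd
net start vpxd
net start vctomcat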

Monday, June 4, 2012

Upgrade QLogic FC HBA firmware on IBM System x server

I have been researching methods to upgrade device firmware; my aim was to upgrade the firmware before installing the operating system and to perform the upgrade remotely through the IBM RSA console.
For IBM system firmware (IMM/UEFI/FPGA/DSA) and IBM peripheral device firmware, IBM ToolsCenter Bootable Media Creator (BoMC) is a great tool: it creates a boot CD that can upgrade IBM system and IBM peripheral device firmware automatically.
However, for IBM OEM products like the QLogic HBA, BoMC (v9.2) can't detect the device (maybe due to a bug), so you have to use the QLogic DOS flash tool to do the upgrade.

Upgrade Qlogic FC HBA QLE 2560 firmware
1) Visit the QLogic website (http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/Product_detail.aspx?oemid=376) to download the firmware package, which includes the firmware and the DOS flash utility.
2) Download a DOS boot diskette image (http://www.allbootdisks.com/downloads/Disks/MS-DOS_Boot_Disk_Download47/Diskette Images/Dos3.3.img)
3) Download the WinImage software (http://www.winimage.com/) and edit the image by adding the QLogic firmware and flash tools (VPD.EXE, Q25AF232.BIN, update.bat, FLASUTIL.EXE)
4) Attach the diskette image by logging in to the IBM RSA console -> remote control -> tools -> launch virtual media -> add image. Click "map" to upload the image, then mount it.
5) Boot the server to DOS from the floppy and run update.bat to update the firmware

Thursday, May 17, 2012

VMware vDS alternative, Cisco Nexus 1000V quickstart

Cisco Nexus 1000V is a virtual switch running Cisco NX-OS software; it is similar to the vSphere Distributed Switch.

The Cisco Nexus 1000V has two main components:
- Virtual supervisor module (VSM)
A VM running on a vSphere ESXi server (either a standalone ESXi server or a shared ESXi server hosting both the VSM and a VEM)
Provides the CLI for managing the Nexus 1000V switch
Controls multiple VEMs as a single network device
- Virtual Ethernet module (VEM)
An add-on module installed on the ESXi hypervisor, which controls the vem daemons
A kind of vSphere distributed switch
Independent of the VSM in terms of operation: if the VSM fails, the VEM continues to forward traffic, and parts of its configuration can even be managed with vemcmd.
Cisco N1000V specific traffic types:
- Management traffic: traffic for the VSM management interface and for VMware vCenter Server falls into this category. VMware vCenter Server requires access to the VMware ESX management interface to monitor and configure the VMware ESX host. Management traffic usually has low bandwidth requirements, but it should be treated as high-priority traffic.
- Control traffic: control traffic is generated by the Cisco Nexus 1000V Series and exchanged between the primary and secondary VSMs as well as between the VSMs and VEMs. It requires little bandwidth (less than 7 MB) but demands absolute priority.
- Packet traffic: packet traffic transports selected packets to the VSM for processing. The bandwidth required for the packet interface is extremely low, and its use is intermittent. If the Cisco Discovery Protocol and IGMP features are turned off, there is no packet traffic at all.
- System VLAN: the system VLAN enables the VEM to forward traffic even when communication with the VSM is lost. The system VLAN is mandatory for the above three types of traffic and for the VMware management interface; it is also recommended for other vmkernel traffic, e.g. vMotion and iSCSI/NFS.
Cisco N1000V requirements:
vSphere ESX/ESXi 4 or higher (check the compatibility guide on the Cisco website for details)
vSphere ESX/ESXi must have an Enterprise Plus license (the N1000V is a type of distributed switch)
vSphere vCenter (the VSM needs to install a plugin into vCenter)
The ESX/ESXi host must have at least 2 NICs if you plan to install both the VSM and VEM on the same host
The N1000V VM must use thick-provisioned disks and its network interfaces must use the E1000 adapter
Cisco N1000V pros and cons
- Pros:
Because the Cisco N1000V runs Cisco NX-OS, it offers additional features over the vSphere distributed switch.
Central configuration through the NX-OS CLI, which feels just like a physical switch; e.g. every live VM's interface can be seen in "show run", and an access list can be applied to the interface.
True end-to-end QoS: apart from being allocated specific bandwidth by a policy-map, traffic leaving the Cisco N1000V is marked with a DSCP value, which the upstream Cisco switch understands. VMware NetIOC offers bandwidth allocation by "shares", but it is local to the VMware hypervisor only. (Update: vSphere 5 supports IEEE 802.1p CoS values, which makes end-to-end QoS possible.)
True LACP port-channels; VMware doesn't support LACP bonding without the Cisco N1000V.
- Cons:
Additional license cost.
Unlike the distributed switch, which is built into VMware, the VSM is a third-party VM. Although the VSM supports HA and a VSM failure doesn't stop the VEM from functioning, without the VSM it is impossible to make configuration changes.

Cisco n1000v deployment procedures

1) Download software
Download a free trial of the ESXi host and vCenter from the VMware site, and a free trial of the N1000V from the Cisco website (check the compatibility guide to determine the exact versions).
In my test, I used
  - VMware ESXi 5.0.0 build-474610
  - VMware vCenter Server 5.0.0 build-623373
  - Cisco N1000v version 4.2(1)SV1(5.1)
NOTE: the N1000V installation procedure may vary between versions; the following is for version 4.2(1)SV1(5.1).
2) Install ESXi and vCenter
Create 3 port groups for the management, control, and packet traffic of the N1000V. (You can create a separate VLAN for each type, but sharing the same VLAN as the ESXi host's management interface is sufficient, so all three port groups use the same VLAN ID.)
3) Install N1000V VSM
Unzip the downloaded Cisco N1000V package, connect to vCenter, and deploy the OVA file located in "Nexus1000v.4.2.1.SV1.5.1\VSM\Install". Follow the wizard to deploy the OVA. (NOTE: use the default settings; the disk type must use thick provisioning and the NIC must use E1000.)
Start up the N1000V VM, log in to the console with the credentials supplied earlier, and run "setup" to start the wizard that configures the N1000V. Most options are self-explanatory; the following are worth noting:
Configure Advanced IP options (yes/no)? [n]: no
Enable the ssh service? (yes/no) [y]: yes
Configure svs domain parameters? (yes/no) [y]: yes
Enter SVS Control mode (L2 / L3) : L3
#(VEM-VSM communication can operate in Layer 2 mode or Layer 3 mode; L3 is recommended.
#The N1000V itself is a Layer 2 device; IP routing is not supported.)

4) Establish connection between VSM and vCenter

#Launch the JAVA installer
C:\Nexus1000v.4.2.1.SV1.5.1\VSM\Installer_App>java -jar Nexus1000V-install.jar VC
#Follow the wizard to establish the connection between the VSM and vCenter; the result is that a new distributed switch will be created in the "networking" view in vCenter.
Verify the connection in N1000V CLI:
nv1> show svs connections
….
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 5.0.0 build-623373
vc-uuid: F1D0CEBA-C365-4F55-830D-A0B9BB6F8520
#Don't continue to the next step until a successful connection is seen.


5) Install N1000V VEM


C:\Nexus1000v.4.2.1.SV1.5.1\VSM\Installer_App>java -jar Nexus1000V-install.jar VEM
#Launch the Java installer, which will connect to vCenter and the VSM to push the VEM module to the ESXi host.
#You can also install the VEM module manually:
C:\Nexus1000v.4.2.1.SV1.5.1\VEM\cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib
#Transfer the file to the ESXi 5 host and run
ESXi>esxcli software vib install -v /tmp/cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib
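#After installation, a quick way to confirm the VEM module is loaded on the host:
ESXi>vem status -v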

6) Establish connection between VEM and VSM

#Because the VEM and VSM are on the same host, you need to put the VEM and VSM on separate physical NICs in order to migrate from the standard switch to the N1000V switch.
#Create port profile on N1000V
#create an uplink profile to be linked to ESXi host pNIC
# note the type is Ethernet and switchport is trunk
port-profile type ethernet vm-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 3-10
no shutdown
system vlan 3,5-6
state enabled
#Create port-group for vmkernel management interface.
#note the type is vethernet; "capability l3control" and system vlan are mandatory
port-profile type vethernet L3vmkernel
capability l3control
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
# the following 3 profiles are for the N1000V's own port groups; system vlan is mandatory
port-profile type vethernet ds_ctl
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
port-profile type vethernet ds_mgt
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
port-profile type vethernet ds_pkt
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
#the following is for generic VM traffic; the vCenter VM will be migrated to this port group in the first stage
#system vlan is optional
port-profile type vethernet vmdata
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled

After successful execution of the above commands, 5 objects will be created under the switch in the "networking" view.
Right-click the switch and select "add host"; under "select physical adapters" choose "vmnic1", and for the uplink port group select "vm-uplink".
Don't migrate vmk0 yet.
In "migrate virtual machine networking", migrate the N1000V VM from the standard switch to its own N1000V switch, and migrate the vCenter VM to the "vmdata" port group.
Make sure the N1000V VM and the vCenter VM were migrated successfully by checking "show svs connections".

The next step is to migrate vmk0.

Click the N1000V switch; in the configuration tab select "manage hosts" and migrate vmk0 to the "L3vmkernel" port group. Now everything has been migrated from vmnic0 to vmnic1 and vmnic0 is spare, so you can create a port-channel in the N1000V and then migrate vmnic0 as well.

Only after a successful migration of vmk0 is the VEM-to-VSM connection established and the VEM module seen in the switch:

nv1> show module
Mod Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
3    248    Virtual Ethernet Module           NA                  ok


Other notes:

In a VMware standard switch, port groups sit under a vSwitch, which in turn is linked to pNICs, so there is a clear one-to-one mapping between a port group and its pNICs. In the N1000V switch, the relationship between a port group and a pNIC follows the VLAN configuration instead.

In a VMware standard switch, a port group can be tagged for vMotion traffic directly in the GUI. For the N1000V you have to do this with the following steps:
Switch to the "hosts and clusters" view, click the host, in the configuration tab click the "vSphere Distributed Switch" view, click "manage virtual adapters", add a vmkernel adapter, and in the connection settings select "use this virtual adapter for vMotion".


#Sample QoS configuration bound to a port-group
policy-map type qos po-vmdata
class class-default
police cir 1500 mbps bc 200 ms conform transmit violate set dscp dscp table pir-markdown-map
port-profile type vethernet vmdata
service-policy input po-vmdata
service-policy output po-vmdata
#Control the VEM on the ESXi host directly when the VSM is not available.
#Unblock a port by defining the correct system vlan
ESXi > vemcmd show port
LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
49                DOWN   UP   BLK       0        testlinux.eth0
ESXi > vemset system-vlan 3 ltl 49
ESXi > vemcmd show port
LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
49                UP   UP    FWD       0        testlinux.eth0

Cisco Nexus 1000V Series Switches download and document links
http://www.cisco.com/en/US/products/ps9902/index.html