Cisco Nexus 1000V is a virtual switch running Cisco NX-OS software; it is similar to the vSphere Distributed Switch.
The Cisco Nexus 1000V has two main components:
- Virtual supervisor module (VSM)
A VM running on a vSphere ESXi server (either a standalone ESXi server or a shared ESXi server hosting both the VSM and a VEM)
Provides the CLI for managing the Nexus 1000V switch
Controls multiple VEMs as a single logical network device
- Virtual Ethernet module (VEM)
An add-on module installed on the ESXi hypervisor, which runs the VEM daemons.
Acts as a kind of vSphere distributed switch on each host
Operates independently of the VSM: if the VSM fails, the VEM continues to forward
traffic, and parts of its configuration can even be managed locally with vemcmd.
Cisco N1000V specific traffic types:
- Management traffic: Traffic for the VSM management interface and for VMware vCenter Server falls into
this category. VMware vCenter Server requires access to the VMware ESX management interface to
monitor and configure the VMware ESX host. Management traffic usually has low bandwidth requirements,
but it should be treated as high-priority traffic.
- Control traffic: Control traffic is generated by the Cisco Nexus 1000V Series and exchanged between the
primary and secondary VSMs as well as between the VSMs and VEMs. It requires little bandwidth (less
than 7 MB) but demands absolute priority.
- Packet traffic: Packet traffic transports selected packets to the VSM for processing. The bandwidth required
for the packet interface is extremely low, and its use is intermittent. If the Cisco Discovery Protocol and
IGMP features are turned off, there is no packet traffic at all.
- System VLAN: a system VLAN enables the VEM to forward traffic even when communication with the VSM is lost. The system VLAN is mandatory for the above 3 traffic types and the VMware management interface; it is also recommended for other vmkernel traffic, e.g. vMotion and iSCSI/NFS.
Cisco N1000V requirements:
vSphere ESX/ESXi 4 or higher (check the compatibility guide on the Cisco website for details)
vSphere ESX/ESXi must have an Enterprise Plus license (the N1000V is a kind of distributed switch)
vSphere vCenter (the VSM needs to install a plug-in into vCenter)
The ESX/ESXi host must have at least 2 NICs if you plan to install both the VSM and a VEM on the same host.
The N1000V VM must use the thick disk type, and its network interface must use E1000
Cisco N1000V pros and cons
- Pros:
Because the Cisco N1000V runs Cisco NX-OS, it offers additional features over the vSphere distributed switch.
Central configuration through the NX-OS CLI, which feels just like a physical switch. E.g. every live VM’s interface can be seen in “show run”, and an access list can be applied to the interface (see the sketch after this list).
True end-to-end QoS: apart from being allocated specific bandwidth by a policy-map, traffic leaving the Cisco N1000V is marked with a DSCP value, which the upstream Cisco switch understands. VMware NetIOC offers bandwidth allocation by “shares”, but it is local to the VMware hypervisor only. (Update: vSphere 5 supports the IEEE 802.1p CoS value, which makes end-to-end QoS possible.)
True LACP port-channels; VMware doesn’t support LACP bonding without the Cisco N1000V
- Cons:
Additional license cost
Unlike the distributed switch, which is built into VMware, the VSM is a third-party VM. Even though the VSM supports HA and a VSM failure doesn’t stop the VEM from functioning, it is impossible to make configuration changes without the VSM.
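As an example of the ACL point above, here is a minimal sketch of what that looks like in the NX-OS CLI (the ACL name and rules are hypothetical, for illustration only; profiles like vmdata are created later in this post, and you should check the N1000V security configuration guide for the exact ACL feature set in your release):
#permit only web traffic into VMs attached to the vmdata profile
ip access-list vm-web-only
permit tcp any any eq 80
deny ip any any
port-profile type vethernet vmdata
ip port access-group vm-web-only in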
Cisco N1000V deployment procedure
1) Download software
Download a free trial of ESXi and vCenter from the VMware site, and a free trial of the N1000V from the Cisco website. (Check the compatibility guide to determine the exact versions.)
In my test, I used
- VMware ESXi 5.0.0 build-474610
- VMware vCenter Server 5.0.0 build-623373
- Cisco N1000v version 4.2(1)SV1(5.1)
NOTE: The N1000V installation procedure may vary between versions; the following is for version 4.2(1)SV1(5.1).
2) Install ESXi and vCenter
Create 3 port-groups for the management, control, and packet traffic of the N1000V. (You can create a separate VLAN for each type, but sharing the same VLAN as the management interface of the ESXi host is sufficient, so all three port-groups use the same VLAN ID.)
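If you prefer the command line to the vSphere client for this step, the same standard-switch port-groups can be created with esxcli on the ESXi 5 host (a sketch; the port-group names, vSwitch0, and VLAN ID 3 are assumptions from my lab):
ESXi> esxcli network vswitch standard portgroup add -p n1kv-control -v vSwitch0
ESXi> esxcli network vswitch standard portgroup set -p n1kv-control --vlan-id 3
#repeat for the management and packet port-groups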
3) Install N1000V VSM
Unzip the downloaded Cisco N1000V, connect to vCenter, and deploy the OVA file located in “Nexus1000v.4.2.1.SV1.5.1\VSM\Install”. Follow the wizard to deploy the OVA. (NOTE: use the default settings; the disk type must use thick provisioning and the NIC must use E1000.)
Start up the N1000V VM, log in to the console with the credentials supplied earlier, and run “setup” to start the configuration wizard. Most options are self-explanatory; the following are worth noting:
Configure Advanced IP options (yes/no)? [n]: no
Enable the ssh service? (yes/no) [y]: yes
Configure svs domain parameters? (yes/no) [y]: yes
Enter SVS Control mode (L2 / L3) : L3
#(VEM-VSM communication can operate in Layer 2 mode or Layer 3 mode; L3 is recommended.
But the N1000V is a Layer 2 device; IP routing is not supported)
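Before moving on, you can sanity-check what the wizard configured from the VSM CLI (a quick check; output fields vary by release):
nv1> show svs domain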
4) Establish connection between VSM and vCenter
#Launch the Java installer
C:\Nexus1000v.4.2.1.SV1.5.1\VSM\Installer_App>java -jar Nexus1000V-install.jar VC
#Follow the wizard to establish the connection between the VSM and vCenter; the result is that a new distributed switch is created in the “Networking” view in vCenter.
Verify the connection in N1000V CLI:
nv1> show svs connections
….
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 5.0.0 build-623373
vc-uuid: F1D0CEBA-C365-4F55-830D-A0B9BB6F8520
#Don’t continue to the next step until a successful connection is seen.
5) Install N1000V VEM
C:\Nexus1000v.4.2.1.SV1.5.1\VSM\Installer_App>java -jar Nexus1000V-install.jar VEM
#Launch the Java installer, which will connect to vCenter and the VSM to push the VEM module to the ESXi host.
#You can also install the VEM module manually:
C:\Nexus1000v.4.2.1.SV1.5.1\VEM\cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib
#Transfer the file to the ESXi 5 host and run:
ESXi>esxcli software vib install -v /tmp/cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib
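Either way, it is worth verifying on the host that the VEM VIB is installed and the module is running (“vem status” comes with the VEM package; its output varies by version):
ESXi> esxcli software vib list | grep cisco
ESXi> vem status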
6) Establish connection between VEM and VSM
#Because the VSM and VEM are on the same host, you need to put them on separate physical NICs in order to migrate from the standard switch to the N1000V switch.
#Create port profiles on the N1000V
#create an uplink profile to be linked to the ESXi host pNIC
# note the type is ethernet and the switchport mode is trunk
port-profile type ethernet vm-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 3-10
no shutdown
system vlan 3,5-6
state enabled
#Create a port-group for the vmkernel management interface.
#note the type is vethernet, and “capability l3control” and system vlan are mandatory
port-profile type vethernet L3vmkernel
capability l3control
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
# the following 3 profiles are for the N1000V port-groups; system vlan is mandatory
port-profile type vethernet ds_ctl
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
port-profile type vethernet ds_mgt
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
port-profile type vethernet ds_pkt
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
#the following is for generic VM traffic; the vCenter VM will be migrated to this in the first stage.
#system vlan is optional
port-profile type vethernet vmdata
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled
After successful execution of the above commands, 5 objects will be created under the switch in the “Networking” view.
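You can also confirm from the VSM side that the profiles were pushed to vCenter (a quick check; the column layout varies by release):
nv1> show port-profile brief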
Right-click the switch and select “Add Host”. In “Select physical adapters”, select “vmnic1”; for the uplink group, select “vm-uplink”.
Don’t migrate vmk0 yet.
In “Migrate virtual machine networking”, migrate the N1000V VM from the standard switch to its own N1000V switch, and migrate the vCenter VM to the “vmdata” group.
Make sure the N1000V VM and the vCenter VM were migrated successfully by checking “show svs connections”.
The next step is to migrate vmk0.
Click the N1000V switch; in the configuration tab, select “Manage Hosts” and migrate vmk0 to the “L3vmkernel” port-group. Now everything has been migrated from vmnic0 to vmnic1, and vmnic0 is spare; you can create a port-channel on the N1000V and then migrate vmnic0 as well (a sketch follows below).
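A minimal sketch of that port-channel step, assuming the upstream physical switch has LACP configured on the corresponding ports (the channel-group command is added to the existing uplink profile):
nv1# config t
feature lacp
port-profile type ethernet vm-uplink
channel-group auto mode active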
Only after a successful migration of vmk0 is the VEM-to-VSM connection established and the VEM module seen in the switch:
nv1> show module
Mod Ports Module-Type Model Status
--- ----- -------------------------------- ------------------ ------------
1 0 Virtual Supervisor Module Nexus1000V active *
3 248 Virtual Ethernet Module NA ok
Other notes:
In a VMware standard switch, port-groups “grow” under a switch, which in turn is linked to pNICs, so there is a clear one-to-one mapping between a port-group and a pNIC. In the N1000V switch, the relationship between a port-group and a pNIC follows the VLANs: a vethernet port-group can reach any uplink whose trunk allows its VLAN.
Unlike the VMware standard switch, where a port-group can be tagged for “vMotion” traffic directly in the GUI, for the N1000V you have to do this with the following steps:
Switch to the “Hosts and Clusters” view, select the host, open the “vSphere Distributed Switch” view under its networking configuration, click “Manage Virtual Adapters”, add a vmkernel adapter, and in the connection settings select “Use this virtual adapter for vMotion”.
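On the VSM side, the port-profile backing that vmkernel adapter looks like the others; a sketch (the profile name and VLAN are examples, and system vlan is included because it is recommended for vMotion, as noted in the traffic types above):
port-profile type vethernet vmotion
vmware port-group
switchport mode access
switchport access vlan 3
no shutdown
system vlan 3
state enabled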
#Sample QoS configuration bound to a port-group
policy-map type qos po-vmdata
class class-default
police cir 1500 mbps bc 200 ms conform transmit violate set dscp dscp table pir-markdown-map
port-profile type vethernet vmdata
service-policy input po-vmdata
service-policy output po-vmdata
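With this policy, traffic conforming to the 1500-Mbps CIR is transmitted unchanged, while violating traffic has its DSCP marked down according to the pir-markdown-map table. To review the policy definition (a standard NX-OS command; output varies by release):
nv1> show policy-map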
#Controlling the VEM directly on the ESXi host when the VSM is not available.
#unblock a port by defining the correct system vlan
ESXi > vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port Type
49 DOWN UP BLK 0 testlinux.eth0
ESXi > vemset system-vlan 3 ltl 49
ESXi > vemcmd show port
LTL VSM Port Admin Link State PC-LTL SGID Vem Port Type
49 UP UP FWD 0 testlinux.eth0
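Other vemcmd subcommands are useful for local troubleshooting (availability may vary with the VEM version):
ESXi> vemcmd show card #VSM connectivity, domain id, and control mode
ESXi> vemcmd show trunk #VLAN state on the uplinks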
Cisco Nexus 1000V Series Switches download and documentation links
http://www.cisco.com/en/US/products/ps9902/index.html