Sunday, September 12, 2010

Configure Jumbo Frames for VMware ESX 3.5 and ESX 4.x

VMware KB article 1007654, "iSCSI and Jumbo Frames configuration on ESX 3.x and 4.x", may lead you to think that enabling jumbo frames on the vmkernel interface is always required. Actually, it depends on what you want to achieve:
1) If you want to place VMs on an NFS/iSCSI datastore, enable jumbo frames on the ESX host's vmkernel interface and on the vSwitch where the vmkernel port group resides.
2) If you want the guest OS to connect to an NFS/iSCSI server directly, enable jumbo frames on the guest OS's network interface and on the vSwitch where the guest OS port group resides.

Please note the following:
    Jumbo frames need to be enabled end to end [OS level - ESX virtual switch - physical switch - peer host].
    For ESX 3.5/4.x, jumbo frames are supported both in the guest operating system and in the ESX kernel TCP/IP stack. (Jumbo frames on software iSCSI for ESX 3.x are only experimental.)
    ESX 3.5 doesn't allow updating the MTU on an existing vSwitch; the vSwitch has to be re-created.
    ESX 4 allows updating the MTU on a vSwitch on the fly.
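Before changing anything, it is worth recording the current MTU settings. The two listing commands used in the examples below are read-only and safe to run at any time:

```shell
# List all vSwitches; the MTU column shows the current value per switch
esxcfg-vswitch -l
# List all vmkernel interfaces with their port group, IP settings, and MTU
esxcfg-vmknic -l
```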

Example #1 for ESX 4: changing the MTU to 9000 for the vmkernel interface to which the NFS/iSCSI datastore port group is linked.
####Shut down the VMs on the current ESX host or migrate running VMs to other hosts
####Write down the current information: port group name / IP address / network mask
$ esxcfg-vmknic -l
Interface  Port Group/DVPort   IP Family IP Address                              Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type
vmk1       esx_nfs      IPv4                    00:50:56:71:48:18 1500    65535     true    STATIC
#### Delete the current port group (the MTU can't be changed on an existing vmkernel interface)
esxcfg-vmknic -d esx_nfs
####Re-create the vmkernel interface with MTU 9000 and update the vSwitch where esx_nfs resides
esxcfg-vmknic -a -i <IP address> -n <netmask> -m 9000  esx_nfs
esxcfg-vswitch -m 9000  vSwitch2
####Test with vmkping
vmkping -s 8900  NFS-Server-IP
Header overhead (IP + ICMP) is added to the payload, so it is safe to start with an 8900-byte payload.
The normal "ping" command only tests connectivity on the ESX console network.
If vmkping fails, reboot the ESX host and check that the NFS server has jumbo frames enabled as well.
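The overhead mentioned above can be pinned down exactly: a plain IPv4 ICMP echo carries 20 bytes of IP header plus 8 bytes of ICMP header inside the frame, so the largest ping payload that fits in a 9000-byte MTU is 8972 bytes. A quick sanity check of the arithmetic:

```shell
MTU=9000
IP_HEADER=20    # IPv4 header without options
ICMP_HEADER=8   # ICMP echo header
MAX_PAYLOAD=$((MTU - IP_HEADER - ICMP_HEADER))
echo "$MAX_PAYLOAD"   # prints 8972
```

Starting with -s 8900 simply leaves a small margin below that limit.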

Example #2 for ESX 4: changing the MTU to 9000 for a Linux guest OS on a normal data network.

####Find the vSwitch name
esxcfg-vswitch -l 
####Change the MTU for the vSwitch; the vmnicX uplinks on the ESX host will be updated automatically
esxcfg-vswitch -m 9000  vSwitch3
####Change the MTU on the Linux guest OS
Append "MTU=9000" to the Linux NIC configuration file /etc/sysconfig/network-scripts/ifcfg-ethX,
then restart the interface: ifdown ethX ; ifup ethX
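As a concrete sketch (assuming a RHEL/CentOS-style guest and a hypothetical interface name eth0; adjust the device name to your system):

```shell
# Append the MTU setting to the interface config file (hypothetical eth0)
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth0
# Restart the interface so the new MTU takes effect
ifdown eth0 && ifup eth0
# Verify: the link should now report "mtu 9000"
ip link show eth0
```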
####Test with ping with the don't-fragment flag (-M do) set. Without it, "ping -s 8900" might succeed even with MTU=1500, because the packet is silently fragmented.
$ ping -M do -s 8900 NFS-Server-IP
Header overhead (IP + ICMP) is added to the payload, so it is safe to start with an 8900-byte payload.
