Question : Should I enable or disable TCP Offload in my Hyper-V R2 environment?
There is quite a bit of information about this online, but I'm still unable to determine definitively whether TCP Offload should be enabled or disabled. Here's a quick rundown of my setup.
Hardware: Dell PowerEdge R710 servers with Broadcom BCM5709 NICs.
Two host machines are set up in a failover cluster. Each host has 8 NICs: 1 NIC is dedicated to CSV traffic, 1 NIC is dedicated to Live Migration traffic, 2 NICs are for iSCSI communications, and 2 NICs are used by Hyper-V for LAN connectivity. The 8th NIC is unused. The SAN is an iSCSI Dell EqualLogic, behind 2 stacked PowerConnect 6224 switches.
This is what has been done on the NICs at this point:
-Installed the Broadcom drivers and BASP utility on the hosts
-Disabled NetBIOS on the iSCSI, LM and CSV NICs.
-Enabled Jumbo Frames on the iSCSI, LM and CSV NICs (a quick way to double-check the MTU is sketched after this list).
-Enabled Flow Control on the iSCSI NICs.
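For what it's worth, here's a rough Python sketch of how the jumbo frame MTU could be spot-checked from each host using the built-in "netsh interface ipv4 show subinterfaces" output. It assumes Python is installed on the hosts, and the interface names in it are placeholders for whatever your connections are actually called, so adjust as needed.

# Sketch: verify the jumbo frame MTU on the iSCSI/LM/CSV connections.
# The names below ("iSCSI1", "iSCSI2", "LiveMigration", "CSV") are placeholders;
# change them to match how the connections are named on your hosts.
import subprocess

EXPECTED_MTU = 9000  # typical jumbo frame size; match whatever your switches/SAN are set for

def show_mtus():
    # "netsh interface ipv4 show subinterfaces" lists the MTU per connection
    output = subprocess.check_output(
        ["netsh", "interface", "ipv4", "show", "subinterfaces"],
        universal_newlines=True,
    )
    for name in ("iSCSI1", "iSCSI2", "LiveMigration", "CSV"):
        for line in output.splitlines():
            if line.strip().endswith(name):
                mtu = int(line.split()[0])  # MTU is the first column
                status = "OK" if mtu >= EXPECTED_MTU else "NOT jumbo"
                print("%-15s MTU=%d  %s" % (name, mtu, status))

if __name__ == "__main__":
    show_mtus()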
At this point I haven't touched TCP Offloading, but it appears to be enabled by default on ALL of the NICs. I wouldn't say I'm seeing issues with it enabled, but I'd like to know if there would be any improvement if I disabled it. There seems to be a lot of info out there, some saying enable it, some saying disable it. Can anyone shed some light on this?
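In case it's useful for comparing the two hosts, here is a minimal Python sketch (again assuming Python is on the hosts) that just dumps the global TCP offload-related state that netsh reports, such as the Chimney Offload and Receive-Side Scaling settings, so you can see what is actually in effect before changing anything.

# Sketch: dump the global TCP settings (Chimney Offload State, RSS, etc.)
# as reported by the built-in netsh command; run from an elevated prompt
# on both cluster nodes and compare the output.
import subprocess

def show_tcp_global():
    print(subprocess.check_output(
        ["netsh", "int", "tcp", "show", "global"],
        universal_newlines=True,
    ))

if __name__ == "__main__":
    show_tcp_global()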
Thanks
Answer : Should I enable or disable TCP Offload in my Hyper-V R2 environment?
By disabling TCP Offloading you're telling the TOE NIC(s) you've dedicated to the iSCSI network to let the CPU encapsulate and de-encapsulate the iSCSI packets instead. This may not be a big deal if your CPU utilization is very low and your packet transmit rate is moderate or less. If I remember correctly, high iSCSI packet rates could increase CPU utilization by as much as 20%. If you don't have any technical reason to turn it off, I would leave it enabled.
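If you do decide to test with it off, here's a rough sketch (assuming Python and an elevated prompt on the host) of how the global TCP Chimney Offload setting could be flipped with netsh. Note this only covers the global chimney setting; the per-NIC checksum/LSO offload options still live in the Broadcom driver's Advanced properties / BACS, which this does not touch. Re-test iSCSI throughput and host CPU afterwards so you have numbers to compare.

# Sketch: toggle the global TCP Chimney Offload state and show the result.
# Set DISABLE = False to turn it back on.
import subprocess

DISABLE = True

def set_chimney(disable):
    state = "disabled" if disable else "enabled"
    # "netsh int tcp set global chimney=disabled|enabled" is the global switch
    # on Server 2008 R2; it does not change the per-NIC driver offload settings.
    subprocess.check_call(["netsh", "int", "tcp", "set", "global", "chimney=%s" % state])
    # Show the resulting global state for confirmation.
    subprocess.call(["netsh", "int", "tcp", "show", "global"])

if __name__ == "__main__":
    set_chimney(DISABLE)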