The customer is using bonded interfaces on the Titanium Server, connected to the HW vendor's switches over 10 Gb/s links. The customer reported a number of issues with these interfaces that they were unable to resolve, including a reported link speed of 0 Mbps. Example:

    2015-09-23T12:54:56.643 compute-0 kernel: info bonding: bond0: link status definitely down for interface eth0, disabling it
    2015-09-23T12:55:01.771 compute-0 kernel: info ixgbe 0000:01:00.0 eth0: NIC Link is Up 10 Gbps, Flow Control: None
    2015-09-23T12:55:01.843 compute-0 kernel: info bonding: bond0: link status definitely up for interface eth0, 0 Mbps full duplex.

The customer also reported inactive links on the Linux side of the bonded interfaces. WR attempted to reproduce the problem in our own environment (the Kista lab), but our setups behaved correctly. The customer contacted the HW vendor, who concluded after investigation that the problem lies with the LACP protocol on the WR host Linux side. According to the customer, manually running "ethtool -r" (which restarts auto-negotiation on the interface) remedied the link problem.
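As a rough sketch of how the customer's manual workaround could be automated (the watchdog itself is hypothetical; interface names bond0/eth0 are taken from the logs above, and the ethtool invocation is the standard "ethtool -r <iface>" auto-negotiation restart), a script could detect the 0 Mbps symptom in the kernel log line and restart auto-negotiation on the affected slave:

```shell
#!/bin/sh
# Hypothetical sketch: detect a bond slave reported up at 0 Mbps and
# restart auto-negotiation on it, mirroring the customer's manual
# "ethtool -r" workaround. Not a supported fix, only an illustration.

check_line() {
    # Print the slave interface name if the log line reports 0 Mbps,
    # otherwise print nothing.
    line="$1"
    case "$line" in
        *"link status definitely up for interface "*" 0 Mbps"*)
            # Extract the interface name between "interface " and the comma.
            printf '%s\n' "$line" | sed -n 's/.*interface \([^,]*\),.*/\1/p'
            ;;
    esac
}

# Sample input: the symptomatic log line from the customer report.
iface=$(check_line "bonding: bond0: link status definitely up for interface eth0, 0 Mbps full duplex.")
if [ -n "$iface" ]; then
    # In a real deployment this would be: ethtool -r "$iface"
    echo "would run: ethtool -r $iface"
fi
```

In practice such a check would be driven from the kernel log (or from the Speed field in /proc/net/bonding/bond0) rather than a hard-coded sample line; the point is that the remedial action is a per-slave auto-negotiation restart, not a bond teardown.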