PSBM5 combined network throughput limited to 50Mbps

Discussion in 'Performance Questions' started by CodyR, Sep 8, 2014.

  1. CodyR

    CodyR Bit Poster

    ##DESCRIPTION OF PROBLEM##

    The maximum combined throughput across all interfaces is ~6.6 MBps (~53 Mbps).

    ##HW SETUP##
    I have 2 PSBM5 machines.
    Each has:
    2x 1GE (eth0, eth1)
    2x 10GE (eth2, eth3).

    Eth0: 1GE management, via Brocade 6450 (embedded)
    Eth1: 1GE direct to server (embedded)
    Eth2: 10GE, connected to Brocade 6450 (card)
    Eth3: 10GE, connected to Cisco 2960 (card)

    Both units are server-class hardware
    (Supermicro, dual Xeon, >64 GB ECC RDIMM, and hardware RAID).

    ##NET SETUP##
    eth0 is an untagged management interface, connected at 1GE.
    eth1-eth3 are tagged VLAN trunks configured with vznetcfg.

    Each of those VLAN trunk interfaces has a dedicated subinterface that is not attached to any Parallels "Network".
    These have private IP addresses that I used for testing.

    For example, eth1.172, eth2.172, and eth3.172:
    VLAN 172 is for testing; VLAN 173 is a network with virtual machines connected.
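
    (For reference, one of these tagged test subinterfaces would look roughly like this in plain iproute2 terms; the 10.0.172.x address is a placeholder rather than my real test subnet, and on PSBM the interfaces were actually created with vznetcfg, not by hand:)

        # Tagged subinterface for VLAN 172 on eth2 (placeholder address)
        ip link add link eth2 name eth2.172 type vlan id 172
        ip addr add 10.0.172.1/24 dev eth2.172
        ip link set eth2.172 up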


    ##SPEED TESTING METHODS##
    I am generating traffic using the netcat speed-test method described here:
    http://kb.odin.com/en/115348
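
    (The method from that article boils down to something like the following; the port and the 10.0.172.x address are placeholders, and the exact invocation in the KB may differ:)

        # Receiving node: listen on a test port and discard the data
        # (some nc flavors want "nc -l -p 12345" instead)
        nc -l 12345 > /dev/null

        # Sending node: stream zeroes to the receiver's test address;
        # dd prints an aggregate transfer rate when it completes
        dd if=/dev/zero bs=1M count=1024 | nc 10.0.172.2 12345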

    I use 'pstat -n -c M' to verify and monitor the current throughput.
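
    (As a sanity cross-check that does not depend on any Parallels tooling, the per-interface rate can also be sampled straight from the kernel counters in /proc/net/dev:)

        # RX rate on eth2 over one second, from raw kernel byte counters
        r1=$(awk '/eth2:/ {print $2}' /proc/net/dev); sleep 1
        r2=$(awk '/eth2:/ {print $2}' /proc/net/dev)
        echo "eth2 RX: $(( (r2 - r1) / 1024 )) KB/s"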

    I have also used rsync at various times to spot-check throughput.


    ##DETAILS OF PROBLEM##

    At first I assumed the problem was buffers, so I followed the procedure in:
    http://kb.odin.com/en/111197
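
    (That article covers raising the kernel socket buffer limits; what I applied was along these lines, though the values shown here are illustrative rather than copied from the KB:)

        # Raise the socket buffer ceilings (illustrative values)
        sysctl -w net.core.rmem_max=16777216
        sysctl -w net.core.wmem_max=16777216
        sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"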

    Modifying the buffers did not change the speeds at all, so I kept investigating and realized the problem occurs on both the 1GE AND the 10GE interfaces, and that the speed on a 1GE interface affects the speed on a 10GE interface.

    For example:
    If I start a speed test eth0 <-> eth0, I get ~6.6 MBps.
    If I then start a second speed test between any other pair of interfaces, both tests drop to ~3.3 MBps each.

    So that rules out a PCI bandwidth bottleneck, since the cap is shared across the embedded NICs and the add-in card.

    I thought it might be that only host-generated traffic is throttled, so I started a file copy between 2 virtual machines. It is also throttled to almost exactly 6.6 MBps, and its speed drops as I start tests on other interfaces.

    Everything I can find about rate limiting in the manual relates to containers, but I do not have any containers running.
    The only related settings I can find are the traffic-shaping parameters in vz.conf.
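
    (The shaping-related block in a stock vz.conf looks roughly like this; these are the illustrative defaults, not necessarily my exact values:)

        TRAFFIC_SHAPING=no              # shaping off by default
        BANDWIDTH="eth0:102400"         # interface capacity, in Kbits/s
        TOTALRATE="eth0:1:4096"         # total rate for traffic class 1, Kbits/s
        RATE="eth0:1:8"                 # default per-container rate, Kbits/s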

    Even if traffic shaping were set to yes, why would TOTALRATE be spread across interfaces?

    Is there another configuration somewhere that I am missing that would open up the bandwidth?


    Thanks,

    cody
     
  2. KonstantinB

    KonstantinB Odin Team

    Hi Cody,

    Have you tested the network speed between the nodes before the VLANs were created?

    I could also suggest booting both nodes into a stock RHEL6/CentOS6 kernel and testing the network speed there, with and without VLANs, to separate software issues from hardware ones.
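
    A minimal way to set up that A/B check, assuming a standard RHEL6 grub setup (the kernel version string below is an example, not necessarily what is installed):

        # Which kernel is running now? OpenVZ-based kernels carry "stab"
        uname -r
        # List the installed kernels; the stock one ends in .el6.x86_64
        ls /boot/vmlinuz-*
        # Make the stock kernel the default for the next boot
        grubby --set-default /boot/vmlinuz-2.6.32-431.el6.x86_64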

    Best regards,
    Konstantin
     
