Linux Network Interface Bonding / Trunking or How to Get Beyond 1 Gb/s

    Thu 11 November 2010

    This article discusses Linux bonding and how to achieve 2 Gb/s transfer speeds with a single TCP/UDP connection.

    UPDATE July 2011

    Due to hardware problems, I was not able to achieve transfer speeds beyond 150 MB/s. By replacing a network card with one from another vendor (an HP-branded Broadcom) I managed to obtain 220 MB/s, which is about 110 MB/s per network interface.

    So I am now able to copy a single file with the 'cp' command over an NFS share at 220 MB/s.
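
    For reference, a read test over the NFS mount can be done along these lines; the mount point and file name below are just placeholders, not the paths I actually used:

    # Assumption: the NFS export is mounted at /mnt/nfs and contains a large test file.
    sync; echo 3 > /proc/sys/vm/drop_caches      # drop the client page cache so the read really goes over the network
    dd if=/mnt/nfs/bigfile of=/dev/null bs=1M    # dd prints the effective throughput when it finishes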


    Update January 2014

    See this new article on how I got 340 MB/s transfer speeds.


    I had problems with an Intel e1000e PCIe card in an Intel DH67BL board. I tested different e1000e PCIe models, but to no avail: RX was 110 MB/s, yet TX was never faster than 80 MB/s. An HP-branded Broadcom card gave no problems and also provided 110 MB/s for RX traffic. lspci output:

    Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express

    The on-board e1000e NIC performed normally, but the PCIe e1000e cards, with their different chipsets, never got above 80 MB/s.

    A gigabit network card provides about 110 MB/s (megabytes per second) of bandwidth. If you want to go faster, the options are:

    1. Buy InfiniBand hardware: I have no experience with it; it may be the smart thing to do, but it seems expensive.
    2. Buy 10 Gigabit network cards: very expensive compared to the other solutions.
    3. Strap multiple network interfaces together to get 2 Gb/s or more, depending on the number of cards.

    This article discusses the third option. Teaming or bonding two network cards into a single virtual card that provides twice the bandwidth will give you that extra performance you were looking for. But the 64,000 dollar question is:

    How do you obtain 2 Gb/s with a single transfer, thus with a single TCP connection?

    Answer: The trick is to use Linux network bonding.

    Most bonding modes only provide an aggregate throughput of 2 Gb/s by balancing different network connections over different interfaces. An individual transfer will never get beyond 1 Gb/s, but it is possible to have two 1 Gb/s transfers going on at the same time.

    That is not what I was looking for. I want to copy a file over NFS and simply get more than 120 MB/s.

    The only bonding mode that allows a single TCP or UDP connection to go beyond 1 Gb/s is mode 0: Round Robin (balance-rr). This bonding mode is kind of like RAID 0 over two or more network interfaces.
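
    For the curious: once the bonding module is loaded, the mode of a bond can be inspected (and, on a bond that is down and has no slaves, changed) through sysfs. A small sketch:

    cat /sys/class/net/bond0/bonding/mode                  # prints e.g. "balance-rr 0" for Round Robin
    echo balance-rr > /sys/class/net/bond0/bonding/mode    # only allowed while bond0 is down and has no slaves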

    However, you cannot use Round Robin with a standard switch. You need an advanced switch that is capable of creating "trunks": a trunk is a virtual network interface that consists of individual ports grouped together. So you cannot use Round Robin mode with an average unmanaged switch. The only other option is to use direct cables between the two hosts, although I didn't test this.

    Results

    UPDATE July 2011: Read the update at the top.

    Now the results: I was able to obtain a transfer speed (read) of 155 MB/s with a file copy over NFS. Normal transfers capped at 109 MB/s. To be honest, I had hoped to achieve way more, like 180 MB/s. However, the actual transfer speed you obtain will depend on the hardware used. I recommend using Intel or Broadcom hardware for this purpose.

    Also, I was not able to obtain a write speed that surpasses 1 Gb/s. Since I wrote the data to a fast RAID array, the underlying storage subsystem was not the bottleneck.
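
    To rule out the disks, the raw write speed of the array can be checked locally with something like the following; the target path and size are arbitrary examples:

    # Assumption: the RAID array is mounted at /raid.
    dd if=/dev/zero of=/raid/testfile bs=1M count=8192 conv=fdatasync   # fdatasync forces the data to disk before dd reports the speed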

    So the bottom line is that it is possible to get more than 1 Gb/s, but the performance gain is not as high as you might want.

    Configuration:

    Client:

    modprobe bonding mode=0                          # mode 0 = balance-rr (Round Robin)
    ifconfig bond0 up
    ifenslave bond0 eth0 eth1                        # enslave eth0 and eth1 to bond0
    ifconfig bond0 10.0.0.1 netmask 255.255.255.0
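
    On newer systems the same client setup can also be expressed with the iproute2 tools instead of ifconfig/ifenslave. This is just a sketch, not what I used at the time:

    ip link add bond0 type bond mode balance-rr      # create the bond in Round Robin mode
    ip link set eth0 down; ip link set eth0 master bond0
    ip link set eth1 down; ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 10.0.0.1/24 dev bond0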
    

    Server:

    modprobe bonding mode=4 lacp_rate=0 xmit_hash_policy=layer3+4   # mode 4 = 802.3ad (LACP) with the layer3+4 hash policy
    ifconfig bond0 up
    ifenslave bond0 eth0 eth1
    ifconfig bond0 10.0.0.2 netmask 255.255.255.0
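
    On Debian-like systems this server configuration can be made persistent in /etc/network/interfaces; a sketch, assuming the ifenslave package is installed:

    auto bond0
    iface bond0 inet static
        address 10.0.0.2
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate slow
        bond-xmit-hash-policy layer3+4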
    

    Bonding status:

    cat /proc/net/bonding/bond0
    
    Ethernet Channel Bonding Driver: v3.3.0 (June 10, 2008)
    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer3+4 (1)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    
    802.3ad info
    LACP rate: slow
    Active Aggregator Info:
    Aggregator ID: 2
    Number of ports: 2
    Actor Key: 9
    Partner Key: 26
    Partner Mac Address: 00:de:ad:be:ef:90
    
    Slave Interface: eth0
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:c0:ff:ee:aa:00
    Aggregator ID: 2
    
    Slave Interface: eth1
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:de:ca:fe:b1:7d
    Aggregator ID: 2
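
    To check that a transfer is really spread over both slaves, the per-interface byte counters can be watched while the copy runs, for example:

    # Take two snapshots of the TX byte counters a few seconds apart; both should be growing.
    cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes
    sleep 5
    cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes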
    
