From Mark Furneaux's Wiki
Revision as of 12:30, 23 August 2014 by Mark Furneaux (Talk | contribs)


Infiniband, often abbreviated IB, is a high-speed hardware networking interconnect commonly used in high-performance computing (HPC) systems and supercomputers.


Infiniband's standard link speed is 2.5Gbps per lane. The original protocol uses 8b/10b encoding, transmitting 10-bit symbols of which only 8 bits carry data. This leaves a usable data bandwidth of 2Gbps. This base link speed is referred to as the Single Data Rate, or SDR. Later versions of the standard implemented the Double Data Rate (DDR), which doubles the SDR signalling rate to 5Gbps (4Gbps usable). Further speed grades also exist, such as the Quad Data Rate (QDR), FDR, EDR, and more. Lanes at each speed grade are usually aggregated into a single link, yielding a final speed of 1x, 4x, 8x, or 12x the base speed. For example, DDR with a 4x aggregate yields a 20Gbps signal rate with a 16Gbps data rate.
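The signal-rate arithmetic above can be sketched as a small calculator. This is illustrative only: the function and table names are made up, and the 8b/10b overhead applies to the older SDR/DDR/QDR grades (FDR and later moved to more efficient encodings).

```python
# Per-lane signalling rate in Gbps for the older speed grades.
SIGNAL_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

def link_rates(grade, lanes):
    """Return (signal_rate, data_rate) in Gbps for an aggregated link."""
    signal = SIGNAL_RATE_GBPS[grade] * lanes
    # 8b/10b encoding: only 8 of every 10 bits carry data.
    data = signal * 8 / 10
    return signal, data

print(link_rates("DDR", 4))  # (20.0, 16.0), matching the example above
```

The same arithmetic reproduces the SDR figures from the text: a 1x SDR link signals at 2.5Gbps and carries 2Gbps of data.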

Cables and Connectors

Infiniband originally used the SFF-8470 connector, known more commonly as the CX4 connector. This connector is also used by SAS (Serial Attached SCSI), and there are cables which allow IB cards to be used as host bus adapters for disk drives using iSCSI. The 32-pin CX4 connector comes in three variants: one which uses thumb screws and two which use latches. The latch- and screw-type cables are keyed differently and are incompatible with each other; however, crossover cables exist. Short and lower-speed cables can be made using direct-attach copper, but long cables and those utilising the newer speed standards require active fibre cables with built-in optical transceivers. Newer speed standards usually require the more modern QSFP family of connectors also common in Ethernet equipment.


Infiniband never had a standardised API, leading to many "extensions" being developed for different methods of data transfer. A non-profit organisation, the OpenFabrics Alliance, has standardised several extensions across multiple platforms. Possibly the best recognised is IPoIB, or Internet Protocol over Infiniband, which allows existing applications that use TCP or UDP to operate over the IB link. This is suboptimal, however, as IB operates very differently from other interconnects such as Ethernet, so IPoIB cannot realise the full speed of the connection. RDMA, or Remote Direct Memory Access, is another, more interesting extension which allows network cards to exchange data directly between the main memory of one computer and another without involving the operating system. This results in low overhead, high throughput, and low latency.
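Because IPoIB presents an ordinary IP interface (typically `ib0`), unmodified socket code runs over it. A minimal sketch, assuming a peer with an IPoIB address is listening; the address and port below are illustrative placeholders, and nothing in the code is IB-specific:

```python
import socket

def fetch_banner(host, port=5000, timeout=5.0):
    """Open a plain TCP connection and read one line.
    Over IPoIB the kernel simply routes this via the IB link."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.makefile("r").readline().strip()

# e.g. fetch_banner("10.0.2.1") where 10.0.2.1 is assigned to ib0
```

This is exactly the convenience (and the limitation) of IPoIB: existing applications work untouched, but they go through the full TCP/IP stack rather than the RDMA fast path.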


Infiniband, while designed for HPC, can also be used in more conventional settings. Older IB network cards are much less expensive than Ethernet cards of equivalent speed, especially second-hand. This allows high-bandwidth links to be built for things like file servers at very low relative cost. NFS, for example, natively supports RDMA.
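As a sketch of what an NFS-over-RDMA client mount looks like on Linux (the server name, export path, and mount point are placeholders; this assumes the server has NFS/RDMA enabled and both ends have working IB links):

```shell
# Load the client-side NFS/RDMA transport module (in the mainline kernel)
modprobe xprtrdma

# Mount the export over the RDMA transport; 20049 is the
# conventional NFS/RDMA port.
mount -t nfs -o rdma,port=20049 server:/export /mnt/export
```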

See Also