Mellanox Advances 'In-Network Computing' with ConnectX-5 Adapter
Networking specialist Mellanox has announced ConnectX-5, the next generation of its 100G InfiniBand and Ethernet adapter line. The company says the new device will help organizations take advantage of real-time data processing for high performance computing (HPC), data analytics, machine learning, national security and 'Internet of Things' applications.
ConnectX-5 was designed to connect with any computing infrastructure – x86, Power, GPU, ARM, and FPGA – and it employs a variety of offload engines, which can be classified into two camps. The more established offloading capability supports network functions, such as RDMA, transport offload, and SR-IOV. There is also a new generation of acceleration engines that run data algorithms, essentially turning ConnectX-5 into a coprocessor.
Significant for HPC, ConnectX-5 continues the approach begun with Switch-IB2 and moves more MPI capabilities into the network. While Switch-IB2 offloads MPI collective operations to run on the switch itself, ConnectX-5 offloads MPI Tag Matching and MPI AlltoAll operations, and adds support for advanced dynamic routing.
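For readers less familiar with the operations being offloaded, the minimal sketch below shows ordinary MPI code that exercises both: a tag-matched receive and an all-to-all exchange. The offload is transparent to the application; whether the matching and collective work actually runs on the adapter depends on the MPI library and fabric configuration, not on anything in this code.

```c
/* A minimal sketch of ordinary MPI code exercising the two operations the
 * article says ConnectX-5 can offload: tag matching for point-to-point
 * messages, and the MPI_Alltoall collective. The offload is transparent
 * here -- whether it actually runs on the adapter depends on the MPI
 * library and fabric, not on this code. Run with at least two ranks. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Tag-matched point-to-point: the receiver posts a receive with a
     * specific tag; matching incoming messages against posted receives
     * is the work that can move from the host CPU to the adapter. */
    const int TAG = 42;
    double buf[1024];
    if (size >= 2) {
        if (rank == 0) {
            MPI_Request req;
            MPI_Irecv(buf, 1024, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            for (int i = 0; i < 1024; i++) buf[i] = (double)i;
            MPI_Send(buf, 1024, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD);
        }
    }

    /* All-to-all exchange: every rank sends one block to every other
     * rank -- the collective pattern named in the article. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) sendbuf[i] = rank * size + i;
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```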
With ConnectX-5 and Switch-IB2, 60 percent of the MPI algorithms are now being executed on the network, said Mellanox's Gilad Shainer. "Looking ahead, we're probably going to see the entire MPI moved to the network as part of the co-design approach," he added.
ConnectX-5 also exposes what Mellanox is referring to as in-network memory. With a small memory address space accessible by the application, data can be stored or made accessible on the network devices themselves, so it can be reached faster from different endpoints.
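Mellanox has not described the programming interface for this in-network memory, so the following is only a rough illustration of the access pattern being described, written with standard MPI one-sided operations: a small region of memory is exposed as a window that remote endpoints can read directly, without the owning process being involved in each transfer.

```c
/* Rough illustration only: the article does not describe the programming
 * interface for ConnectX-5 in-network memory. This standard MPI one-sided
 * sketch just shows the access pattern being described -- a small region
 * of memory exposed so that remote endpoints can read it directly without
 * the owner's process handling each transfer. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose a small window of memory to all ranks. */
    double *base;
    MPI_Win win;
    MPI_Win_allocate(64 * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
    if (rank == 0)
        for (int i = 0; i < 64; i++) base[i] = (double)i;

    /* Any other rank can fetch from rank 0's window without rank 0
     * explicitly sending the data. */
    MPI_Win_fence(0, win);
    double local[64];
    if (rank != 0)
        MPI_Get(local, 64, MPI_DOUBLE, 0, 0, 64, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```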
Mellanox positions the offloading approach as part of the larger transition to co-design principles that mine synergies between software and hardware or between the different hardware components. "The way to solve the performance bottlenecks that are now emerging is by running different algorithms in different places," said Shainer. "ConnectX-5 is the first adapter that brings the co-design architecture into the NIC side."
"Ten years ago process runtime or MPI collective approaches were running at hundreds of microsecond latencies," he went on to explain. "Network device latencies were in the range of tens of microseconds, so it was a big part of the overall latency. Fast forward to today and process latencies are in the range of tens of microseconds and network device latency is running about 100 nanoseconds. The question we're addressing is how do you make another performance improvement in the process latency – move from 10 microseconds to a low single digit of a microseconds – when CPU frequency doesn't go faster."
"Computing within network devices makes sense when multiple nodes need to act on the same data," observed Addison Snell, CEO of analyst firm Intersect360 Research. "In essence, it's the complement to pushing a computation all the way to a GPU with something like RDMA and you don't have to move the data off of the GPU in order to compute on it. If something's extremely local, it can be – on the one side – all the way down at the processing element on the node, but at the other end of the spectrum where it's something that's shared between nodes, it can be more effective to do it in the network as opposed to in the microprocessor."
The offloading approach that Mellanox is championing and delivering on stands in direct contrast to the CPU-centric approach espoused by Intel. Mellanox believes offloading is essential to getting the most out of the CPU, while Intel is essentially following a system-on-a-chip strategy that, as part of its Scalable System Framework, offers the simplicity of a tightly integrated hardware-software stack. Today's system architecture is still very CPU-centric, but Mellanox and others are advancing a different architectural approach based on specialized, best-of-breed components.
Intel's position is that everything will work better together if it's integrated onto a single chip, observed Snell. "Now Mellanox is countering by giving powerful counterexamples of how things can be engineered for higher performance when they're not integrated onto the chip, things like in-network computing or their MCM features; those argue against having things all integrated onto a chip. Which approach the market will prefer is certainly yet to be determined."
"Omni-Path is certainly a formidable announcement that Mellanox has to compete against," he continued, "but Mellanox has interesting differentiation. I don't think they're going just to get mowed over by Intel; I think this will come down to user preference and how they like to see their system architected."
A 100 Gigabit-per-second NIC, ConnectX-5 enables a reported 600 nanoseconds of end-to-end latency within the datacenter (the latency of ConnectX-5 itself is in the range of 100 nanoseconds). From the previous generation, ConnectX-5 takes message rate performance from 150 million messages per second to 200 million messages per second, a roughly 33 percent increase. In terms of how this stacks up against the competition, Shainer claimed a 2x performance advantage over the first-generation Omni-Path Architecture (OPA) adapters, which he notes are capable of 89 million messages per second, based on a benchmark released by Intel earlier this year. Intel product literature puts architecture maximums for the OPA adapter technology at 160 million messages per second.
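As a quick sanity check on those figures, the short program below works through the arithmetic: the jump from 150 million to 200 million messages per second is roughly a 33 percent increase, and at 200 million messages per second the adapter has a budget of about 5 nanoseconds per message. The two input rates come from the article; the derived numbers are simple arithmetic, not vendor specifications.

```c
/* Back-of-the-envelope check of the message-rate figures quoted above.
 * The 150M and 200M messages/s inputs come from the article; the
 * derived values are plain arithmetic, not vendor specifications. */
#include <stdio.h>

int main(void)
{
    const double prev_rate = 150e6;  /* previous generation, messages per second */
    const double new_rate  = 200e6;  /* ConnectX-5, messages per second */

    double increase_pct = (new_rate - prev_rate) / prev_rate * 100.0;
    double ns_per_msg   = 1e9 / new_rate;  /* time budget per message */

    printf("Message-rate increase: %.0f%%\n", increase_pct);              /* ~33% */
    printf("Per-message budget at 200M msg/s: %.1f ns\n", ns_per_msg);    /* 5 ns */
    return 0;
}
```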
ConnectX-5 also has some other new features, including support for PCIe 4.0 (expected next year) and an integrated PCIe switch for connecting multiple PCIe devices or SSDs to the network adapter. Notably, there are also capabilities for enabling different datacenter topologies. As one example of this flexibility, an organization can chain multiple adapters together, or connect them in a ring, to create a small cluster without using a switch.
Beyond HPC, there are additional acceleration engines aimed at cloud infrastructure. ConnectX-5 includes an embedded switch, so when multiple virtual machines or guest OSes run on a host, traffic between them can be routed within the NIC rather than having to travel out to an external switch and back. It also brings NVMe offloads to support NVMe over Fabrics, along with RDMA and other capabilities, according to Mellanox.
"This is the next logical step in Mellanox's roadmap," said Snell of the new Mellanox adapter. "They're moving everything to consistent 100 Gigabit capability whether you're on InfiniBand or Ethernet across these different networking cards, components and switches. Everything has to be able to connect at that high-bandwidth speed or else the data doesn't move across the system well enough. And if the data doesn't move across the system fast enough, then it doesn't matter how fast your processor is, it just sits there starved waiting for data."
Mellanox says it will start shipping ConnectX-5 in Q3 of this year.