Greed for speed is right, it works

Greed is good… to quote Gordon Gekko. A data center manager might rephrase this as “speed is good.” Faster and faster is the way to go in the world of networks and storage: more and more bandwidth (the capacity of the data pipe), lower latency (the time data takes to travel from source to destination), and lower CPU utilization (the extent to which the CPU is tied up managing the logistics of moving data). These are all key metrics for the folks running data centers, especially the hyperscale cloud data centers. As an aside, the distinction between bandwidth and latency does occasionally confuse consumers, something that cable and Internet providers seem to understand when they sell higher and higher bandwidths to those who may not necessarily need them.
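The distinction is easy to make concrete: the time to move a chunk of data is roughly the latency plus the size divided by the bandwidth. A quick back-of-the-envelope sketch (the link numbers are purely illustrative assumptions, not measurements of any real network):

```python
def transfer_time(size_bytes, bandwidth_bps, latency_s):
    """Rough one-way transfer time: latency plus serialization time."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# A fat but distant pipe vs. a thinner but nearby one (illustrative numbers).
big_pipe = transfer_time(1500, 10e9, 50e-3)      # 10 Gbps link, 50 ms latency
small_pipe = transfer_time(1500, 100e6, 0.5e-3)  # 100 Mbps link, 0.5 ms latency

print(big_pipe, small_pipe)
```

For a single 1500-byte packet, the low-latency link wins handily despite having 100x less bandwidth; only for large bulk transfers does the fat pipe pull ahead. That is why bandwidth alone is not the whole story.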

In the beginning, we had separate computer networks (with Ethernet as a popular choice) and storage networks (where Fibre Channel was frequently encountered). Two independent worlds with their own switches and cables. Then Ethernet began to take off. It has come a long way from the ~3 Mbps of its experimental beginnings in the 1970s. 1 Gbps is the current mainstay Ethernet bandwidth, and 10 Gbps Ethernet is expanding its footprint. Today, 25 Gbps and 50 Gbps Ethernet standards are on their way, with large vendors working hard to accelerate the schedule (picking up the baton from standards bodies). iSCSI storage then arrived and sidestepped the Fibre Channel disadvantages, higher cost and the need for an additional skills base, by running over the familiar, standard Ethernet fabric.

Bandwidth in Gbps has been the primary metric of comparison. But latency also matters, especially in areas like high performance computing, clustering, and storage. An alternative to Ethernet, InfiniBand became the go-to platform for delivering low latency (along with high bandwidth), based on the Remote Direct Memory Access (RDMA) architecture, which pretty much takes the load off the CPU when it comes to interconnect performance, dropping latency down to sub-microsecond levels. However, InfiniBand requires its own parallel infrastructure of adapters, switches, and cables to run on.

With increasing transaction volumes and the advent of big data, the need for higher and higher performance began to manifest itself in corporate data centers, and nobody wanted the overhead of managing redundant infrastructures. That meant the data center needed to converge on a single interconnect technology. Ethernet became the logical choice: it was everywhere, and many people understood it very well.

Thus the Fibre Channel camp came up with Fibre Channel over Ethernet (FCoE). FCoE, however, required lossless transmission, which meant enhancements like Data Center Bridging. iSCSI, which ran on existing Ethernet gear without any changes, became more and more popular. By riding the accelerating Ethernet train, iSCSI was able to offer ever-increasing speeds, and well-engineered iSCSI networks could offer really low latencies as well.

RDMA technologies also began to converge on Ethernet. iWARP offered RDMA over the standard TCP/IP stack, with just the adapter needing to be changed. RDMA over Converged Ethernet (RoCE) got into the mix, running on Ethernet but leaning on InfiniBand for the transport layer and requiring lossless Ethernet and specialized switches. There does seem to be a parallel of sorts between FCoE and RoCE on the one hand and iSCSI and iWARP on the other.

On the face of it, iWARP looks like an easier solution to deploy than RoCE, lighter on the budget, yet still offering the RDMA advantages. Not having to change the existing network seems like a big plus. The challenge with RDMA, however, is that it does not use the familiar sockets API; it relies instead on a “verbs” API. Applications may need to be rewritten to take full advantage of the power and performance that RDMA brings to bear, though embedding the verbs within libraries can help avoid some of the application rewrite.
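The library point can be made concrete with a sketch. If applications are written against a neutral transport interface rather than against sockets directly, a verbs-backed implementation can be slotted in underneath without rewriting the application. Everything below is hypothetical, a pattern illustration rather than a real verbs binding; the class and method names are invented:

```python
# Hypothetical sketch: hide the transport behind an interface so the
# application need not care whether sockets or RDMA verbs sit underneath.

class Transport:
    def send(self, buf: bytes) -> int:
        raise NotImplementedError

class SocketTransport(Transport):
    """Sockets-style path: data is handed to the kernel's network stack."""
    def __init__(self):
        self.wire = bytearray()   # stand-in for a connected TCP socket
    def send(self, buf: bytes) -> int:
        self.wire += buf          # think sock.send(buf)
        return len(buf)

class RdmaTransport(Transport):
    """Verbs-style path: buffers are registered once, work requests are
    posted, and completions are polled (modeled here with plain lists)."""
    def __init__(self):
        self.registered = {}      # stand-in for memory-registration bookkeeping
        self.completions = []     # stand-in for a completion queue
    def send(self, buf: bytes) -> int:
        mr = self.registered.setdefault(id(buf), buf)  # "register" the buffer
        self.completions.append(len(mr))               # post the work request
        return self.completions.pop()                  # poll the completion

def app_send(transport: Transport, payload: bytes) -> int:
    # The application is written once, against the interface only.
    return transport.send(payload)
```

An application calling `app_send` works unchanged whichever transport is plugged in; real middleware takes a similar approach by supporting both sockets and verbs back ends behind one API.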

RDMA-based storage is beginning to show up on the radar. After all, minimizing latency is critical to storage, and nothing beats RDMA when it comes to low latency. An offshoot of iSCSI, iSCSI Extensions for RDMA (iSER) uses an RDMA-capable Ethernet fabric to carry block-level data directly between server memory and storage, so efficiency is a big plus. Using standard Ethernet switches, it keeps costs down. And it comes with a safety net: if iSER is not supported by the initiator or by the target, data transfer falls back to traditional iSCSI. It seems we should be hearing more and more about iSER in the future.
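That fallback behavior amounts to a simple negotiation: use iSER only when both ends support RDMA, otherwise drop to plain iSCSI over TCP. A minimal sketch of the logic, with hypothetical capability names chosen purely for illustration:

```python
def pick_transport(initiator_caps: set, target_caps: set) -> str:
    """Hypothetical sketch of iSER fallback: both the initiator and the
    target must support RDMA; otherwise use traditional iSCSI over TCP."""
    if "iser" in initiator_caps and "iser" in target_caps:
        return "iser"
    return "iscsi-tcp"

print(pick_transport({"iser", "iscsi-tcp"}, {"iser"}))       # → iser
print(pick_transport({"iser", "iscsi-tcp"}, {"iscsi-tcp"}))  # → iscsi-tcp
```

The graceful degradation is the selling point: an iSER-capable host can be dropped into an existing iSCSI environment without breaking anything.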

Looking ahead, we could be seeing a world where the “3 i’s” play a growing role: iSCSI, iWARP, and iSER. Converged, superfast networks and storage with practically zero latency. Speed, speed all the way. Gekko would be pleased.

