Forgive me for I have syn-ed

Is what a successful Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacker might say after taking advantage of SYN-ACK vulnerabilities in the TCP/IP handshake (the method used to set up a connection before you can start using your favorite website or gaming site). Not just one or two sy(i)ns but an entire flood of them. Enter the SYN flood denial-of-service attack: the client never sends the final acknowledgment of the server's SYN-ACK, leaving half-open connections in its wake. The power to deny service to web resources is what the DoS attacker wields, bringing many a powerful company to its knees. PayPal, Bank of America, and many more have learned this truth the hard way.
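A toy simulation can make the mechanics concrete. This is not real packet crafting, just a sketch of a server's listen backlog (the size of 128 is an assumed default, not a universal value) filling up with half-open connections that never complete:

```python
def simulate_syn_flood(syn_count, backlog_size=128):
    """Toy model of a listen queue: each spoofed SYN earns a SYN-ACK and a
    half-open slot, but the final ACK never arrives, so the slot lingers."""
    half_open = []
    dropped = 0
    for _ in range(syn_count):
        if len(half_open) < backlog_size:
            half_open.append("SYN_RECEIVED")  # waiting for an ACK that never comes
        else:
            dropped += 1  # backlog full: further SYNs, legitimate or not, are refused
    return len(half_open), dropped

print(simulate_syn_flood(1000))  # (128, 872): queue saturated, newcomers turned away
```

Real TCP stacks age out half-open entries with timers and use defenses like SYN cookies, but the sketch captures why a flood of unanswered SYNs denies service.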

To say that the Internet is rife with vulnerabilities is stating the embarrassingly obvious. Take the system of digital signatures and certificate authorities, for example. Once a unit of software is digitally signed with a certificate issued by a well-known certificate authority, it is deemed completely trustworthy. But several stolen digital certificates later, malware signed with perfectly valid certificates has become a reality. Trust can only go so far.

As an aside, it is almost a miracle that so many of us readily flash our credit cards on websites, resting assured that the credit card provider will pick up the tab if cards get misused. If that confidence should turn out to be misplaced, I doubt any of us would shop online so freely. After all, there are plenty of men-in-the-middle who would be happy to intercept public keys in transit and substitute their own, comfortably reading or modifying any and all traffic passing through them.

Today it has become remarkably easy to be an attacker/hacker. Tools for launching attacks are readily available, enabling anyone to get started on a path of online power. Willing and unwilling accomplices are plentiful. When attackers are backed by the power of an entire nation state, the potential to inflict damage is simply gargantuan, going from isolated websites and web servers to wide cross sections of the Internet and more.

Consider the latest DDoS attack on the DNS provider Dyn, which overwhelmed its DNS servers with a flood of packets unleashed by a botnet army formed from Internet of Things (IoT) devices. Network World reports that it was a TCP SYN flood attack. Aimed at an Internet infrastructure provider, it brought a broad swath of the Internet to a standstill.

And this was aimed at just one Internet infrastructure provider, namely Dyn. Imagine a coordinated attack targeting a larger number of Internet infrastructure providers. That could perhaps slam the brakes on the entire worldwide Internet, not just slow down the US East Coast.

The DNS seems to have become one of the more significant weak links in the entire Internet system. After all, if the servers that resolve domain names to IP addresses become unavailable, it is not possible to go to the place you want to go. Making the Internet unusable. With features that readily enable both attack reflection (using DNS servers to send responses to a spoofed IP address, i.e., the victim) and attack amplification (eliciting responses far larger than the original request packet), the DNS appears to have become an unwitting accomplice in the DDoS attack. Add botnets to source the incoming DNS requests (with the Internet of Things as a ready supplier of vulnerable devices that lend themselves to botnets) and you have the makings of a truly exponential attack, a solid one-two punch. Sounds like a war of the worlds: Internet of Things vs. the classic Internet!
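To see why DNS lends itself to amplification, look at the size of a query. The sketch below hand-packs a minimal DNS query over raw bytes (the transaction ID 0x1234 and the choice of the ANY query type, historically favored for amplification, are illustrative assumptions); the request is tiny, while a response stuffed with records can be many times larger:

```python
import struct

def build_dns_query(name, qtype=255):  # 255 = ANY, which tends to elicit big responses
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, AN/NS/AR=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, zero terminator, then QTYPE and QCLASS=IN
    question = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question += struct.pack(">HH", qtype, 1)
    return header + question

query = build_dns_query("example.com")
print(len(query))  # 29 bytes: a spoofed-source request this small can trigger a
                   # response of thousands of bytes, aimed squarely at the victim
```

The amplification factor is simply response size over request size, and with the source address spoofed, the whole response lands on the victim.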

It may be useful to reflect on what this could potentially mean. With the overwhelming digital push and the all-around rush to the cloud, dependence on the Internet has been skyrocketing. Per the website Statista, in 2015 retail e-commerce sales worldwide amounted to 1.55 trillion USD (approximately 9% of the 2015 US GDP of 17.8 trillion USD), and e-retail revenues are projected to grow to 3.4 trillion USD in 2019. By 2017, 60% of all U.S. retail sales will involve the Internet in some way, according to Forrester Research.

So taking the Internet out of commission in some form could bring much of commerce to a screeching halt. The impact on the global economy would, of course, be stupendous. Dare I say it could perhaps even trigger the next recession, which always seems to be waiting in the wings (but that is another story). Such an attack would indeed be the "sin of sins," one that for many would be impossible to forget or forgive.


Digital march meets the WAN divide

Some of us may recall the unmistakable sounds of a modem connecting to the Internet. Once the handshake was successfully negotiated, there was perhaps even a thrill of having made it online. That was the world of the dial-up Internet connection. Exciting times back then. All of 40-50 kbit/s. Fast forward to the 25 Mbps and higher download speeds of today's "always on" broadband connections, and clearly we are in a different paradigm. Connecting to the Internet no longer seems as exciting; instead it is something mundane that can be taken for granted.

Speaking of connectivity and networks, enterprise WANs have come a long way. From dial-up to T1/T3 to ATM to Frame Relay to MPLS, there has been a steady progression of new technologies (though MPLS is said to be a service rather than a specific technology choice, and can have ATM, Frame Relay, or even Ethernet at its core). The transition went from circuit switching to packet switching. A trait of many of these WAN solutions was that they were very complex and called for specialized expertise. Some also ran into the millions of dollars per month, so not cheap. The introduction of MPLS was possibly a game changer. But there have not been many technological advances in WAN technology in the 15 or so years since MPLS came in. As an aside, this appears to reflect what some have observed to be an absence of fundamental tech advances per se over the last decade or two.

In the past, all was quiet and stable on the enterprise WAN. Users were either at the head office or at branch offices. Applications resided in the corporate data center. Most of the traffic was branch to head office through the enterprise WAN. Internet traffic would traverse the WAN to the data center (backhauled) before being handed off to the ISP and onward. MPLS met this use case very well. There was perhaps no need to make significant advancements to the WAN. Everyone was "happy" with the status quo.

Then things started to change, for lots of reasons. Marked by a rising use of software, the world began to go digital. More and more, that software came to be accessed from the cloud. Public and hybrid cloud became popular choices. There was an increasingly insatiable appetite for consuming video, boosting demands on bandwidth. VDI (Virtual Desktop Infrastructure) installs added to the clamor for higher connectivity. Then there were the remote users who don't work from branch offices. Not to forget the Internet of Things, with its billions of devices slowly but surely coming online and hankering for even more bandwidth. No wonder the enterprise WAN began to come under pressure. MPLS was not geared up for such an intensely interconnected world.

Enter SD-WANs (Software Defined WANs). They promise to meet the demand by virtualizing connectivity options and offering an abstraction layer on top. The underlying links could be either MPLS or the regular Internet. To optimize network reliability and availability, the software "magically" sends traffic through the appropriate link automatically, behind the scenes. There is even talk that SD-WAN could replace MPLS altogether and route traffic entirely through the Internet while meeting QoS and SLA guarantees, though many users would probably balk at doing away with MPLS. After all, who wants to take on too much risk? The future therefore seems to lie in a hybrid WAN architecture: part MPLS, part Internet.
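A toy sketch of that "magic": the controller tracks per-link health and steers each application class to the cheapest link that still meets its targets. The link names, stats, costs, and thresholds below are all invented for illustration, not drawn from any real SD-WAN product:

```python
# Hypothetical link state an SD-WAN controller might maintain from probes.
LINKS = {
    "mpls":     {"latency_ms": 20, "loss_pct": 0.0, "cost": 10},
    "internet": {"latency_ms": 35, "loss_pct": 0.5, "cost": 1},
}

def pick_link(max_latency_ms, max_loss_pct):
    """Choose the cheapest link whose current latency and loss meet
    the application's targets; None if no link qualifies."""
    candidates = [
        (stats["cost"], name)
        for name, stats in LINKS.items()
        if stats["latency_ms"] <= max_latency_ms and stats["loss_pct"] <= max_loss_pct
    ]
    return min(candidates)[1] if candidates else None

print(pick_link(100, 1.0))  # internet: good enough for bulk traffic, and cheaper
print(pick_link(25, 0.1))   # mpls: the only link meeting a tight voice-class SLA
```

This is the essence of the hybrid WAN argument: bulk traffic rides the cheap Internet link, while traffic with strict SLAs falls back to MPLS.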

SD-WANs therefore certainly look good from the enterprise standpoint: essentially more bandwidth and performance for significantly lower outlays. A no-brainer, one would say. But how about examining this scenario from the ISP end? Service providers are seeing traffic on their backbone networks rise tremendously. However, much of this traffic increase generates little or no revenue. The investments needed to improve the backbone network (parts of which are based on circuit-switching equipment circa the 1990s) are not matched by a corresponding rise in revenues.

Consequently, there appear to be two possibilities. Either the infrastructure investments are not made, in which case performance could start to slide. Or ISPs could start to raise prices for regular broadband Internet access (assuming there are no regulatory constraints). Any way you look at it, the model seems to start breaking down.

Perhaps SD-WAN is a temporary solution, a placeholder if you will. More fundamental innovations are probably required in WAN technology to meet the surging demand for bandwidth and service levels. Clearly a new equilibrium has to be found at the intersection of demand for bandwidth from enterprises (and consumers) and the supply of bandwidth from service providers. Assumptions of unlimited broadband and unlimited cloud may not be tenable. An all-digital world may have to wait for the network to catch up, the WAN divide may yet slow the digital march.


Data storage: Sink or swim

The weather is changing. So is data storage. What is the connection? Weather-inspired tech industry metaphors, for one. Starting with the cloud itself! And some more spawned by the accelerated pace of data growth. Consider a "deluge" or "flood" of big data. How about being "inundated" by data?

Of course, the sheer scale of data growth lends itself to the dramatization. Here are some oft-cited stats: "90% of the world's data has been generated over the last two years" is a quote from 2013. "Every day, we create 2.5 quintillion bytes of data" is another. An aside: it appears that Americans and the British have their own definitions of a quintillion, 10^18 in the US and 10^30 in Great Britain (per Dictionary.com). Unstructured data accounts for most of this growth: an analyst firm graph depicting storage usage over time showed 70 exabytes of the 2014 projection of 80 EB of storage coming from unstructured data (an exabyte is 10^18 bytes, which matches the US definition of a quintillion).

With all this action on the data growth front, something's gotta give when it comes to storing it. That something includes the legacy SAN and NAS solutions based on block and file storage, which are slowly but surely giving way to newer storage technologies. At this point it may be instructive to take a quick detour into the history of storage: DAS, SAN, and NAS. In the beginning there was DAS, directly attaching drives to the server. Got to attach the storage somewhere, after all. Then the storage moved away from the servers and onto the network. NAS was the first network storage option with remote file access, arriving during the 1980s. Shifting to block storage, Fibre Channel came in during the 1990s and iSCSI in the 2000s. Accelerating Ethernet speeds have since led to a decline in Fibre Channel adoption. The FC folks introduced Fibre Channel over Ethernet (FCoE), but that turned out to be somewhat of a non-starter; one reason appears to have been that there were as many FCoE versions as there were vendors. RAID arrays too are falling by the wayside: given the rate of data growth, the time to rebuild failed disks became unmanageable.

Today there are lots of terms at play in what can seem like a hyperactive, somewhat confusing market. We have hyperconverged, with tightly integrated compute and storage, and hyperscale, where compute and storage scale out independently and indefinitely in a cloud-like fashion. Indeed, hyperscale is what the web giants, the Googles, Amazons, and Facebooks, use. To add to the confusion, there is the similar-sounding term Server SAN, not to be mixed up with the traditional SAN: it represents software-defined storage on commodity servers, aggregating all the locally connected, directly attached storage (DAS). Going back to the future with the humble DAS.

But hyperscale storage is where things appear to be headed. Object storage, already deployed in the public cloud, is the technology underlying hyperscale storage. It is meant for unstructured data, which is clearly where the need is, though it is not suitable for structured, transactional data, the realm of block storage. Hence block storage will likely not go away anytime soon. Legacy NAS is quite another thing: object storage is chipping away at its share since it does a much better job with the millions and billions of files.

Toward reining in the runaway growth in data, object storage seems just what is needed. However, as with most things in life, there is a catch. For getting data in and out, object storage uses a RESTful API, yet there does not appear to be a widely accepted industry standard specification for this API. As the 800-pound gorilla in the room, Amazon gets to position the S3 RESTful API as the de facto standard. The Cloud Data Management Interface (CDMI) is the actual industry standard, created by the SNIA. This should have been the standard everyone aligned with, but it seems to have few takers. OpenStack Swift appears to have had more luck with its RESTful API. So there seem to be at least three different standards, and vendors get to pick and choose. The question is what happens when Amazon decides to change the S3 API, which they are perfectly entitled to do given that it belongs to them. Presumably there will be a scramble among vendors to adapt and comply. Amazon seems to have the industry on a leash. Since they were pretty much the first cloud-based infrastructure services provider, they could say with some justification that they invented the space.
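Given the three-way standards split, one defensive tactic (a common design pattern, not something any vendor mandates) is to code against a thin internal interface and confine the S3-, Swift-, or CDMI-specific REST calls to swappable adapters. The classes below are hypothetical stand-ins, not real client libraries:

```python
class ObjectStore:
    """Minimal internal interface an application codes against."""
    def put(self, key, data):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class InMemoryStore(ObjectStore):
    """Test double. A production adapter subclass would translate put/get
    into S3-style or Swift-style REST requests; if the vendor's API
    changes, only that adapter changes."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

store = InMemoryStore()
store.put("backups/2016-10.tar", b"archive bytes")
print(store.get("backups/2016-10.tar"))
```

The point is insulation: when the 800-pound gorilla revises its API, the scramble is confined to one adapter rather than every caller.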

Perhaps users need to step up and use the power of the purse to straighten things out. It seems unlikely we can solve this via regulatory fiat! When the data torrents come rushing in and the time comes to start swimmin', hopefully Amazon is not the only game in town to stop storage systems from sinking like a stone.


Routing all over the world

All over the world, everybody got the word. This could well describe computer networks across the globe getting their routing updates from each other. Making it possible for data to flow across networks is where the world of routing comes in. There are several types of routing so you do get to pick and choose what would work best for you.

You have static routes that are manually configured in the routers, typically in smaller environments. And dynamic routes, where routers "magically" discover routes for themselves: figuring out who their neighbors are, determining best paths, sending traffic along one hop at a time, and automatically making corrections when networks go down. Interior gateway protocols like RIP (Routing Information Protocol, not what you might think) and OSPF (Open Shortest Path First) reside within so-called autonomous systems (each managed by a single administrative entity like an ISP, a CDN, or a large enterprise) and are what most typical enterprises are probably familiar with. Classless routing allows for subnet masks (the mask denotes the network portion of the IP address) other than the class default, whereas with classful routing you are pretty much stuck with the class default subnet mask. As an aside, there is an interesting play on words with the classless type lording it over the classful variety.
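Python's ipaddress module makes the classful-vs-classless difference concrete. Under classful rules, any 10.x.x.x address is Class A and drags along the default /8 mask; classless (CIDR) routing lets you pick any prefix length, such as a /24:

```python
import ipaddress

addr = "10.1.2.0"

# Classful: the first octet alone dictates the mask. 10.x.x.x is Class A,
# so you are stuck with /8 (255.0.0.0) and one enormous network.
classful = ipaddress.ip_network(addr + "/8", strict=False)

# Classless (CIDR): any prefix length you like, e.g. a 256-address /24 subnet.
classless = ipaddress.ip_network(addr + "/24", strict=False)

print(classful.netmask, classful.num_addresses)    # 255.0.0.0 16777216
print(classless.netmask, classless.num_addresses)  # 255.255.255.0 256
```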

Distance vector routing protocols select the best routing path based on distance (number of hops, in RIP's case) and direction (which interface the traffic exits). Link state protocols, on the other hand, choose their paths by determining the status of each link and identifying the path with the lowest metric, armed with an understanding of the full topology. EIGRP was a Cisco proprietary protocol until Cisco opened it up, well, sort of opened it up. But EIGRP does not appear to be the network engineer's best friend despite claims that it captures the best of the distance vector and link state worlds (unsurprisingly, CCNA class material does tend to go gaga over EIGRP)! That role seems to have been usurped by OSPF. Then there is BGP, which lives outside the ASes, a variation on the distance vector type.
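A link-state protocol's best-path computation is, at heart, a shortest-path search over link metrics. Here is a minimal Dijkstra sketch, a simplification of the kind of SPF calculation OSPF performs; the topology and costs are made up:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over per-link costs: find the lowest-metric path,
    the way a link-state router does with its full topology map."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + link_cost, nbr, path + [nbr]))
    return None

# Made-up four-router topology with OSPF-style costs (lower is better).
topology = {
    "A": {"B": 10, "C": 1},
    "B": {"D": 1},
    "C": {"D": 5},
    "D": {},
}
print(shortest_path(topology, "A", "D"))  # (6, ['A', 'C', 'D'])
```

Note the two-hop path through C wins on total metric even though A-B-D has a lower-cost final link; a pure hop-count protocol like RIP would see the two paths as equal.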

Ultimately the traffic has got to get to the right place, whether within a campus network or an ISP, or across the broader Internet. Loops, where packets spin round and round in circles, are to be avoided. Earlier versions of RIP were kind of "loopy," which was fixed by updates like maximum hop counts, route poisoning, and hold-down timers. OSPF moved routing up several notches with its triggered updates, multiple tables, and more complex algorithms.

However, the prize for the most interesting routing protocol should probably go to BGP, or the Border Gateway Protocol. Called the glue of the Internet, it helps connect the routers of the largest service providers that form the core of the Internet.

BGP advertisements contain the destination network prefix and the path to get to it. Multiple ASes daisy-chain to each other, each appending its own AS number to the announcement, so that the advertisement becomes a cumulative sequence of ASes leading to the destination network. Essentially you have an announcement that says: here I am, and here are the ASes you need to traverse in order to find me. Traffic flows in the opposite direction to route announcements.
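The daisy-chaining can be sketched in a few lines: each AS prepends its number as it re-advertises the route, and a greatly simplified best-path rule prefers the shortest resulting AS path. The AS numbers and prefix below are illustrative values from documentation ranges, and real BGP best-path selection weighs many more attributes:

```python
def propagate(announcement, as_number):
    """An AS re-advertising a route prepends its own ASN to the AS path."""
    prefix, as_path = announcement
    return (prefix, [as_number] + as_path)

origin = ("203.0.113.0/24", [64500])      # origin AS announces its prefix
via_transit = propagate(origin, 64510)    # a transit AS prepends itself
via_peer = propagate(via_transit, 64520)  # and so on, hop by hop

print(via_peer)  # ('203.0.113.0/24', [64520, 64510, 64500])

# Toy best-path selection: prefer the shortest AS path to the same prefix.
routes = [via_peer, ("203.0.113.0/24", [64530, 64500])]
best = min(routes, key=lambda r: len(r[1]))
print(best[1])  # [64530, 64500]
```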

Though BGP appears to have some well-known flaws that could lead to a network or set of networks getting hijacked. Sounds dramatic, doesn't it? Someone could inadvertently or maliciously announce that they contain network prefixes that in fact lie elsewhere. In certain circumstances, this spurious announcement could propagate, and traffic could start to flow to the wrong network. For example, someone could conceivably claim to contain prefixes that actually lie within the US government's networks; if the announcement is crafted right, it is entirely possible that traffic intended for the US government could flow to the bad actor.
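One reason such announcements succeed: routers forward on the longest (most specific) matching prefix, so a bogus more-specific route outranks the legitimate broader one for every address it covers. A sketch, with invented prefixes and AS labels:

```python
import ipaddress

# Toy routing table: the /24 announced by the hijacker is more specific
# than the victim's legitimate /16, so longest-prefix match prefers it.
routing_table = {
    ipaddress.ip_network("198.51.0.0/16"): "legitimate AS",
    ipaddress.ip_network("198.51.100.0/24"): "hijacker AS",  # more specific!
}

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

print(next_hop("198.51.100.7"))  # hijacker AS: the bogus /24 wins
print(next_hop("198.51.200.7"))  # legitimate AS: only the /16 matches
```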

Corrections for this flaw are being attempted through route filtering and BGP variants like BGPsec, with digitally signed prefixes and paths. Yet getting everyone to implement BGPsec seems like a tall order. As a Cisco engineer reportedly said, "I can either route them or count them, what do you want me to do?" For now, it appears that snooping on the Internet, big-time stuff, is well within the realm of possibility. Hijacked traffic could disappear into a black hole, or could be tapped and nudged back toward the correct destination, leaving practically no trace of tampering. Exploiting BGP could be a party for some folks (stealing bitcoins through a BGP hijack has been reported). Everybody got the word on routes zipping around. Somebody somewhere could be getting snoopy with BGP.


Greed for speed is right, it works

Greed is good…to quote Gordon Gekko. A data center manager might rephrase this as "speed is good." Faster and faster is the way to go in the world of networks and storage. More and more speed, a.k.a. bandwidth (the capacity of the data pipe), while reducing latency (the time taken by data to travel from source to destination) and CPU utilization (the extent to which the CPU is tied up managing the logistics of moving data). All key metrics for the folks running data centers, especially the hyperscale cloud data centers. As an aside (albeit unrelated), the distinction between bandwidth and latency does occasionally create confusion among consumers, something cable and Internet providers seem to understand when they offer higher and higher bandwidths to those who may not necessarily need them.

In the beginning, we had separate computer networks (with Ethernet as a popular choice) and storage networks (where Fibre Channel was frequently encountered). Two independent worlds with their own switches and cables. Then Ethernet began to take off. It has come a long way from the ~3 Mbps bandwidth days when it was first introduced in the '80s. 1 Gbps is the current mainstay Ethernet bandwidth, and 10 Gbps Ethernet is expanding its footprint. Today, 25 Gbps and 50 Gbps Ethernet standards are on their way, with large vendors working hard to accelerate the schedule (picking up the baton from the standards bodies). iSCSI storage came in and offset the Fibre Channel disadvantages of higher cost and the requirement for an additional skills base by running over the familiar, standard Ethernet fabric.

Bandwidth in Gbps has been the primary metric of comparison. But latency also matters, especially in areas like high performance computing, clustering, and storage. An alternative to Ethernet, InfiniBand became the go-to platform for delivering low latency (along with high bandwidth), based on the Remote Direct Memory Access (RDMA) architecture that largely takes the load off the CPU when it comes to interconnect performance, dropping latency to sub-microsecond levels. However, InfiniBand requires its own parallel IB infrastructure to run on.

With increasing transaction volumes and the advent of big data, the need for ever higher performance began to manifest itself in corporate data centers, preferably without the overhead of managing redundant infrastructures. Which meant that the data center needed to converge on a single interconnect technology. Ethernet became the logical choice, as it was everywhere and well understood.

Thus the Fibre Channel group came up with Fibre Channel over Ethernet (FCoE), though FCoE required lossless transmission with technologies like data center bridging. Running on existing Ethernet gear without any changes, iSCSI became more and more popular. By riding the accelerating Ethernet train, iSCSI was able to offer ever-increasing speeds. Well-engineered iSCSI networks could offer really low latencies as well.

RDMA technologies also began to converge on Ethernet. iWARP offered RDMA over the standard TCP/IP stack, with just the adapter needing to be changed. RDMA over Converged Ethernet (RoCE) got into the mix, running on Ethernet but leaning on InfiniBand for the transport layer and requiring lossless Ethernet and specialized switches. There does seem to be a parallel of sorts between FCoE and RoCE on the one hand and iSCSI and iWARP on the other.

On the face of it, iWARP looks like a relatively easier solution to deploy compared to RoCE, lighter on the budget, yet still offering the RDMA advantages. Not having to change the existing network seems like a big plus. The challenge with RDMA, however, is that it does not use the sockets API. Instead it relies on a "verbs" API. Applications may need to be rewritten to take full advantage of the power and performance RDMA brings to bear, though embedding the verbs within libraries can help avoid some of the rewrite.

RDMA-based storage is beginning to show up on the radar. After all, minimizing latency is critical to storage, and nothing beats RDMA when it comes to low latency. An offshoot of iSCSI, iSCSI Extensions for RDMA (iSER) is an interface that uses Ethernet to carry block-level data directly between server memory and storage. Efficiency is therefore a big plus, and using standard Ethernet switches keeps costs down. There is the added advantage that if iSER is not supported by the initiator or the target, data transfer falls back to traditional iSCSI. Seems like we should be hearing more and more about iSER in the future.

Looking ahead, we could be seeing a world where the "3 i's" play a growing role: iSCSI, iWARP, and iSER. Converged, superfast networks and storage with practically zero latency. Speed, speed all the way. Gekko would be pleased.


Ring, ring, the happiest sound of them all

The happy sound of ringing phones has been part of everyday life for a while now. Hard to imagine what things must have been like before phones came along. 1876 was the year Alexander Graham Bell got his patent for the first phone. Establishing the public switched telephone network (PSTN) was indeed a huge step forward in bringing the world closer together. Say what you will about the PSTN, but it is essentially reliable (if done right). In case of a disaster, the PSTN is probably the only option left standing after the power goes out. Phones have certainly come a long way from the rotary phones of yesteryear (which carry a strong whiff of nostalgia today). It doesn't seem that long ago that we had operator-assisted calling, with operators acting as "human switches," manually completing circuits. Automated switching systems followed. From an analog system (POTS, Plain Old Telephone Service), the PSTN went digital with the advent of ISDN. And a new era was born.

Enter Voice over Internet Protocol (VoIP), with protocols like the Session Initiation Protocol (SIP) for signaling, plus others like SDP for describing the communication session, RTP for real-time media delivery, RTCP for call stats, and so on. VoIP introduced a whole slew of new features; it was certainly transformational. Wideband audio stepped up audio quality. No longer just a simple phone, the handset became a computer masquerading as a phone. Though interoperability issues did rear their head: different vendors' VoIP platforms didn't readily talk to each other, with the PSTN serving as the intermediary. Multiple codec standards require transcoding to stay in sync (a codec, short for coder-decoder, converts audio signals into a compressed digital format for transmission and back into an uncompressed format for playback).

Today the PSTN appears to be transitioning to an all-IP network. How beneficial this change will be for customers is not really clear. Carriers, on the other hand, stand to make several gains from the shift: cost reduction, easing of tech staffing challenges, and so on. Then there are the regulatory advantages carriers may be seeking, loosening regulations that have thus far ensured that all Americans have access to telephone communications regardless of income or where they live.

At the end of the day, this is about the unfettered ability to communicate. Reducing the dependence on communication providers would certainly help the cause of unrestricted communication. After all, the Internet is a public resource; anyone should be able to use it in order to communicate. The app model seems to provide a solution to the need for open communication: anyone can just buy/download an app and get going.

Web Real-Time Communication (WebRTC) appears to promise this very benefit. Browser-based calls with no plugins or software to install. Peer-to-peer communication. (It would be perfect but for those pesky Network Address Translators (NATs) and firewalls; one needs to deal with specialized protocols like STUN and TURN to overcome the NAT hurdle.) An open source technology stack that anyone can utilize. Leveraging the WebRTC APIs, one can build apps that deliver conferencing, voice/video calling, data transmission, and more. Use of SRTP (in lieu of plain RTP as in VoIP) ensures secure, encrypted communication; snooping on VoIP calls is easier because basic RTP is not secure.
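As a flavor of that NAT-traversal machinery, here is a sketch that hand-packs a minimal STUN Binding Request header as defined in RFC 5389. A real client would send these 20 bytes over UDP to a STUN server and parse its public, NAT-mapped address out of the response; the sketch stops at building the packet:

```python
import os
import struct

def stun_binding_request():
    """Build a minimal STUN Binding Request header (RFC 5389): the message
    a WebRTC client sends to discover the public address and port its
    NAT has mapped for it."""
    msg_type = 0x0001          # Binding Request
    msg_length = 0             # no attributes in this minimal sketch
    magic_cookie = 0x2112A442  # fixed value identifying RFC 5389 STUN
    transaction_id = os.urandom(12)  # random 96-bit transaction ID
    return struct.pack(">HHI", msg_type, msg_length, magic_cookie) + transaction_id

packet = stun_binding_request()
print(len(packet))  # 20: header only, ready to send over UDP to a STUN server
```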

Got to thank Google for making WebRTC available.

But this may not work very well if carriers, sensing a loss of their phone/voice services, increase rates for Internet access. Which is where the recent decision by the FCC to regulate the Internet as a utility should come in handy. Nationalizing Internet service does not seem that farfetched either. Like Medicare, Social Security… If the Internet is to fulfill its role as a catalyst for innovation, everyone needs to have access to it, not just those who can afford it.

Ultimately communication should be a commodity, freely available to all. Much like web access, all that should be needed is an Internet connection. Commoditizing communication seems to have been Google's strategy, clearly a brilliant one at that. Basically we are talking about a bring-your-own, high-quality communication system (BYOCS?). Making the technology stack available to the large community of web developers instead of relying on the smaller number of telecom and VoIP engineers should really open the innovation tap, adding new and exciting ways to communicate; there appear to be close to 3 million people on LinkedIn with JavaScript in their profiles. We are entering the world of real-time communication systems. That the phone as we know it will stay in its current form factor seems unlikely. Instead it will likely be a bunch of apps that could run on any device, enabling anytime communication from anywhere. How about making calls from the refrigerator? Hopefully app developers will not drop the ringing tone that signals an incoming call. Retaining that happy, familiar sound should provide some continuity and reassurance as we march into the new communications order.


Slow ride (on the IPv6 train), take it easy

Imagine riding a train where no one knows when it will reach its destination. Or if it will ever get there. Or, worse still, if it could get re-routed along the way to a new destination. As if matters weren't bad enough, this is the only train to get you where you are going. So you have no option but to take your chances. And ride along. Sounds like a bad dream. Nonetheless, that is what the move to Internet Protocol version 6 (IPv6) has often seemed like.

Proposed in 1995 and adopted in 1999, IPv6 has seen adoption move at a pace that makes glaciers look speedy (a bit of irrelevant trivia here: the fastest glacier is apparently in Greenland, moving over 40 meters in a day). The Internet (TCP/IP) was conceived with the idea of reliable, error-free, efficient end-to-end communication between uniquely identifiable end nodes. That required each end point to have a unique address. This was all very well during the early Arpanet days. IPv4 was developed for the Arpanet in 1978 with a limit of 4 billion addresses (2^32). Trouble is, the 4 billion number quickly turned out to be inadequate. Fast forward to the 2000s, and shortages began to show up. In 2011, the IANA free pool hit zero. Four of the five Regional Internet Registries have run out of addresses; ARIN, which covers the US and Canada, has been the latest of the RIRs to exhaust its "IPv4 supplies." On the other hand, with its 128-bit addresses, IPv6 should give us more than enough to go by.
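The arithmetic behind the shortage is stark:

```python
# IPv4's 32-bit address space vs IPv6's 128 bits.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)                # 4294967296, about 4.3 billion addresses
print(ipv6_space // ipv4_space)  # 2**96 times larger: each extra bit doubles the space
```

Four billion sounded inexhaustible in 1978; it did not survive contact with a planet of always-on devices.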

Ideally, by now IPv6 should have been waiting in the wings, ready to step in and take over. But that has not come to pass. Instead, IPv6 adoption crossed eight percent in August this year, per Google. That leaves about 92 percent of the Internet still on IPv4, which means demand for IPv4 is not going away any time soon. With this demand-supply imbalance, we should see prices for IPv4 addresses in the "secondary market" on the rise. Folks holding on to IPv4 blocks likely have a nicely appreciating asset on their hands, at least in the short term. Should make CFOs happy!

Unfortunately, IPv6 was not designed to be backward compatible, which means that versions 4 and 6 cannot directly talk to each other without a translator in the middle. If that weren't the case, perhaps IPv6 would have been phased in much sooner. For end-to-end IPv6 communication, both end points as well as the intervening network all have to support IPv6, a requirement that made the transition much harder.

Consequently, an IPv4 future is what lies ahead for the next few years. There has been quite a mismatch between the rates of IPv4 depletion and IPv6 transition. How has the world been coping? Enter Network Address Translation (NAT) to the rescue (among other workarounds). We started off with NATs at the customer premises. These translated a public IPv4 address to private addresses inside the customer network. That solved the problem to some extent, as not everyone needed a public IPv4 address, at least initially. Yet it was not enough to meet surging demand. So ISPs introduced the next layer of NAT at their end, the Carrier Grade NAT or CGN. Essentially, it assigns private addresses to customers and maps them to public addresses at the upstream ISP end. Two levels of translation, one on top of the other. ISPs effectively began sharing a single IPv4 address among multiple customers (there could be hundreds or even thousands of customers behind a single address). Looking at a packet header, there is no way to identify the user behind it; only the ISP running the CGN has the information to map packets to individual users. Some law enforcement implications here, for sure.
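A toy model of that second translation layer: the CGN maps many customers' private flows onto one public address, distinguishing them only by the port it assigns per flow. The addresses and port range here are illustrative, though 100.64.0.0/10 really is the shared address space reserved for CGN deployments:

```python
class CarrierGradeNAT:
    """Toy CGN: many customers behind one public IPv4 address."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 10000
        self.table = {}  # (private_ip, private_port) -> assigned public port

    def translate(self, private_ip, private_port):
        """Map a private flow to the shared public address plus a unique port."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])

cgn = CarrierGradeNAT("192.0.2.1")
print(cgn.translate("100.64.0.5", 5000))   # ('192.0.2.1', 10000)
print(cgn.translate("100.64.0.99", 5000))  # ('192.0.2.1', 10001)
# Two different customers now appear as the same public IP; only the
# CGN's table can map a packet back to the user behind it.
```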

Apparently the CGN solution has not been a big problem for regular web use: email, web browsing, and so on. Where CGNs have significantly impaired performance is in things like VoIP, video streaming, and online video gaming (all very popular, growth-oriented use cases). What is the benefit to the ISP? One, they can postpone replacing thousands of boxes across their network. Two, extending the life of IPv4 helps realize gains from their own IPv4 stores (estimates run between $10 and $15 per IP address, and those numbers should climb). As an aside, trading in IPv4 addresses could become quite profitable. Three, vendors of the expensive CGN boxes certainly have an incentive to keep pushing more of them. A sobering possibility is that the investment in CGN itself acts as a further disincentive to move to IPv6. Indeed, some ISPs might be tempted to just keep adding CGNs and put off IPv6 for a long time.

All this has some parallels with the Y2K shift. Applications needed to be changed to support the four-digit date field, and investment was required. Of course, there was a hard, impossible-to-avoid deadline: like it or not, the year 2000 was going to arrive! No workarounds were possible. Hence the Y2K fixes were implemented in a timely manner, and the year 2000 went off without a hitch. In the case of IPv6, there is no such hard deadline. No pressing need to make the switch. If it can wait, then it will wait. Running out of addresses should have provided the urgency, but that was neutralized by NAT boxes, among other things. There has even been the suggestion that the IPv6 shift could get derailed and converted instead into a CGN transition, though that does seem a remote prospect.

Meanwhile the IPv6 train continues to chug along albeit at a slightly faster pace now. Destination still not in sight. As a virtue, patience for the passengers is strongly recommended. Real slow ride after all. Might as well sit back, relax, take it easy, and enjoy it while it lasts.