
Routing all over the world

All over the world, everybody got the word. This could well describe computer networks across the globe getting their routing updates from each other. Making it possible for data to flow across networks is where the world of routing comes in. There are several types of routing, so you get to pick and choose what works best for you.

You have static routes that are manually configured in the routers, typically in smaller environments. And dynamic routes where routers “magically” discover routes for themselves, figuring out who their neighbors are, determining best paths, sending traffic along one hop at a time, and automatically making changes or corrections when networks go down. Interior gateway protocols like RIP (stands for Routing Information Protocol, not what you might think) and OSPF (Open Shortest Path First) that reside within so-called autonomous systems (each managed by a single administrative entity like an ISP, a CDN, or a large enterprise) are what most typical enterprises are probably familiar with. Classless routing allows for subnet masks (the mask denotes the network portion of the IP address) other than the class default, whereas with classful routing you are pretty much stuck with the class default subnet mask. As an aside, interesting play on words there with the classless type lording it over the classful variety.
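For the more hands-on reader, here is a minimal Python sketch (using the standard ipaddress module) of what classless addressing buys you: the prefix length is whatever the design calls for, not whatever the old class rules dictate. The addresses are just examples.

```python
import ipaddress

# Classful rules would force 192.168.10.0 into a /24 (the class C default mask).
classful = ipaddress.ip_network("192.168.10.0/24")

# Classless (CIDR) routing lets the mask be anything that fits the design,
# e.g. splitting that same block into four /26 subnets.
for subnet in classful.subnets(new_prefix=26):
    print(subnet, "-", subnet.num_addresses, "addresses")
# 192.168.10.0/26 - 64 addresses
# 192.168.10.64/26 - 64 addresses ... and so on
```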

Distance vector routing protocols select the best routing path based on distance (number of hops in RIP) and direction (which interface the traffic goes out). On the other hand, link state protocols choose their paths by determining the status of each link and identifying the path with the lowest metric, with an understanding of the full topology. EIGRP was a Cisco proprietary protocol until Cisco opened it up, well, sort of opened it. But EIGRP does not appear to be the network engineer’s best friend despite claims of how it captures the best of the distance vector and link state worlds (unsurprisingly, CCNA class material does tend to go gaga over EIGRP)! That role seems to have been usurped by OSPF. Then there is BGP, which lives outside the ASes: a path vector protocol, essentially a variation on the distance vector type.
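To make the distance vector idea concrete, here is a toy sketch in Python, not a real RIP implementation, of a router merging a neighbor's advertised hop counts into its own table. The network names and costs are made up.

```python
# Toy distance-vector update (Bellman-Ford style), hop count as the metric.
# A router only knows what its neighbors advertise, not the whole topology.
INFINITY = 16  # RIP treats 16 hops as "unreachable"

def dv_update(my_table, neighbor_table, cost_to_neighbor=1):
    """Merge a neighbor's advertised routes into our routing table."""
    changed = False
    for dest, hops in neighbor_table.items():
        new_cost = min(hops + cost_to_neighbor, INFINITY)
        if new_cost < my_table.get(dest, INFINITY):
            my_table[dest] = new_cost
            changed = True
    return changed

router_a = {"10.0.1.0/24": 0}                     # directly connected
router_b = {"10.0.2.0/24": 0, "10.0.3.0/24": 1}   # what the neighbor advertises
dv_update(router_a, router_b)
print(router_a)  # {'10.0.1.0/24': 0, '10.0.2.0/24': 1, '10.0.3.0/24': 2}
```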

Ultimately the traffic has got to get to the right place. Either within a campus network or ISP. Or across the broader Internet. Loops where packets spin round and round in circles are to be avoided. Earlier versions of RIP were kind of “loopy,” which was addressed by mechanisms like maximum hop counts, route poisoning, and hold-down timers. OSPF moved routing up several notches with its triggered updates, multiple tables, and more sophisticated algorithms.

However the prize for the most interesting routing protocol should probably go to BGP or the Border Gateway Protocol. Called the glue of the Internet, it helps connect the routers of the largest service providers that form the core of the Internet.

BGP advertisements contain the destination network prefix and the path to get to it. Multiple ASes sort of daisy chain to each other, appending their individual AS number to the announcement, so that the advertisement becomes a cumulative sequence of ASes that lead to the destination network. So essentially you have an announcement that says here I am and here are the ASes you need to traverse in order to find me. Traffic flows in the opposite direction to route announcements.
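A toy sketch of that daisy chain, with made-up AS numbers and a documentation prefix, purely to illustrate how the path accumulates as the announcement travels:

```python
# Toy sketch of how an AS path accumulates as a BGP announcement propagates.
# Each AS prepends its own number before passing the advertisement on.

def propagate(announcement, asn):
    """Return the announcement as this AS would re-advertise it."""
    return {
        "prefix": announcement["prefix"],
        "as_path": [asn] + announcement["as_path"],
    }

# AS 64500 originates a prefix, then it passes through AS 64510 and AS 64520.
ann = {"prefix": "198.51.100.0/24", "as_path": [64500]}
ann = propagate(ann, 64510)
ann = propagate(ann, 64520)
print(ann)  # {'prefix': '198.51.100.0/24', 'as_path': [64520, 64510, 64500]}
# A router receiving this sends traffic for 198.51.100.0/24 toward 64520 first,
# i.e. in the opposite direction to the announcement's travel.
```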

Though BGP appears to have some well-known flaws that could lead to a network or set of networks getting hijacked. Sounds dramatic, doesn’t it? Someone could inadvertently or maliciously announce that they contain network prefixes that in fact lie elsewhere. In certain circumstances, this spurious announcement could propagate and traffic could start to flow to the wrong network. For example someone could conceivably say that they contain prefixes that lie within the US government and if the announcement is made correctly then it is entirely possible that traffic intended for the US government could flow to the bad actor.
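Here is a small illustration of why the hijack works: routers pick the most specific matching prefix, so a bogus, longer prefix beats the legitimate, shorter one. The prefixes and AS numbers below are illustrative only.

```python
import ipaddress

# Toy longest-prefix-match lookup: the most specific matching route wins,
# which is exactly why a bogus more-specific announcement can pull traffic away.

routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate origin, AS 64500",
    ipaddress.ip_network("203.0.113.0/25"): "hijacker's more-specific, AS 64666",
}

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("203.0.113.10"))  # hijacker's more-specific, AS 64666
```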

Corrections for this flaw are being attempted through route filtering and BGP variants like BGPsec, with digitally signed prefixes and paths. Yet getting everyone to implement BGPsec seems like a tall order. As a Cisco engineer reportedly said, “I can either route them or count them, what do you want me to do?” For now, it appears that snooping on the Internet, big time stuff, is well within the realm of possibility. Traffic that is hijacked could disappear into a black hole, or could be tapped and nudged back toward the correct destination, leaving practically no trace of any tampering. Exploiting BGP could be a party for some folks (stealing bitcoins through a BGP hijack has been reported). Everybody got the word on routes zipping around. Somebody somewhere could be getting snoopy with BGP.


Greed for speed is right, it works

Greed is good…to quote Gordon Gekko. A data center manager might rephrase this as “speed is good.” Faster and faster is the way to go when it comes to the world of networks and storage. More and more speed aka bandwidth (the capacity of the data pipe), while reducing latency (the time taken by data to travel from source to destination) and CPU utilization (the extent to which the CPU is used to manage the logistics of moving data). All key metrics for the folks running data centers, especially the hyperscale cloud data centers. As an aside (albeit unrelated), the distinction between bandwidth and latency is something that occasionally creates confusion among consumers, something that cable and Internet providers seem to understand when they offer higher and higher bandwidths to those who may not necessarily need them.
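A quick back-of-the-envelope sketch of the difference, ignoring all real-world protocol overhead:

```python
# Transfer time ≈ one-way latency + payload_size / bandwidth (no protocol
# overhead, no TCP ramp-up, none of the other things real networks do).

def transfer_time(payload_bytes, bandwidth_bps, latency_s):
    return latency_s + (payload_bytes * 8) / bandwidth_bps

tiny_request = 2_000          # ~2 KB, e.g. a small web request
movie_chunk = 500_000_000     # ~500 MB

for bw, lat, label in [(50e6, 0.030, "50 Mbps, 30 ms"),
                       (1e9, 0.030, "1 Gbps, 30 ms"),
                       (50e6, 0.005, "50 Mbps, 5 ms")]:
    print(label,
          f"tiny request: {transfer_time(tiny_request, bw, lat)*1000:.1f} ms,",
          f"movie chunk: {transfer_time(movie_chunk, bw, lat):.1f} s")
# More bandwidth barely helps the tiny request (latency dominates);
# lower latency barely helps the big transfer (bandwidth dominates).
```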

In the beginning, we had separate computer networks (with Ethernet as a popular choice) and storage networks (where Fibre Channel was frequently encountered). Two independent worlds with their own switches and cables. Then Ethernet began to take off. It has come a long way from the ~3 Mbps of the original experimental version in the 1970s (and the 10 Mbps of the first standard in the early ‘80s). 1 Gbps is the current mainstay Ethernet bandwidth and 10 Gbps Ethernet is expanding its footprint. Today 25 Gbps and 50 Gbps Ethernet standards are on their way, with large vendors working hard to accelerate the schedule (picking up the baton from standards bodies). iSCSI storage came in and offset the Fibre Channel disadvantages of higher cost and the requirement for an additional skills base by running over the familiar, standard Ethernet fabric.

Bandwidth in Gbps has been the primary metric of comparison. But latency is also important, especially for areas like high performance computing, clustering, and storage. An alternative to Ethernet, InfiniBand became the go-to platform for delivering low latency (along with high bandwidth), based on the Remote Direct Memory Access (RDMA) architecture, which pretty much takes the load off the CPU when it comes to interconnect performance, dropping latency down to sub-microsecond levels. However, InfiniBand requires its own, parallel IB infrastructure to run on.

With increasing transaction volumes and the advent of big data, the need for higher and higher performance began to manifest itself in corporate data centers. Without the overhead of managing redundant infrastructures. Which meant that the data center needed to converge on a single interconnect technology. Ethernet became the logical choice as it was all over the place and many people understood it very well.

Thus the Fibre Channel group came up with Fibre Channel over Ethernet (FCoE). Though FCoE required lossless transmission, with things like Data Center Bridging. Running on existing Ethernet gear without having to make any changes, iSCSI became more and more popular. By riding the accelerating Ethernet train, iSCSI was able to offer ever increasing speeds. Well engineered iSCSI networks could offer really low latencies as well.

Also, RDMA technologies began to converge on Ethernet. iWARP offered RDMA over the standard TCP/IP stack with just the adapter needing to be changed. RDMA over Converged Ethernet (RoCE) got into the mix, running on Ethernet but leaning on InfiniBand for the transport layer and requiring lossless Ethernet and specialized switches. There does seem to be a parallel of sorts between FCoE and RoCE on one hand and iSCSI and iWARP on the other.

On the face of it, iWARP looks like a relatively easier solution to deploy when compared to RoCE, lighter on the budget, yet still offering the RDMA advantages. Not having to change the existing network seems like a big plus. The challenge with RDMA, however, is that it does not use the sockets API. Instead it relies on a “verbs” API. Applications may need to be rewritten to take full advantage of the power and performance that RDMA brings to bear. Yet embedding verbs within libraries can help avoid some of the application rewrite.

RDMA based storage is beginning to show up on the radar. After all, minimizing latency is critical to storage, and there is nothing to beat RDMA when it comes to low latency. An offshoot of iSCSI, iSCSI Extensions for RDMA (iSER) is an interface that uses RDMA to carry block level data directly between server memory and storage. Efficiency is therefore a big plus. Using standard Ethernet switches, it keeps costs down. While providing the advantage that if iSER is not supported by the initiator or by the target, then data transfer falls back to traditional iSCSI. Seems like we should be hearing more and more about iSER in the future.
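A hedged sketch of that fallback idea, with made-up capability flags rather than anything out of a real iSCSI stack:

```python
# Toy negotiation logic only: if both initiator and target support RDMA (iSER),
# use it; otherwise fall back to plain iSCSI over TCP. The flag names here are
# illustrative, not part of any actual implementation.

def pick_transport(initiator_caps, target_caps):
    if "iser" in initiator_caps and "iser" in target_caps:
        return "iSER (block data moved via RDMA, bypassing the CPU copy path)"
    return "traditional iSCSI over TCP"

print(pick_transport({"iscsi", "iser"}, {"iscsi", "iser"}))  # iSER
print(pick_transport({"iscsi", "iser"}, {"iscsi"}))          # falls back to iSCSI/TCP
```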

Looking ahead, we could be seeing a world where the “3 i’s” play a growing role: iSCSI, iWARP, and iSER. Converged, superfast networks and storage with practically zero latency. Speed, speed all the way. Gekko would be pleased.


Ring, ring, the happiest sound of them all

The happy sound of ringing phones has been part of everyday life for a while. Hard to imagine what things must have been like before phones came along. 1876 was the year that Alexander Graham Bell got his patent for the first phone. Establishing the public switched telephone network (PSTN) was indeed a huge step forward in bringing the world closer together. Say what you will about the PSTN, but it is essentially reliable (if done right). In case of a disaster, the PSTN is probably the only option left standing after the power goes out. Phones have certainly come a long way from the rotary phones of yesteryear (which carry a strong whiff of nostalgia today). It doesn’t seem that long ago that we had operator-assisted calling, with operators acting as “human switches,” manually completing circuits. Automated switching systems followed. From an analog system (POTS, Plain Old Telephone Service), the PSTN went digital with the advent of ISDN. And a new era was born.

Enter Voice over Internet Protocol (VOIP): with protocols like the Session Initiation Protocol (SIP) for signaling and others like SDP for describing the communication session, RTP for real-time media delivery, RTCP for call stats, and so on. VOIP introduced a whole slew of new features; it was certainly transformational. Wideband audio stepped up audio quality. No longer just a simple phone, it became a computer masquerading as a phone. Though interoperability issues did rear their head. Different vendors’ VOIP platforms didn’t talk to each other readily, with the PSTN serving as the intermediary. Multiple codec standards require transcoding to stay in sync (a codec, which stands for coder-decoder, converts audio signals into a compressed digital format for transmission and back into an uncompressed format for replay).

Today the PSTN appears to be transitioning to an all-IP network. How beneficial this change will be for customers is not really that clear though. On the other hand, carriers do stand to make several gains from the shift: cost reduction, easing of tech staffing challenges, and so on. Then there are the regulatory advantages that carriers may be seeking: relief from regulations that have thus far enabled all Americans to have access to telephone communications regardless of income or where they live.

End of the day, the unfettered ability to communicate is what this is about. Reducing the dependence on communication providers would certainly help the cause of unrestricted communication. After all, the Internet is a public resource; anyone should be able to use it in order to communicate. The app model seems to provide a solution to this need for open communication, where anyone can just buy or download an app and get going.

Web Real-Time Communication (WebRTC) appears to promise this very benefit. Browser based calls with no plugins or software to install. Peer to peer communication. (Would have been perfect but for these pesky Network Address Translators (NATs) and firewalls. Need to deal with specialized protocols like STUN and TURN to overcome the NAT hurdle.) An open source technology stack that anyone can utilize. Leveraging the WebRTC APIs, one can build apps that deliver conferencing, voice/video calling, data transmission, and more. Use of SRTP (in lieu of RTP as in VOIP) ensures secure, encrypted communication. Snooping on VOIP calls is easier as the basic RTP is not secure.
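To give a flavor of what STUN is doing (this is the concept only, not the actual RFC 5389 protocol), here is a toy reflector in Python: a host behind a NAT asks an outside server what address it appears to be coming from.

```python
import socket

# Toy illustration of the idea behind STUN: a host behind a NAT cannot see its
# own public address, so it asks a server on the outside "what address do you
# see me as?" Real STUN is a binary protocol (RFC 5389); this is not it.

def reflector_server(port=3478):
    """Echo back the source address:port each datagram appears to come from."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:  # runs forever; fine for a sketch
        _, addr = sock.recvfrom(1024)
        sock.sendto(f"{addr[0]}:{addr[1]}".encode(), addr)

def discover_public_address(server_ip, port=3478):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"who am I?", (server_ip, port))
    reply, _ = sock.recvfrom(1024)
    return reply.decode()  # e.g. "203.0.113.7:54012" as seen past the NAT
```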

Got to thank Google for making WebRTC available.

But this may not work very well if carriers, sensing a loss of their phone/voice services, increase rates for Internet access. Which is where the recent decision by the FCC to regulate the Internet as a utility should come in handy. Nationalizing Internet service does not seem that farfetched either. Like Medicare, Social Security… If the Internet has to fulfill its role as a catalyst for innovation, everyone needs to have access to it, not just those who can afford it.

Ultimately communication should be a commodity, freely available to all. Much like web access, all that should be needed is an Internet connection. Commoditizing communication seems to have been Google’s strategy, clearly a brilliant one at that. Basically we are talking about bring-your-own, high-quality communication system (BYOCS?). Making the technology stack available to the large community of web developers instead of relying on the smaller number of telecom and VOIP engineers. There appear to be close to 3 million people on LinkedIn with JavaScript in their profiles. That should really open the innovation tap, adding new and exciting ways to communicate. We are entering the world of real-time communication systems. That the phone as we know it will stay in its current form factor seems unlikely. Instead it will likely be a bunch of apps that could run on any device, enabling anytime communication from anywhere. How about making calls from the refrigerator? Hopefully app developers will not drop the ringing tone as the way to signal an incoming call. Retaining the happy, familiar sound should provide some continuity and reassurance as we march into the new communications order.


Slow ride (on the IPv6 train), take it easy

Imagine riding a train where no one knows when it will reach its destination. Or if it will ever get there. Or worse still, if it could even get re-routed along the way to a new destination. And as if matters weren’t bad enough, this is the only train to get you where you are going. So you have no option but to take your chances. And ride along. Sounds like a bad dream. Nonetheless that is what the move to Internet Protocol version 6 (IPv6) has often seemed like.

Proposed in 1995 and adopted in 1999, IPv6 has been moving toward adoption at a considerably less than glacial pace (bit of irrelevant trivia here: the fastest glacier is apparently in Greenland, moving over 40 meters in a day). Reliable, error free, efficient end-to-end communication between uniquely identifiable end nodes was the idea with which the Internet (TCP/IP) was conceived. That required each end point to have a unique address. This was all very well during the early Arpanet days. IPv4 was developed for the Arpanet in 1978 with a limit of about 4 billion addresses (2^32). Trouble is, the 4 billion number quickly turned out to be inadequate. Fast forward to the 2000s and shortages began to show up. In 2011, the IANA free pool hit zero. Four of the five Regional Internet Registries have run out of addresses, with ARIN, which includes the US and Canada, being the most recent of the RIRs to exhaust its “IPv4 supplies.” On the other hand, with its 128-bit addresses, IPv6 should give us more than enough to go around.
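The arithmetic behind the crunch, for anyone who likes big numbers:

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32          # 4,294,967,296 addresses
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,}")
print(f"IPv6: {ipv6_total:,}")
print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")
# Roughly 7.9 x 10^28 IPv6 addresses for every single IPv4 address.
```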

Ideally by now IPv6 should have been waiting in the wings, ready to step in and take over. But that has not come to pass. Instead, IPv6 adoption crossed eight percent in August this year per Google’s measurements, which means that roughly 92 percent of the traffic Google sees still arrives over IPv4. So demand for IPv4 is not going away any time soon. With this demand vs. supply imbalance, we should be seeing prices for buying IPv4 addresses in the “secondary market” on the rise. Folks holding on to IPv4 blocks likely have a nicely appreciating asset on their hands, at least in the short term. Should make CFOs happy!

Unfortunately IPv6 was not designed to be backward compatible, which means that the two versions, 4 and 6, cannot directly talk to each other without a translator in the middle. If that weren’t the case, perhaps IPv6 would have been phased in much sooner. To implement end-to-end IPv6 communication, both the end points as well as the intervening network all have to support IPv6, a requirement that has made the transition much harder.

Consequently an IPv4 future is what lies ahead for the next few years. Between the rates of IPv4 depletion and IPv6 transition, there has been quite a mismatch. How has the world been coping? Enter Network Address Translation (NAT) to the rescue (among other workarounds). We started off with NATs at the customer premises. These let the private addresses inside the customer network share a single public IPv4 address on the outside. That solved the problem to some extent, as not everyone needed a public IPv4 address, at least initially. Yet that was not enough to meet surging demand. Then ISPs introduced the next layer of NAT at their end, the Carrier Grade NAT or CGN. Essentially what it did was to assign private addresses to customers and map these to public addresses at the upstream ISP end. Two levels of translation, one on top of the other. Basically ISPs started sharing a single IPv4 address among multiple customers (there could be hundreds and even thousands of customers behind a single address). Looking at the packet header, there is no way to identify the user behind it; only the ISP running the CGN has the information to map packets to individual users. Some law enforcement implications here for sure.
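A toy sketch of the CGN idea, with addresses drawn from documentation and shared ranges and a deliberately simplified mapping:

```python
# Toy carrier-grade NAT (CGN): many customers, each already behind their own
# home NAT, share one public IPv4 address at the ISP. The mapping logic here
# is illustrative only.

cgn_table = {}     # (customer_ip, customer_port) -> public port
next_port = 40000

def cgn_map(customer_ip, customer_port, public_ip="192.0.2.1"):
    """Allocate (or reuse) a public ip:port for a customer-side flow."""
    global next_port
    key = (customer_ip, customer_port)
    if key not in cgn_table:
        cgn_table[key] = next_port
        next_port += 1
    return public_ip, cgn_table[key]

# Two different customers, using ISP shared address space (100.64.0.0/10).
print(cgn_map("100.64.0.5", 51000))  # ('192.0.2.1', 40000)
print(cgn_map("100.64.0.9", 51000))  # ('192.0.2.1', 40001)
# From the outside, both flows appear to come from 192.0.2.1; only the ISP's
# CGN table can map a packet back to the individual customer.
```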

Apparently the CGN solution has not been a big problem for regular web use: email, web browsing, and so on. Where CGNs have significantly impaired performance has been in things like VOIP, video streaming, and online video gaming (all very popular, growth oriented use cases). What is the benefit to the ISP? One, they can postpone replacing thousands of boxes across their network. Two, extending the life of IPv4 helps realize gains from their own IPv4 stores (estimates run between $10 and $15 per IP address; these numbers should be on the climb). As an aside, trading in IPv4 addresses could become quite profitable. Three, vendors of the expensive CGN boxes certainly have an incentive to keep pushing more of these. A sobering possibility is that the investment in CGN itself acts as a further disincentive to move on to IPv6. Indeed some ISPs might be tempted to just keep adding CGNs and pretty much put off IPv6 for a long time.

All this appears to have some parallels with the Y2K shift. Applications needed to be changed to support the four digit date field. There was the need for investment. Of course there was a hard, impossible to avoid deadline. Like it or not, the year 2000 was going to arrive! No workarounds were possible. Hence the Y2K remediation was implemented in a timely manner and the year 2000 went off without a hitch. In the case of IPv6, there is no such hard deadline. No pressing need to make the switch. If it can wait, then it will wait. Running out of addresses should have provided the urgency, but that was neutralized by NAT boxes among other workarounds. There has even been the suggestion that the IPv6 shift could get derailed and converted instead into a CGN transition. Though that does seem to be a remote prospect.

Meanwhile the IPv6 train continues to chug along albeit at a slightly faster pace now. Destination still not in sight. As a virtue, patience for the passengers is strongly recommended. Real slow ride after all. Might as well sit back, relax, take it easy, and enjoy it while it lasts.


Soft walls of software…just as hard

Mobile broadband is much loved by all. Just can’t get enough of Netflix and YouTube. Then there are the things we all depend on in our daily lives (e.g. refrigerators, cars, microwave ovens…). Specifically, we are referring to the Internet of Things (IoT). More and more things are getting connected (which in turn makes them “smart data emitters”). Frequently cited is the estimate that there are 1.5 trillion things on the planet that could have an IP address. Only 15 billion of these are believed to be connected to the Internet today. By 2020, that number is expected to go up to 50 billion connected devices per Cisco. A huge number in and of itself. But still plenty of scope for further growth.

That network traffic is likely to skyrocket is therefore an understatement. One of the major mobile carriers is reported to have experienced 100,000% growth in wireless traffic between 2007 and 2014. How do the mobile carriers keep up? Current networks are clearly likely to buckle in the face of this unrelenting traffic growth. Radical solutions are necessary. Network Functions Virtualization (NFV) appears to be one of the places where the industry has found an answer. Telcos are actively looking into NFV adoption today. Cloud service providers will probably be next, with enterprises likely to be the last to the NFV dance.

Essentially NFV is about replacing specialized, custom boxes with software (Virtual Network Functions or VNFs) that can run on standard, commercial servers under a hypervisor. As in many other areas, this is yet another example of a “hard” offering that is turning “soft.” Software Defined Networking (SDN) is part of the solution. Though NFV appears to be holding out more immediate possibilities. Despite SDN having come first and NFV later, NFV has been the first to take off. SDN will possibly need more time.
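To make the “boxes become software” point concrete, here is a toy Python sketch of a service chain, with stand-in functions rather than any vendor’s actual VNFs:

```python
# Hedged sketch of the NFV idea: network functions as plain software that can
# be chained on a commodity server. The functions here are toy stand-ins.

def firewall(packet):
    return None if packet["dst_port"] == 23 else packet   # drop telnet

def nat(packet, public_ip="198.51.100.1"):
    return dict(packet, src_ip=public_ip)                 # rewrite the source

def service_chain(packet, functions):
    for fn in functions:
        packet = fn(packet)
        if packet is None:
            return None   # dropped somewhere along the chain
    return packet

pkt = {"src_ip": "10.0.0.7", "dst_ip": "203.0.113.9", "dst_port": 443}
print(service_chain(pkt, [firewall, nat]))
# {'src_ip': '198.51.100.1', 'dst_ip': '203.0.113.9', 'dst_port': 443}
```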

Most certainly NFV is a fundamental shift. We are talking about software-based routers, switches, and middleboxes that could simply be downloaded, installed, and used. It appears that telcos were able to force this change on the vendors. For the current, “entrenched” players, NFV represents a huge impact on their P&Ls, so they must have needed some cajoling and coaxing to come along. Again, the telcos clearly have the buying power to make it happen. The possibilities seem immense. Someday perhaps mobile networks could be instantiated and run in the cloud anytime, on demand, by just about anyone.

So this is definitely causing upheavals in the vendor world. It is conceivable that at some point in the future we will see the number of suppliers of VNFs start to grow rapidly. After all, it is now just software. Any good developer should be able to build it. And there is an army of developers out there: Asia, Europe, Latin America, you name it. Prices should therefore start to fall.

Anyone should therefore be able to procure the VNFs, connect them, and set up a network: in theory at least. Consequently the telco market itself should be the next to be affected. We should be seeing a whole new set of telco operators, emerging players, and what have you. All of which is just about in time. With the demand for mobile broadband starting to go through the roof and the IoT coming upon us, we will definitely need all the mobile networks we can lay our hands on.

However there still are some pitfalls. The NFV platform is what corrals the VNFs together to form the ecosystem, and major vendors appear to be setting up somewhat independent ecosystems based on their own platforms. These could collapse into independent silos with proprietary hooks, which pretty much puts the operators back where they started. Two steps forward and three steps back. Capex reductions do sound like a sure thing. On the other hand, reducing opex could perhaps be trickier. Opex reduction should be a key litmus test for NFV success.

Which brings up a related paradox. Smooth NFV deployment will likely happen with a single vendor at the helm. Yet that would almost guarantee the introduction of a certain degree of lock-in. Furthermore, running virtual and physical networks side by side is another colossal challenge. No wonder that existing vendors of physical equipment are jockeying to manage/orchestrate the virtual network, with the irrefutable argument that interoperability between the physical and virtual worlds will be easier to achieve that way. Keeping the implementation truly open certainly promises to be an uphill battle. Hardware may have given way to software, but the soft walls may turn out to be no pushover.


Seesaw, It’s about control, Yo. Infra shall have a new master

Converged infrastructure and hyper-converged infrastructure have been all the rage for the last few years. Most major vendors are in it, including “independent” ones like Nutanix and SimpliVity, among others. Their promise is simplicity: easy to deploy, easy to configure, easy to manage. Certainly it sounds appealing and many buyers appear to be opening their wallets.

However there seems to be some confusion around what the words exactly mean. The consensus seems to be that converged systems represent discrete compute, storage, and networking sold as a single SKU or on the basis of a reference architecture, with all the components guaranteed to talk to each other. Whereas hyper-converged systems are about pre-integrated pieces in which everything is tightly connected and parts cannot be sold separately, an appliance form factor with a management layer on top. A Lego-brick-like building-block approach. Though being preconfigured does remove some flexibility. Smaller blocks could perhaps help, a need that some of the vendors appear to be meeting.

Confusion is probably being caused because the terms are primarily vendor driven. No standards body specifies what convergence or hyper-convergence mean, which does sound like potential for significant interoperability challenges down the road. Not surprising then that vendor lock-in is frequently mentioned as an outcome.

Let us step back: The concept is not really new. There has been talk in the past of modular data centers, data center in a box, cloud in a box, appliance, and what have you. Nevertheless the vendor-driven hype (pun unintended) around converged infrastructure and hyper converged infrastructure seems to have reached a crescendo. Possibly some of it has to do with the move to software defined networking and software defined storage (an overused term in and of itself) based on commodity hardware, software switches and routers, white box components, open source, and so on. Clearly SDN (and SDS in its “classic” sense) hold the promise of significantly lower costs, hugely greater flexibility, and more importantly control over the infrastructure. From the customer’s perspective, it is all positive provided they have the skills to put it together.

Yet from the infra vendors’ point of view, SDN and SDS probably do not look that rosy. Having the buyer in control is likely not a good idea. Revenue negative or neutral at best. Long term, SDN definitely appears to represent a threat. So figuring out a new way to sell boxes would be critical. And converged and hyper converged infrastructure certainly seem to be a smart way to counter the SDN threat, create a wave around something, and drive revenue growth. Selling boxes the old way no longer looks to be applicable. If a new future is inevitable certainly it makes sense to try and control it.

The message is that customers are looking for increased agility and ways to manage their project-specific workloads, where everything is business driven and application driven. Putting the app on top is what SDN is all about. So the hyper-converged vendors appear to have gotten onto that bus with the claim that their solutions are what is needed to align infrastructure with applications. They seem to have co-opted the “software defined” messaging.

An aside. There has been talk about how 70% of IT spend is “keep the lights on” and how converged infrastructure will change that dynamic, raise spending on innovation. Perhaps spending on converged infrastructure boxes is a proxy for innovation spend!

To digress a bit, parallels appear to exist between the move to converged systems and the earlier shift from custom software to standard, packaged software. Pre-built functionality and the promise of faster deployment on one side, with the loss of flexibility and control on the other. Flexibility and change were a challenge with standard software and the same story seems to be playing out here. Another familiar issue is the whole thing of people buying more than what they need: excess functionality in the case of software and additional boxes in the case of converged infrastructure. Buying too much hardware could have an environmental impact but that is another story.

End of the day, it should boil down to the little matter of control. Software systems that can drive any choice of storage hardware at the backend, putting buyers in control. Open source software options would be even better. Bring your own storage is what it will be. Power to the buyer: The seesaw shall tip to the customer, the “new master.”


SDN Change: Soft Battles, Hard Wars

Attention: Software Defined Networking (SDN) may be coming to a data center near you.

Agility and flexibility are what SDN is bringing to a tech area that has been somewhat divorced from the rest of computing till now. As Scott Shenker of UC Berkeley put it, many of the fundamental abstractions that are common to the application, database, and server areas were largely missing from the networking space. So far it has been all about lots of protocols. The data plane, where routers and switches forward traffic, has had its “layers 1 through 7” abstraction. But the control plane, where the forwarding logic (that decides which packet goes where) resides, has been embedded in the device until now, which has perhaps resulted in a more device centric, “myopic” view of the network. By separating the control plane and putting it in a central software controller that enables programmatic access, the network as we know it gets transformed. Control is no longer at the individual device level but at the level of the network as a whole.
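A toy sketch of that centralized idea, with a made-up rule format (this is not OpenFlow): the controller holds the whole topology, computes a path, and pushes per-switch forwarding rules downward.

```python
from collections import deque

topology = {            # switch -> neighbors (illustrative topology)
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(graph, src, dst):
    """Plain BFS, standing in for whatever a real controller actually runs."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

def compute_flow_rules(graph, src, dst, match):
    """Turn a path into per-switch rules the controller would push down."""
    path = shortest_path(graph, src, dst)
    return {hop: {"match": match, "forward_to": path[i + 1]}
            for i, hop in enumerate(path[:-1])}

print(compute_flow_rules(topology, "s1", "s4", {"dst_ip": "10.1.4.0/24"}))
# {'s1': {'match': ..., 'forward_to': 's2'}, 's2': {'match': ..., 'forward_to': 's4'}}
```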

Change is what SDN is bringing, and with change comes resistance to change. Several conflicts are under way.

First there are the “market wars.”

Between the old guard of vendors, Cisco, Juniper et al., and the new guard made up of “upstarts” like Pica8 and Big Switch as well as VMware. Cisco has its Application Centric Infrastructure (ACI) while VMware has its NSX. Still, ACI seems like a more comprehensive SDN solution, which is probably not surprising as Cisco is likely to have a better handle on networking per se than most other players. Clearly Cisco has the most to lose and so they can be expected to go all out to protect their turf. Market share, though, has been slowly sliding: Cisco’s port market share fell from 70.8 percent in 2013 to 66.1 percent in 2014, a loss of 4.7 percentage points (www.bradreese.com/blog/3-18-2015.htm). However, in absolute terms a 60+ percent share is nothing to be sneezed at. Evidently Cisco still owns most of the data center.

Then there are the “people battles.”

In every data center there are the network administrators who manage the network and the system administrators who manage the servers. An uneasy truce has prevailed so far. SDN promises to upend this situation. When proprietary networking hardware starts to move to commodity servers, the advantage is likely to go to the server folks.

As applications take on more network intelligence in order to program what network resources they need, application developers could take over the role of network admins. Therefore the application developer vs. network admin is yet another battle brewing.

Also looming ahead is the issue of the skills of current networking pros, polished over decades of managing routers, switches, and so on. These are folks who have invested entire careers learning the ins and outs of Cisco and Juniper gear. Some of this knowledge is likely to become redundant as the emphasis starts to shift from knowing how to configure the network to knowing how to program it.

At this point, one could hardly fault the network admins for feeling somewhat besieged. Reskilling is certainly an option but how many can turn on a dime and learn programming? When you have spent years training your mind to think in a particular way, doing an about-turn is easier said than done. Fortunately for the network techs, change is likely to take several years, not months or days. In most cases, it will possibly be a gradual shift with both old and new gear coexisting for a while.

However things can no longer be the same.

Cisco appears to be stepping up to align their training programs with the new world of networking. Certainly that would seem to make sense for Cisco. There are over 2 million Cisco networking certified individuals today (source: May 2014 FierceWirelessTech story). Clearly these are people who are much more likely to vote for Cisco over other alternatives. Helping retrain them and prepare them for the SDN paradigm would appear to be in Cisco’s best interests. Reskilling these legions of pros looks like a smart move for Cisco.

Attrition though is a looming threat. Per a recent statistic in an August 2014 IDC research note: “27 percent of enterprises and 34 percent of cloud providers said they would be able to reduce the size of their network teams as a result of new technology and collaboration between other parts of the IT team.”

End of the day, SDN’s success seems to boil down to finding the right people who know how to get it done. Those who have expertise in programming networks and also understand today’s networking equipment would certainly be a great asset.

Ultimately the winner of the “hard” market wars could be the player that gets a handle on the “soft” people battles. That winner could very well be Cisco.
