On the Bitcoin front lines

It all began with a dramatic vision: that of a digital currency that would replace fiat currency, a currency of the people, not controlled by central banks, with no intermediaries, driven by the power of the network. The author of this vision was someone who went by the name of Satoshi Nakamoto (and who remains anonymous to this day). Out of this vision was born Bitcoin. A currency consisting of digital coins, where a coin is a transaction, or perhaps it might be more correct to say that a transaction comprises one or more coins being transferred (for example: Alice pays Bob x bitcoins). Transactions have inputs (coins coming in) and outputs (coins going out). Each transaction is identified by a hash of its contents, carries the digital signature of the sender, and is defined via a scripting language.
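As a rough illustration, a transaction id can be modeled as the double SHA-256 hash that Bitcoin actually uses, applied here to a made-up, simplified serialization (real transactions follow a precise binary format not reproduced here):

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    """Bitcoin identifies a transaction by the double SHA-256 of its
    serialized contents. The payload below is a toy stand-in, not real
    transaction serialization."""
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

# A toy stand-in for a serialized "Alice pays Bob" transaction.
tx = b"input:alice_coin_1|output:bob:5|signature:alice_sig"

# Any change to the contents yields a completely different id.
assert txid(tx) != txid(tx + b"tampered")
```

The hash both names the transaction and makes its contents tamper-evident: the id only matches if the bytes do.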

To make sure everything is hunky-dory, transactions need to be validated. For that, they are announced to the network, where the validators, or miners, vet them and add them to a block of pending transactions. However, for the block to make it into the blockchain, miners need to compete by solving a proof-of-work puzzle using a so-called nonce value and the hash function, something that requires significant computing power. The first miner to solve the puzzle gets the privilege of adding the new block to the chain, plus a reward of a set number of newly minted bitcoins (currently 12.5), paid out via the coinbase transaction. This is the only way that new bitcoins can enter the system. Other miners are left with no option but to fall in line. They quickly verify the solution, which is to say they confirm that the nonce satisfies the proof-of-work rule, add the new block to their copy of the chain, and start trying to form the next block. Blocks are strung together by including a hash of the previous block in each new block.
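The nonce hunt can be sketched in a few lines. This toy version uses a single SHA-256 and a "leading zero hex digits" target, a simplification of Bitcoin's double SHA-256 and numeric difficulty target:

```python
import hashlib

def mine(block_header: bytes, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(header + nonce) starts with
    `difficulty` zero hex digits. Finding the nonce takes many attempts;
    checking a claimed nonce takes just one hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

header = b"prev_block_hash|merkle_root|timestamp"
nonce = mine(header, difficulty=4)  # real Bitcoin difficulty is vastly higher

# Other miners verify the winner's claim with a single cheap hash.
assert hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest().startswith("0000")
```

The asymmetry is the point: expensive to solve, trivial to verify, which is why the rest of the network can "fall in line" so quickly.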

Occasionally there are forks in the blockchain when two or more miners solve the puzzle at the same time. However, one of the forks eventually becomes longer and becomes the working chain. Transactions from blocks in the shorter chain are returned to the network as pending transactions to be picked up by miners, and the cycle repeats. Nodes are the folks who participate in the blockchain, buying and selling coins; they could be individual wallets or coin exchanges. There is no bank or intermediary in this process, just the network of nodes and miners. The total number of bitcoins that can be issued is capped at 21 million. No more. No quantitative easing possible!
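The fork-resolution rule can be sketched as follows. This toy picks the longest chain (real nodes compare cumulative proof of work, which usually amounts to the same thing) and returns the losing fork's transactions to the pending pool:

```python
def resolve_fork(chains):
    """Pick the longest chain; transactions unique to losing forks go
    back to the pending pool. Chains are modeled as lists of blocks,
    blocks as lists of transaction ids."""
    winner = max(chains, key=len)
    confirmed = {tx for block in winner for tx in block}
    orphaned = {tx for chain in chains if chain is not winner
                for block in chain for tx in block} - confirmed
    return winner, orphaned

fork_a = [["tx1"], ["tx2", "tx3"]]          # 2 blocks
fork_b = [["tx1"], ["tx2"], ["tx4"]]        # 3 blocks: this fork wins
winner, back_to_pool = resolve_fork([fork_a, fork_b])
assert winner is fork_b
assert back_to_pool == {"tx3"}  # tx3 waits to be mined again
```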

Aptly named the Genesis block, this very first block from Satoshi injected all of 50 bitcoins into the system. Bitcoin took off. From the thousands per month in 2009 (when the currency was launched), the number of transactions grew to several millions per month in 2017. In the beginning, each coin had zero value; thereafter Bitcoin grew steadily in value and hit parity with the USD in 2011. Since then, Bitcoin has grown erratically in a series of ups and downs, reached an all-time high of $3,000 on June 12th, 2017, and has stayed around the $2,500 level since. Nothing less than phenomenal in its overall growth.

Success of course breeds imitation. Many imitation bitcoins followed. Enter the altcoins. Per Wikipedia, there were more than 710 cryptocurrencies available for trade in online markets as of July 11, 2016. Of those, 667 (about 94%) were based on Bitcoin’s code. The top 10 coins are Ethereum, Ripple, Litecoin, Dash, NEM, Ethereum Classic, Monero, Zcash, Decred, and PIVX. Ethereum is the top contender, backed by big powerhouses: J.P. Morgan Chase, Microsoft, and Intel. Interestingly, Ethereum had a split (hard fork), resulting in an eponymous Ethereum and Ethereum Classic.

But Bitcoin is the leader. It has yet to experience a hard fork, while competing altcoins have; Ethereum is a case in point. When Bitcoin drops, others tend to swoon. Ethereum crashed to 10 cents this week (on June 21st), though that doesn’t seem to have had anything to do with Bitcoin. Many altcoins have been launched through initial coin offerings (ICOs). Some scams in the mix too.

Not to say that Bitcoin has been perfect. With millions of users, daily transactions have grown to the hundreds of thousands. In 2010, a block size limit of 1 MB was introduced to prevent potential DoS attacks from large blocks crippling the network, limiting throughput to about seven transactions per second. Waiting times on transactions have gone up to hours and even days, and getting miners to process transactions faster has meant paying higher fees to attract their attention.

Initial solutions revolved around increasing block sizes, a hard fork, which would make older versions of Bitcoin software incompatible. Today, there are two competing solutions: Bitcoin Unlimited (BU), a hard fork, and SegWit, a soft fork. BU turns over power to the miners and mining pools by allowing them to set block sizes. SegWit instead proposes detaching the signature from the rest of the transaction data and moving it to a separate structure at the end of the transaction. Because the signature no longer factors into the transaction id, this also fixes transaction malleability, where a third party could tweak the signature to change a transaction’s id before confirmation.

A solution that takes Bitcoin throughput to an entirely new level is the Lightning Network. Essentially two parties create a ledger entry in the blockchain and then transfer funds at the “speed of lightning” by opening an off-block payment channel between them. These lightning transactions are seen only by the counter-parties. When the transfers are over, the final state is then confirmed on the blockchain through the regular consensus method. It reminded me of how a mobile app continues to function offline and syncs up with its server when connected to the network. The result is near-instant transactions, thousands to millions per second, that are free or cost just a fraction of a cent. Clearly, a huge, huge game changer. Lightning Network and SegWit go hand in hand as Lightning needs the malleability fix that SegWit provides.
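A minimal sketch of the idea, with a hypothetical PaymentChannel class standing in for the real protocol (which also involves signed commitment transactions and penalty mechanisms not modeled here):

```python
class PaymentChannel:
    """Toy model of an off-chain payment channel: only the opening
    funding and the final balances would touch the blockchain."""

    def __init__(self, alice_funds, bob_funds):
        self.balances = {"alice": alice_funds, "bob": bob_funds}
        self.off_chain_updates = 0

    def pay(self, sender, receiver, amount):
        assert self.balances[sender] >= amount, "insufficient channel funds"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.off_chain_updates += 1  # seen only by the counter-parties

    def settle(self):
        return dict(self.balances)   # final state broadcast to the chain

channel = PaymentChannel(alice_funds=10, bob_funds=10)
for _ in range(1000):            # a thousand near-instant micro-payments...
    channel.pay("alice", "bob", 0.001)
final = channel.settle()         # ...settled by a single on-chain transaction
```

A thousand transfers, one consensus round: that is where the throughput gain comes from.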

SegWit clearly appears to be the better fix for Bitcoin scalability, unlike BU, which puts power in the hands of miners and mining pools: the very centralization that was anathema to the Bitcoin founders.

To be clear, Bitcoin presents a threat to the current monetary system based on fiat currencies. With SegWit and Lightning Network, Bitcoin could likely become a mainstream currency itself. Not an outcome that would make the stakeholders in the current monetary/financial system happy. That probably explains the recurring drumbeat that Bitcoin is not what matters, that it is the underlying blockchain technology that does. There are likely a few powerful people who would like to see Bitcoin’s slow but inevitable demise. A BU-driven hard fork seems to lean in that very direction, which raises questions about who is behind BU. The battle lines are drawn.


WannaCry Wannabes Ahead

Considering the turn of events in the WannaCry ransomware outbreak, here is what it appears to look like from a high level:

  • Based on a vulnerability in Microsoft’s SMB protocol, the NSA develops an exploit called “Eternal Blue.”
  • After hacking the NSA, the Shadow Brokers get hold of artefacts for Eternal Blue and other NSA exploits.
  • Microsoft releases a patch for the SMB vulnerability in March for supported Microsoft Windows platforms.
  • An announcement is made by the Shadow Brokers that they have obtained this trove of NSA tools and exploits.
  • The group tries to auction off the proceeds, fails, and then selectively dumps some of the tools for free.
  • Hackers get hold of the tools and launch the WannaCry exploit in the beginning of May, a worldwide outbreak.
  • Folks using older versions of Windows not covered by the March patch and those who had not applied the patch are all especially vulnerable.
  • Microsoft provides an emergency patch for the versions of Microsoft Windows that are past their de-support deadlines.
  • A “kill switch” is discovered where the malware depends on an unregistered domain to carry out its work. Registering the domain deactivates the ransomware exploit, for now…

Essentially, the NSA cache seems to have turbocharged what was a “regular piece” of ransomware malware to enable it to spread like wildfire.

As an aside, it could be assumed that Microsoft was informed by the NSA about the hack, prompting them to come out with the March patch. Additionally, one could ask whether the NSA discovered the SMB vulnerability on their own or were cued about it. Communication could, after all, flow both ways.

Taking a closer look at ransomware itself, where it appears to stand out is its finely tuned use of cryptography to carry out its handiwork. A public and private key pair is generated for each infected machine and the public key is sent to the victim machine; there, symmetric keys are generated and used to encrypt files on the target machine, and the public key is used to encrypt those symmetric keys. At which point the files are locked out, irredeemable without the private key lying on the command and control server. It seems to be all about the use of keys: encrypted files stay on the machine but remain inaccessible. It is almost like someone locks you in at home and demands a ransom to open the door and set you free.
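The key choreography can be sketched with stdlib tools alone. A SHA-256 counter keystream stands in for the real RSA and AES here (the Python stdlib has no asymmetric crypto), so the point is the key-management pattern, not the cipher:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data against a SHA-256 counter
    keystream. Symmetric, so the same call encrypts and decrypts.
    A stand-in for AES/RSA, NOT real cryptography."""
    out = bytearray()
    for i, start in enumerate(range(0, len(data), 32)):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[start:start + 32], block))
    return bytes(out)

# The attacker's key never leaves the command-and-control server
# (simulated here with one shared toy key in place of an RSA key pair).
c2_key = os.urandom(32)

# On the victim: a fresh key per file encrypts the data, then the
# attacker's key encrypts the file key. Files stay on disk, but locked.
file_key = os.urandom(32)
plaintext = b"quarterly-report.xlsx contents"
locked_file = keystream_xor(file_key, plaintext)
locked_key = keystream_xor(c2_key, file_key)

# Only the C2 server can unwrap the file key and free the file.
recovered_key = keystream_xor(c2_key, locked_key)
assert recovered_key == file_key
assert keystream_xor(recovered_key, locked_file) == plaintext
```

The door-and-key metaphor maps directly: the files (the house) never leave, but the key to them sits with the attacker.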

So long as there are vulnerabilities to be exploited, clearly there will be no shortage of malware and exploits. Given that it is not just the hackers who are interested in vulnerabilities but also the powerful governmental spy agencies, we can expect no letup in such attacks.

NSA toolkits appear to have had exploits for many more vulnerabilities including for the Swift network and Linux systems. From an earlier release, firewalls from Cisco, Fortinet, and Juniper seem to have been among the targets with the tools enabling remote code execution and privilege escalation. Getting a foothold in a network and sending files to a target system were some of the other hacks that were reported. And there could be many more that the Shadow Brokers are yet to release.

Therefore, it is entirely conceivable that we could be seeing more such attacks that are inspired by NSA material. Several WannaCry wannabes may be in the works, more turbulence ahead.


To secure or to unsecure: a VPN question

“The wood is full of prying eyes.” So is the Internet. What does one do to get away from them all? After all, the IP protocol was not designed with security in mind. They didn’t need to worry much about security during the Arpanet days.

When it comes to ensuring secure communications on the modern Internet, VPN tunnels have been the way to go for enterprise users. There are several options. First off, you have IPSEC VPN tunnels if you are looking to connect entire networks or subnets to each other. Then there are the SSL VPN tunnels that come in handy if it is a specific server or application or some other resource that you need to reach. If you are looking to tunnel through an incompatible network then the GRE tunnel would be a good option, with IPSEC bringing in the additional security layer. IPSEC came on the scene first with an entire suite of protocols: IKE, AH, ESP. Within the IKE protocol, keys are exchanged and parameters are negotiated. IKE Phase 1 establishes the management tunnel and Phase 2 sets up the IPSEC tunnel through which data is transferred. Data in the tunnel is secured using either the AH or ESP protocols. IPSEC is complex. Indeed, there are also some concerns that the complexity was intentionally introduced to hide cybersecurity flaws. But that is another story. On the other hand, SSL VPNs provide remote access to users via SSL VPN gateways. SSL has enjoyed wider adoption being less complex and needing just a web browser at the client end, with plug-ins for establishing the tunnel mode.

VPNs have been in the news lately. Cisco firewalls used to run VPNs were the subject of an NSA exploit. Through an attack targeting a weakness in the implementation of IKE, keys used to encrypt communications could be extracted. In the meantime, there have been some interesting developments around Juniper firewalls. It seems that the encryption algorithm was “intentionally” weakened to install a backdoor into the device so that eavesdroppers could tune into the encrypted communications taking place. Similarly, Fortinet firewalls were discovered to have a vulnerability that could be exploited with a script to gain administrator-level access. At Palo Alto Networks, a buffer overflow in their SSL VPN web interface made it possible to bypass restrictions that limit traffic to trusted IP addresses.

Looks like a case of backdoors galore.

From the enterprise world, the technology made a leap into the consumer world to meet the ever-increasing demand for privacy and safety, as well as to work around geo-restrictions on media access globally. The market for VPN services has therefore grown dramatically, with several providers competing to win customers. Though there are concerns that have been expressed about privacy. A study of 14 popular commercial VPN providers found 11 of them to leak information, including the websites being visited and the content being communicated. VPN providers could potentially log their customers’ activity, when all they really provide is a VPN proxy server. A lot depends on trusting the VPN provider; certainly, it may not be difficult for the provider to listen to the communication going through their servers. Another reported vulnerability could enable attackers to unmask the real IP addresses of client devices, definitely a big problem when hiding their IP addresses is why users sign on in the first place. Also, many service providers use OpenVPN, which relies on OpenSSL and was therefore exposed to the infamous Heartbleed exploit, again a case of keys being exposed through a hack. Some providers leverage outdated protocols like PPTP that can be broken through brute-force attacks.

Consequently, Internet privacy clearly has been turning into an oxymoron for a while now. When VPN devices and services whose raison d’etre is security and privacy have been readily exploited, in circumstances that often look incriminating, it becomes a case of you can “run but you cannot hide” on the Internet. Unfortunately, there is no escaping those pesky prying eyes. A question some enterprise buyers may have asked is whether they secured their network or potentially un-secured it by installing expensive VPN appliances.


Everything is not what it seems

There is something comforting about the padlock icon on a browser when visiting a “secure” web site using HTTPS. A confidence that the traffic is encrypted and therefore cannot be snooped on by strangers, or worse by hackers/cybercriminals. Indeed, SSL/TLS, the protocol for secure client-to-server communication, has rapidly increased its footprint on the Internet. The classic SSL/TLS handshake for establishing a new session is pretty neat and elegant: public key encryption for exchanging the pre-master secret, followed by encrypted information flow using symmetric keys. HTTPS pages constitute 42 percent of all web visits today according to a cited statistic. In 2016, more than two-thirds of North America’s Internet traffic was encrypted, according to another research source.
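The handshake’s end state, both sides independently deriving the same symmetric key, can be sketched as follows. The derivation function here is a gross simplification of the real TLS PRF, which expands the secret into several keys:

```python
import hashlib
import hmac
import os

def derive_session_key(pre_master: bytes, client_random: bytes,
                       server_random: bytes) -> bytes:
    """Simplified stand-in for the TLS key derivation: mix the secret
    with both handshake randoms so each new session gets fresh keys."""
    return hmac.new(pre_master, client_random + server_random,
                    hashlib.sha256).digest()

# The pre-master secret travels under the server's public key (not shown);
# the randoms travel in the clear during the handshake.
pre_master = os.urandom(48)
client_random, server_random = os.urandom(32), os.urandom(32)

client_key = derive_session_key(pre_master, client_random, server_random)
server_key = derive_session_key(pre_master, client_random, server_random)
assert client_key == server_key  # bulk traffic now flows under this key
```

The elegance the post describes is visible here: one expensive asymmetric exchange bootstraps cheap symmetric encryption for the rest of the session.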

Further, the Let’s Encrypt service from the Internet Security Research Group is accelerating adoption of encryption in order to deliver SSL/TLS everywhere by providing a free, automated, and open-source certificate authority. Google too seems to have thrown its hat into the ring. It is believed that Google gives higher marks to websites that use encryption while penalizing those that don’t.

So there is definitely a push for more and more SSL/TLS.

However, there is the flip side to this matter. Encryption has become a handy tool for the bad guys for slipping in malware disguised within encrypted information flows, thereby evading detection by typical enterprise network defenses. More than 25 percent of outbound web traffic is said to be encrypted while as much as 80 percent of organizations reportedly do not inspect their SSL/TLS traffic, making it easier for hackers to use SSL/TLS to cover their tracks. Per Gartner, more than half the network attacks targeting enterprises in 2017 will use encrypted traffic to bypass controls, and most advanced persistent threats already use SSL/TLS encryption.

An example of the perils of encryption is ransomware. Public key cryptography was intended to protect data in motion between clients and servers as part of SSL/TLS. Enter the ransomware band of digital pirates, who turned that around to encrypt data at rest on the victim’s machine. From then on, the attacker-held private key became a weapon, guarding the symmetric keys (AES 256-bit encryption, no less) used to hold computers and their valuable data hostage. Pay up or walk the plank, the choice is yours, was the message! And pay up, many did, providing a lucrative business model to the perpetrators. Also, command and control servers are increasingly using encrypted communication to control malware on the network and unleash their botnet armies for exploits that include data exfiltration and DDOS attacks.

So far, there has been the regular Internet that we all know and the so-called dark Internet or dark Web, the nefarious digital underworld. When you look at enterprise networks, there are the regular, expected information flows. Then (potentially) there are the clandestine flows between compromised computers: that includes both east-west traffic as the malware spreads within the network and north-south flows to the controlling servers. Black and white is what the picture appears to be, at first glance.

Though today, black and white seems to be converging to a shade of gray. Traffic that looks benign but isn’t. Digital certificates that seem trustworthy but aren’t. Emails that appear to be legit but are phishing, spear-phishing attacks instead. End points that seem to be regular and valid but are instead compromised nodes that are sending out sensitive information.

Into this cybersecurity concoction, add the public cloud. Providers like Amazon, Google, and Dropbox are known to convey security and trust. Clearly their cybersecurity defenses are second to none. But when you have hundreds to thousands of tenants, it is hard to keep up. Spinning up VMs in the cloud is a convenient tool for folks running Command and Control centers and distributing malware. Indeed, 10% of repositories hosted by cloud providers, including some on Amazon and Google, are said to be compromised. Certainly nothing like the cloak of cloud-based familiarity when it comes to hiding cybersecurity exploits.

When it is the likes of Amazon and Google that you are dealing with, everything is expected to be hunky-dory. After all, the cloud is a foundational pillar of the increasingly digital world we are heading to. Nevertheless, with the rapid increase in Shadow IT and everyone signing up for cloud services willy-nilly, it is certainly tough for enterprise IT to stay on top of the goings-on.

For the security professional, zero trust is therefore becoming the operative word. Perimeter-based security is going the way of the dodo. Trust but verify is the slogan being adopted by one and all. Ironic that it is the translation of a Russian proverb. Evidently, things are not what they seem.


Forgive me for I have syn-ed

Is what a successful Denial of Service (DOS) or Distributed Denial of Service (DDOS) attacker might say after taking advantage of syn-ack vulnerabilities in the TCP/IP handshake (the method used to set up the connection before you can start using your favorite website or gaming site). Not just one or two sy(i)ns but an entire flood of them. Enter the Syn Flood denial of service attack. When a client never sends the final acknowledgment of the server’s response to a new connection request, half-open connections are left in its wake. The power to deny service to web resources is what the DOS attacker wields, bringing many a powerful company to its knees. PayPal, Bank of America, and many more learned this truth the hard way in the past.
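The mechanics can be sketched with a toy listener model. The backlog size and behavior are illustrative, not those of any real TCP stack (which also uses timeouts and, often, SYN cookies as a defense):

```python
class Listener:
    """Toy model of a TCP listener's backlog of half-open connections."""

    def __init__(self, backlog=5):
        self.half_open = []
        self.backlog = backlog

    def on_syn(self, src) -> bool:
        if len(self.half_open) >= self.backlog:
            return False            # backlog full: new SYNs are dropped
        self.half_open.append(src)  # SYN-ACK sent, waiting on final ACK
        return True

    def on_ack(self, src):
        self.half_open.remove(src)  # handshake completed, slot freed

server = Listener(backlog=5)
# The attacker sends SYNs from spoofed sources and never ACKs back...
for i in range(5):
    server.on_syn(f"spoofed-{i}")
# ...so a legitimate client's SYN finds no room in the backlog.
assert server.on_syn("legit-client") is False
```

Because the spoofed sources never complete the handshake, the half-open slots are never freed, and legitimate users are the ones denied service.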

To say that the Internet is rife with vulnerabilities is clearly stating the embarrassingly obvious. Take the system of digital signatures and certificate authorities for example. Once a unit of software is digitally signed by a certificate issued by a well-known certificate authority, it is deemed completely trustworthy. But several stolen digital certificates later, malware signed with perfectly valid certificates has become a reality. Trust can only go so far.

As an aside, it is almost a miracle that so many of us readily flash our credit cards on websites, resting assured in the confidence that the credit card provider will pick up the tab if cards get misused. If that confidence should turn out to be misplaced, I doubt any of us would shop online so freely. After all, there are several men-in-the-middle who would be happy to intercept public keys in transit and substitute their own, comfortably reading/modifying any and all traffic passing through them.

Today it has become so easy to be an attacker/hacker, really. Tools for launching attacks are readily available, enabling anyone to get started on a path of online power. Willing and unwilling accomplices are plentiful. When attackers are backed by the power of an entire nation state, the potential to inflict damage is simply gargantuan, going from isolated websites and web servers to wide cross-sections of the Internet and more.

Consider the latest DDOS attack on the DNS provider Dyn, overwhelming its DNS servers with a flood of packets unleashed by a botnet army formed from Internet of Things (IoT) devices. Network World reports that it was a TCP Syn Flood attack. An attack aimed at the Internet infrastructure provider, it literally brought a broad swath of the Internet to a standstill.

And this was aimed at just one Internet provider, namely Dyn. Imagine a coordinated attack that targets a larger number of Internet infrastructure providers. That could perhaps put the brakes on the entire worldwide Internet, not just slow down the US East Coast.

The DNS seems to have become one of the many significant weak links in the entire Internet system. After all, if the servers that resolve domain names to IP addresses become unavailable, it is not possible to get to the place you want to go, making the Internet unusable. With features that readily enable both attack reflection (using DNS servers to send responses to a spoofed IP address, i.e. the victim) and attack amplification (inflating the size of the original request packet into a much larger response), the DNS appears to have become an unwitting accomplice in the DDoS attack. Add botnets to source the incoming DNS requests, with the Internet of Things as a ready supplier of vulnerable devices that lend themselves to botnets, and you have the makings of a truly exponential attack, a solid one-two punch. Sounds like a war of the worlds, Internet of Things vs. the classic Internet!
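Back-of-the-envelope arithmetic shows why the one-two punch is so potent. The packet sizes below are ballpark illustrations, not measurements from any capture:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth multiplier the attacker gains: a small spoofed query
    makes the resolver send a much larger response to the victim."""
    return response_bytes / request_bytes

# Illustrative sizes: a ~60-byte "ANY" query drawing a ~3000-byte
# multi-record response.
factor = amplification_factor(60, 3000)
assert factor == 50.0

# A botnet of 10,000 devices each sourcing 1 Mbit/s of spoofed queries
# would then direct roughly 500 Gbit/s of responses at the victim.
assert 10_000 * 1 * factor == 500_000  # Mbit/s, i.e. 500 Gbit/s
```

Reflection hides the attacker behind the resolvers; amplification multiplies the firepower; the botnet supplies the scale. Each factor compounds the others.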

It may be useful to reflect on what this could potentially mean. With the overwhelming digital push and the all-around rush to the cloud, the dependence on the Internet has been skyrocketing. Per the website Statista: in 2015, retail e-commerce sales worldwide amounted to 1.55 trillion USD (approximately 9% of 2015 US GDP of 17.8 trillion USD) and e-retail revenues are projected to grow to 3.4 trillion USD in 2019. By 2017, 60% of all U.S. retail sales will involve the Internet in some way according to Forrester Research.

So taking the Internet out of commission in some form could bring much of commerce to a screeching halt. The impact on the global economy would, of course, be stupendous. Dare to say it could perhaps even trigger the next recession, which seems to be always waiting in the wings (but that is another story). This attack would indeed be the “sin of sins” that for many would be impossible to forget or forgive.


Digital march meets the WAN divide

Some of us may recall the unmistakable sounds of a modem connecting to the Internet. Once the handshake was successfully negotiated, there was perhaps even a thrill at having made it online. That was the world of the dial-up Internet connection. Exciting times back then. All of 40-50 kbit/second. Fast forward to the 25 Mbps and higher download speeds of today’s “always on” broadband Internet connections, and clearly we are in a different paradigm. Connecting to the Internet no longer seems as exciting, instead something mundane that could even be taken for granted.

Speaking of connectivity and networks, enterprise WANs have come a long way. From dial-up to T1/T3 to ATM to Frame Relay to MPLS, there has been a steady progression of new technologies. Though MPLS is said to be a service rather than a specific technology choice, and could have ATM, Frame Relay, or even Ethernet at its core. Circuit switching to packet switching, so the transition happened. A trait of many of these WAN solutions appears to have been that they were very complex and called for specialized expertise. Also, some of them even ran into the millions of dollars per month, so not cheap. The introduction of MPLS was possibly a game changer. But there don’t seem to have been very many technological advances in WAN technology over the last 15 years or so since MPLS came in. As an aside, this appears to reflect what some have observed to be an absence of fundamental tech advances per se in the last decade or two.

In the past, all was quiet and stable on the enterprise WAN. Users were either at the head office or at branch offices. Applications resided at the corporate office data center. Most of the traffic was branch to head office through the enterprise WAN. Internet traffic would traverse the WAN to the data center (backhauled) before being handed off to the ISP then on. MPLS met this use case very well. There was perhaps no need to make significant advancements to the WAN. Everyone was “happy” with the status quo.

Then things started to change. Lots of reasons. Marked by a rising use of software, the world began to go digital. More and more, the software started to be accessed from the cloud. Public and hybrid cloud became popular choices. There was an increasingly insatiable appetite for consuming video, boosting demands on bandwidth. VDI, or Virtual Desktop Infrastructure, installs added to the clamor for higher connectivity. Then there were the remote users who don’t work from branch offices. Not to forget the Internet of Things, with its billions of devices slowly but surely coming online and hankering for even more bandwidth. No wonder that the enterprise WAN began to come under pressure. MPLS was not geared up for such an intensely interconnected world.

Enter SD-WANs (Software Defined WANs). They promise to meet the demand by virtualizing connectivity options and offering an abstraction layer on top. So the underlying links could be either MPLS or the regular Internet. In order to optimize network reliability and availability, the software would “magically” send traffic through the appropriate link automatically, behind the scenes. There is even talk that SD-WAN could replace MPLS altogether and route traffic entirely through the Internet while meeting QoS and SLA guarantees. Though many users would probably balk at doing away with MPLS. After all who wants to take on too much risk? Therefore, the future seems to lie in a hybrid WAN architecture, part MPLS part Internet.
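The “magic” boils down to policy-driven path selection. Here is a toy sketch, with made-up metrics and thresholds that no particular vendor uses:

```python
def pick_link(links, app_class):
    """Toy SD-WAN policy: latency-sensitive traffic prefers a link that
    meets its SLA; bulk traffic takes the cheapest link. Metrics and
    thresholds are illustrative only."""
    if app_class == "voice":
        candidates = [l for l in links
                      if l["latency_ms"] <= 50 and l["loss_pct"] <= 1]
        return min(candidates, key=lambda l: l["latency_ms"]) if candidates else None
    return min(links, key=lambda l: l["cost"])

# A hybrid WAN: one MPLS circuit, one regular broadband Internet link.
links = [
    {"name": "mpls",     "latency_ms": 20, "loss_pct": 0.1, "cost": 100},
    {"name": "internet", "latency_ms": 45, "loss_pct": 0.8, "cost": 10},
]
assert pick_link(links, "voice")["name"] == "mpls"            # SLA-driven
assert pick_link(links, "bulk-backup")["name"] == "internet"  # cost-driven
```

In a real deployment, the controller measures link metrics continuously and re-steers flows as conditions change, which is exactly the hybrid MPLS-plus-Internet arrangement described above.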

SD-WANs therefore certainly look good from the enterprise standpoint: essentially more bandwidth and performance for significantly less outlay. A no-brainer, one would say. But how about examining this scenario from the ISP end? Service providers are seeing traffic on their backbone networks go up tremendously. However, much of this traffic increase is low-revenue or no-revenue. The investments needed to improve the backbone network (some of which is based on circuit-switching equipment circa the 1990s) are not matched by a corresponding rise in revenues.

Consequently, there appear to be two possibilities. Either the infrastructure investments are not made, in which case performance could start to slide. Or ISPs could start to raise prices for regular broadband Internet access (assuming there are no regulatory constraints). Any way you look at it, the model seems to start to break down.

Perhaps SD-WAN is a temporary solution, a placeholder if you will. More fundamental innovations are probably required in WAN technology to meet the surging demand for bandwidth and service levels. Clearly a new equilibrium has to be found at the intersection of demand for bandwidth from enterprises (and consumers) and the supply of bandwidth from service providers. Assumptions of unlimited broadband and unlimited cloud may not be tenable. An all-digital world may have to wait for the network to catch up, the WAN divide may yet slow the digital march.


Data storage: Sink or swim

The weather is changing. So is data storage. What is the connection? Weather inspired tech industry metaphors, for one. Starting with the cloud itself! And some more spawned by the accelerated pace of data growth. Consider “deluge” or “flood” of big data. How about being “inundated” by data?

Of course the sheer scale of data growth is lending itself to the dramatization. Here are some oft-cited stats. “90% of the world’s data has been generated over the last two years” is a quote from 2013. “Every day, we create 2.5 quintillion bytes of data” is another one. An aside: it appears that Americans and the British have their own definitions of a quintillion, 10^18 in the US and 10^30 in Great Britain (per Dictionary.com). Unstructured data accounts for most of this data growth: an analyst firm graph that depicted storage usage over time showed 70 exabytes of the 2014 projection of 80 EB of storage coming from unstructured data (an exabyte (EB) is 10^18 bytes, which matches the US definition of a quintillion).

With all this action happening on the data growth front, something’s gotta give when it comes to storing it. That something includes the legacy SAN and NAS solutions based on block and file storage that are slowly but surely giving way to newer storage technologies. At this time it may be instructive to take a quick detour into the history of storage: DAS, SAN and NAS. In the beginning, there was DAS directly attaching drives to the server. Got to attach the storage somewhere after all. Then the storage moved away from the servers onto the network. NAS was the first network storage option with remote file access, arriving during the 1980s. Shifting to block storage, Fibre Channel came in during the 1990s and iSCSI in the 2000s. Accelerating Ethernet speeds have led to a decline in Fibre Channel adoption. FC folks introduced Fibre Channel Over Ethernet (FCOE) but that turned out to be somewhat of a non-starter. One of the reasons appears to have been that there were as many FCOE versions as there were vendors. RAID arrays too are falling by the wayside. Given the rate of data growth, the time to rebuild disks started to become unmanageable.

Today there are lots of terms at play in what can seem like a hyperactive, somewhat confusing market. We have hyperconverged, with tightly integrated compute and storage, and hyperscale, where compute and storage scale out independently and indefinitely in a cloud-like fashion. Indeed, hyperscale is what the web giants, the Googles, Amazons, and Facebooks, use. To add to the confusion, a similar-sounding term is Server SAN, not to be mixed up with the traditional SAN: it represents software-defined storage on commodity servers, aggregating all the locally connected, directly attached storage (DAS). Going back to the future with the humble DAS.

But hyperscale storage is where things appear to be headed. Object storage is the technology underlying hyperscale storage, which is already deployed in the public cloud. It is for storing unstructured data, which is clearly where the need is. Not suitable though for structured, transactional data, which is the realm of block storage. Hence block storage will likely not go away anytime soon. On the other hand, legacy NAS is quite another thing; Object storage is chipping away at its share since it does a much better job with the millions and billions of files.

Toward reining in the runaway growth in data, object storage seems just what is needed. However, as in most things in life, there is a catch. For getting data in and out, object storage uses a RESTful API, yet there does not appear to be a widely accepted industry standard specification for this API. As the 800-pound gorilla in the room, Amazon gets to position the S3 RESTful API as the de facto standard. Cloud Data Management Interface (CDMI) is the actual industry standard, created by the SNIA. This should have been the standard that everyone aligned with, but it seems to have few takers. OpenStack Swift appears to have had more luck with its RESTful API. So there seem to be at least three different standards, and vendors get to pick and choose. The question is what happens when Amazon decides to change the S3 standard, which they are perfectly entitled to do given that the standard belongs to them. Presumably there will be a scramble among vendors to adapt and comply. Amazon literally seems to have the industry on a leash. Since they were pretty much the first cloud-based infrastructure services provider, they could say with some justification that they invented the space.
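The friction is visible even at the URL level: the same logical operation is addressed differently per API. The S3 path-style layout below follows Amazon's documented convention; the Swift host is a hypothetical deployment endpoint, since Swift endpoints are deployment-specific:

```python
def s3_path_style(bucket: str, key: str) -> str:
    """Path-style object URL per Amazon's S3 REST API (the de facto
    standard)."""
    return f"https://s3.amazonaws.com/{bucket}/{key}"

def swift_style(account: str, container: str, obj: str) -> str:
    """OpenStack Swift's object URL layout; the host here is a made-up
    example, as each Swift deployment has its own endpoint."""
    return f"https://swift.example.com/v1/{account}/{container}/{obj}"

# The same logical operation, fetching one object, addressed two ways:
assert s3_path_style("backups", "2017/q2.tar") == \
    "https://s3.amazonaws.com/backups/2017/q2.tar"
assert swift_style("acme", "backups", "2017-q2.tar") == \
    "https://swift.example.com/v1/acme/backups/2017-q2.tar"
```

Vendors and applications end up hard-coding one of these layouts, which is why a unilateral change by the dominant player would force a scramble.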

Perhaps users need to step up and use the power of the purse to straighten things out.  Unlikely we can solve this via regulatory fiat! When the data torrents come rushing in and the time comes to start swimmin’, hopefully Amazon is not the only game in town to stop storage systems from sinking like a stone.