
Bitcoin + Lightning = P2P Commerce

In the beginning, there was barter. Everyone had something to buy or sell. Transactions happened freely between regular folks. No middlemen were needed. Presumably, everyone pretty much got what they required (needs were simpler back then, one assumes). Then came currencies in lieu of barter, and the people who issued them. Currencies began to acquire power. Fiat currencies came much later; one could say they were created out of thin air, backed by no physical commodity. Some people had more, some had less or even none. Income disparities were born and gradually grew. These gaps between the haves and have-nots spiraled to the stupendous levels of today, where the top 1% control nearly 40% of America’s wealth. It can be argued that the system of currencies helped build up these inequalities, between countries and between individuals.

All through the power of the Exchange Rate Regime.

Bitcoin and other digital currencies hold out the tantalizing promise of someday going back to a level playing field. One common currency (or set of currencies) for the entire planet. No more national currencies, no more exchange rate driven schisms. Anyone can transact in digital currencies, anytime. But it is still a somewhat distant promise. Among the things standing in the way are the scale and cost of transactions. Then there is politics, but that is another discussion. Coming to scale, Bitcoin is still too slow today: just about 3-4 transactions per second, whereas Visa does about 2,000 transactions per second. At over $7,000 and rising, Bitcoin is becoming more and more expensive. And transaction fees are simply out of whack, a few dollars per transaction depending on how fast you want it to go through, ruling out smaller transactions.

Enter the Lightning Network, a truly pathbreaking solution. It can dramatically expand the scale to millions, even billions, of transactions per second. Forget Visa. It will blow away pretty much everyone by a wide, wide stretch. It performs its uncanny magic by moving transactions off the Blockchain using payment channels. More importantly, it assumes that intermediaries cannot be trusted. So, no hanky-panky possible. Channels can be opened, funded, used to route transactions, and closed at any time. A channel is built on a 2-of-2 multi-signature address, which means that both parties need to sign off to complete a transaction. Half-signed transactions can be freely sent back and forth. Once a fully signed 2-of-2 transaction is broadcast to the Blockchain, the channel closes.
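
A minimal sketch (in Python, with made-up balances and simplified rules, not the actual Lightning protocol) of the core idea: balances move back and forth off-chain, and only the opening and closing transactions ever touch the blockchain.

```python
# Toy model of a 2-of-2 payment channel: balances are updated off-chain,
# and only the opening and closing transactions ever touch the blockchain.
# This is a conceptual sketch, not the real Lightning protocol.

class PaymentChannel:
    def __init__(self, alice_deposit, bob_deposit):
        # The channel is funded by a single on-chain 2-of-2 multisig transaction.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.state_number = 0          # increments with every off-chain update
        self.closed = False

    def pay(self, sender, receiver, amount):
        # An off-chain update: both parties sign the new balance sheet,
        # nothing is broadcast to the blockchain.
        assert not self.closed and self.balances[sender] >= amount
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.state_number += 1

    def close(self):
        # Closing broadcasts the latest fully signed state to the blockchain,
        # which settles the final balance distribution on-chain.
        self.closed = True
        return dict(self.balances)

channel = PaymentChannel(alice_deposit=100_000, bob_deposit=50_000)  # satoshis
channel.pay("alice", "bob", 20_000)
channel.pay("bob", "alice", 5_000)
print(channel.close())   # {'alice': 85000, 'bob': 65000} recorded on-chain
```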

Checks and balances are instituted through cryptography and lock-in times, with the Blockchain available as an impartial arbitrator at any time. Lock times prevent a channel from closing too soon or out of turn. Exchanging the revocation keys for superseded states prevents broadcasting of outdated transactions, as the counterparty can use the revealed key to sweep all the funds of the offending party. Posting transactions to the Blockchain closes the channel and locks in the funds distribution. An entire web of nodes can be constructed to route multi-hop payments anonymously. Consider it an Internet of Money. Transaction fees are expected to be near zero. This opens the spigot on microtransactions.
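
Multi-hop routing hinges on conditional payments. The sketch below (hypothetical Python, boiled down from the hashed time-locked contract idea) shows how a payment can be locked to a hash so that intermediaries can forward it without being trusted.

```python
import hashlib
import secrets

# Conceptual sketch of a hash lock, the building block of the HTLCs used for
# multi-hop routing: the recipient picks a secret, shares only its hash,
# and every hop can safely forward funds locked to that same hash.

preimage = secrets.token_bytes(32)              # known only to the recipient
payment_hash = hashlib.sha256(preimage).hexdigest()

def claim(locked_hash, revealed_preimage):
    """An intermediary releases funds only if the revealed preimage matches."""
    return hashlib.sha256(revealed_preimage).hexdigest() == locked_hash

print(claim(payment_hash, preimage))                  # True: payment settles hop by hop
print(claim(payment_hash, secrets.token_bytes(32)))   # False: wrong secret, refunded after timeout
```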

But the most important benefit is probably the opening of the entire globe as a market, all 7.6 billion humans, through a common currency. The Internet brought about global connectivity. But national borders and exchange rate differentials mean that barriers remain. And then there are the banks and other financial institutions who control the creation and flow of funds.

The blockchain provides a foundational layer for transactions to flow across the globe. As a layer two solution atop the blockchain, the Lightning Network hugely accelerates the movement of transactions and helps realize the original vision behind Bitcoin. Any citizen of the world can access a payment channel to transact in bitcoin or another digital currency, without having to deal with nation states and financial institutions in the middle. The possibilities are breathtaking. Transactions can flow seamlessly, unencumbered, across borders. Frictionless.

Global trade should start to rise with everyone having the opportunity to participate in it. Not just the privileged few. Wealth equalization and growth should follow as more and more people are provided an avenue to generate wealth, with the world as their oyster. Ushering in the world of Person-to-Person (P2P) Commerce, a rising tide that will lift all boats. Let the fun begin.


Ethereum is number two, they try harder

Bitcoin came first; they are “King of the Hill.” Ethereum is number two in the cryptocurrency/blockchain world. But it is of course not an apples-to-apples comparison. Ethereum does have its cryptocurrency, Ether, on its blockchain. But it is much more than a distributed currency. By providing a global runtime environment, the Ethereum Virtual Machine (EVM), consisting of thousands of distributed nodes that run decentralized applications on the Blockchain, it is indeed transformative. These applications are called smart contracts. Transactions trigger and execute functions on the smart contracts. Ethereum is powered by so-called gas, which is what a transaction needs to access the processing power of the Blockchain. Without gas, a transaction comes to a quick halt.
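
As a rough illustration of the gas idea, here is a toy Python model (the operation costs are invented, not the real EVM fee schedule): every operation a contract executes burns gas, and execution halts when the supplied gas runs out.

```python
# Toy gas meter: each operation a transaction triggers consumes gas,
# and execution stops the moment the gas allowance is exhausted.
# Costs below are invented for illustration, not real EVM gas prices.

GAS_COST = {"ADD": 3, "STORE": 100, "LOAD": 50}

def run(program, gas_limit):
    gas_left = gas_limit
    for op in program:
        cost = GAS_COST[op]
        if cost > gas_left:
            return f"out of gas at {op} (gas left: {gas_left})"
        gas_left -= cost
    return f"success (gas used: {gas_limit - gas_left})"

print(run(["LOAD", "ADD", "STORE"], gas_limit=200))   # success (gas used: 153)
print(run(["LOAD", "ADD", "STORE"], gas_limit=100))   # out of gas at STORE
```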

But the word contract could be somewhat confusing. There are some similarities to legal agreements, but at the end of the day, smart contracts are packages of code running on the EVM. Currency is just one of the applications that Ethereum makes possible. There are certainly many more, limited only by the imagination, so long as the applications are built around decentralization and shared state. Voting, prediction, social media, asset sharing, the list goes on. The cool part is these smart contracts are totally automated, no middleman needed. Once deployed, they are unstoppable. Like a runaway train! Facebook, Twitter, and Uber are examples of organizations that could potentially get replaced by these decentralized apps on the Ethereum Blockchain. A step ahead of apps are decentralized autonomous organizations (DAOs), entire organizations that run on the Blockchain, controlled completely by code. They deliver on their goals with silent, purposeful efficiency. No vexing human resource management issues to deal with!

So long as they don’t get hacked. Which is what unfortunately happened to the first DAO, “The DAO.” When launched, it took off with a bang. The idea was to have token holders who become members of the DAO by investing Ether in it. Backed by its corpus of crowdfunded Ether, the DAO would get to develop proposals that are voted on by token holders. It became a wild success. To the tune of 13 million Ether. At $20 per Ether (which was about the price back then), the DAO was worth $260 million. At today’s price of about $300, it would have been worth approximately $3.9 billion! Lots of value riding on what was about 1,000 lines of code.

Unfortunately, it never got that far. A hacker found a way to drain its funds using a vulnerability in its code. SplitDAO was the function in the DAO’s code that enabled token holders to exit at any time with their share of the Ether. By repeatedly calling this function and withdrawing Ether before the logic could update the Ether balance each time, the hacker could get away with much, much more than his due share. He took off with as much as 3.6 million Ether (about 28% of the DAO corpus). As the Ether price soon dropped to $13, the heist became worth about $50 million.
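
This is the classic reentrancy pattern: funds go out before the balance is updated, so a malicious caller can re-enter the withdraw logic before the update lands. A simplified Python simulation of the pattern (not the actual Solidity code of The DAO, and with tiny made-up numbers):

```python
# Simplified simulation of the reentrancy pattern behind the DAO drain:
# funds are sent out BEFORE the caller's balance is updated, so a malicious
# recipient can call back into withdraw() and drain far more than its share.

class VulnerableDAO:
    def __init__(self):
        self.balances = {"attacker": 10, "others": 90}
        self.total_funds = 100

    def withdraw(self, caller, on_receive):
        amount = self.balances[caller]
        if amount > 0 and self.total_funds >= amount:
            self.total_funds -= amount
            on_receive()                     # external call happens first...
            self.balances[caller] = 0        # ...balance is zeroed too late

dao = VulnerableDAO()

def attacker_fallback():
    # Re-enter as long as there is something left to take.
    if dao.total_funds >= dao.balances["attacker"] > 0:
        dao.withdraw("attacker", attacker_fallback)

dao.withdraw("attacker", attacker_fallback)
print(dao.total_funds)   # 0: the attacker pulled out 100 despite owning only 10
```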

Because the blockchain is public, the developers could watch the hack unfold once it was discovered, and could even see the child DAO the funds were going to (they called it the “Dark DAO”); they just couldn’t do anything about it right away. You could call it a robbery happening in broad daylight. Fortunately, there was an approximately 30-day moratorium on the withdrawn amount, which provided a window to consider suitable remedies.

What followed was a series of fascinating counter measures. First off, a white hat hack was carried out to move the remainder of the DAO funds to new child DAOs. Otherwise, it was quite probable that the hacker would have made off with the entire booty. Next on the list was a soft fork that would blacklist any transactions/blocks that referenced the Dark DAO. That would have stopped the hacker from moving his funds, as that would have been a blacklisted transaction. Unfortunately, the soft fork would also have opened the door to DDoS attacks: an attacker could execute compute-intensive transactions that end with a call to the Dark DAO and get a free ride, since no gas would be charged on a blacklisted transaction. Miners would be forced to run and discard them without getting any transaction fees in return.

Ultimately it came down to a contentious hard fork that, much like a time machine, undid the effects of the hack (as if it never happened) and moved all DAO funds to a new DAO with just a withdraw function. Investors could thus get back their funds.

But the hacker did have the last laugh, in a way. Another group of Ethereum users refused to go along with the hard fork and preserved the old blockchain, hack and all. Enter the Ethereum Classic blockchain. They were considered renegades by the Ethereum Foundation, which expected the parallel fork to wither away and die without its support. But it unexpectedly gained some miner support. A competing currency was born, Ethereum Classic (ETC). Today we have ETC and ETH, both in operation. ETC is trading at about $13 with a market cap in the range of $1.2B, whereas ETH is at about $300 with a market cap of $28 billion, way higher. The hacker was able to hang on to the stolen funds in ETC.

For Ethereum, the future looks intriguing and exciting. It certainly appears to have brushed off the effects of the DAO hack and surged ahead. More changes are on the anvil to make the platform production ready. Scaling up the number of transactions, decentralized storage. Thus, ushering in a decentralized world. No more middlemen looking to take their share of influence and wealth. As the number two player, Ethereum sure is trying hard to make this new world a reality.


Watch out! Bitcoin fork ahead!

An eyeball-to-eyeball confrontation has been brewing in the Bitcoin world. Miners on one side. Developers on the other. The reason? Bitcoin has been slowing down for a while. Transactions take too long to confirm, hours to even days. Transaction throughput has been unacceptably low, just about seven transactions per second. That a change is needed, there is no doubt. The question is what is to be changed.

Miners appear to have had a relatively simple fix in mind. Increase the block size from the current 1 MB. The rest of the system stays pretty much the same. Not so fast, is what the developers (Bitcoin Core) have been saying. Increasing the block size is a temporary solution at best. The orders of magnitude change needed to handle anticipated growth in transaction volume is much higher and calls for a much more comprehensive fix. And the correction that developers have been espousing has been available for more than a year.

Essentially, it centers around what is called SegWit (Segregated Witness). The core premise is fairly straightforward. About 99% of a block’s data is taken up by transactions. And about 65% of transaction data is the signature in the inputs used to authenticate the sender of coins. The signature is not required to actually process the transactions, only for the initial validation. SegWit moves the signature (witness) data into a separate witness structure, thus freeing up significant space in the main block.
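
A back-of-the-envelope calculation using the percentages cited above gives a feel for how much room this frees up (illustrative only; real SegWit accounting works on “block weight,” not a simple percentage split):

```python
# Rough arithmetic on the figures cited above: if ~99% of a block is
# transactions and ~65% of transaction data is signatures, roughly how much
# base-block space does moving the signatures out free up?

block_size_mb = 1.0
tx_share = 0.99           # share of the block occupied by transactions
signature_share = 0.65    # share of transaction data that is signatures

space_freed = block_size_mb * tx_share * signature_share
effective_capacity = block_size_mb / (block_size_mb - space_freed)
print(f"space freed: ~{space_freed:.2f} MB")                    # ~0.64 MB
print(f"effective capacity gain: ~{effective_capacity:.1f}x")   # ~2.8x more transactions
```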

Moving the signature out of the transaction also means that it is no longer a part of the transaction hash/ID. That makes it impossible for a hacker to take an existing transaction, change the signature (which would otherwise change the ID), and resubmit the same transaction for approval with a new ID. This bug is referred to as transaction malleability. Fixing it makes it possible to run so-called Layer Two solutions on top of the blockchain. Chief among them is the Lightning Network. If there ever was a transformative solution for Bitcoin, this would be it. LN uses a network of payment channels that run off-blockchain to send bitcoins around. It makes possible billions of transactions per second and micropayments as small as one satoshi (10^-8 bitcoin). Scalability issues would be firmly relegated to history, to be quickly forgotten. So, if IoT devices need to start making automatic payments for services, the LN offers the way. SegWit + LN appear to take Bitcoin to an entirely different level and fulfil its promise as the common man’s digital currency.
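
A toy Python illustration (with a fake transaction structure, not real Bitcoin serialization) of why moving the signature out of the ID computation kills malleability:

```python
import hashlib
import json

# Toy illustration of transaction malleability: if the signature is part of
# the data hashed into the transaction ID, tweaking the signature changes the
# ID; if the witness is kept outside the hash (the SegWit idea), the ID stays
# fixed. Fake transaction structure, not real Bitcoin serialization.

def txid_legacy(tx):
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def txid_segwit(tx):
    stripped = {k: v for k, v in tx.items() if k != "signature"}
    return hashlib.sha256(json.dumps(stripped, sort_keys=True).encode()).hexdigest()

tx = {"from": "alice", "to": "bob", "amount": 1.5, "signature": "3045...original"}
malleated = dict(tx, signature="3045...tweaked-but-still-valid")

print(txid_legacy(tx) == txid_legacy(malleated))   # False: same payment, new ID
print(txid_segwit(tx) == txid_segwit(malleated))   # True: ID unaffected by the signature
```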

But most miners did not seem to want SegWit to activate. The fear presumably is that off-chain transactions will eat into growing transaction fees, reduce their profits, while not presenting any immediate value to them in return. Of course, one could argue that SegWit protects the very future of Bitcoin; the miners who looked at the long term tended to side with SegWit.

Therefore, it has been a long battle to try to introduce SegWit into Bitcoin, through a series of Bitcoin Improvement Proposals (BIPs). There was BIP 9, which made soft fork deployment easier: miners use so-called version bits to signal that they are ready to enforce new rules, giving a heads-up to nodes that it is time to upgrade. Activation of SegWit is through BIP 141, which requires 95% of miners to signal for it using version bit 1 per BIP 9. Unfortunately, acceptance of BIP 141 never crossed a 50% threshold, let alone 95%.
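
Version-bit signaling boils down to a bitmask check on the block version field. A hypothetical Python sketch of measuring support over a window of blocks (the block versions are made up):

```python
# Sketch of BIP 9 style version-bit signaling: a miner sets a designated bit
# in the block version to signal readiness, and activation thresholds are
# measured over a window of recent blocks. Block versions below are made up.

SEGWIT_BIT = 1  # BIP 141 signaling uses version bit 1

def signals(block_version, bit):
    return (block_version >> bit) & 1 == 1

def support_percentage(block_versions, bit):
    signalling = sum(1 for v in block_versions if signals(v, bit))
    return 100.0 * signalling / len(block_versions)

# A made-up window of block version fields (0x20000000 is the base versionbits value).
window = [0x20000002, 0x20000000, 0x20000002, 0x20000002, 0x20000000]
pct = support_percentage(window, SEGWIT_BIT)
print(f"bit {SEGWIT_BIT} signalled in {pct:.0f}% of blocks")  # 60% here; BIP 141 needs 95%
```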

Users and developers finally said, enough is enough, and pushed through BIP 148 to nudge SegWit onto the system as a User Activated Soft Fork (UASF). Beginning the 1st of August (just a couple of days away!), nodes that enforce BIP 148 will reject any block from miners that does not signal SegWit readiness, along with any block that is built on top of a block that does not signal support. BIP 148 was therefore a case of users upping the ante by making definitive action unavoidable. With BIP 148, miners were compelled to move. Ignoring SegWit was no longer an option.

In response, miners came up with SegWit2x. It combines SegWit with a 2MB hard fork at a later date. A compromise that presumably could make both parties happy (though Bitcoin Core developers continued to have misgivings about the solution’s state of readiness). But where BIP 141 required 95% miner support, SegWit2x only requires 80%. Moreover, SegWit2x readiness would be signaled using bit 4 instead of bit 1. However, SegWit2x was incompatible with BIP 141 and BIP 148.

Therefore, BIP 91 was introduced to make SegWit2x compatible with BIP 148. Upon activation of BIP91, all BIP91 nodes will reject any blocks that do not signal support for SegWit through bit 1 (like the original BIP 141). But for it to activate, it only requires 80% of miners to signal support through bit 4 (like SegWit2x). With BIP 91, success seemed at hand! At last! It got 90% miner support through bit 4 signaling thus locking it in.

But BIP 91 in some ways could turn out to be a “smoke screen.” Locking it in through bit 4 is one thing; for SegWit to actually activate, miners still have to follow through with bit 1 signaling, which so far has remained below 50%. Miners could instead back out of BIP 91 at any time. Still, BIP 91 did look like a way out of the impasse. Finally, and not a moment too soon, with the August 1st deadline looming large! But whether SegWit would actually activate remained an open question.

Into this mix, came a stunning development! A group of miners announced their intent to break away with a new currency called Bitcoin Cash.  It increases the block size to 8 MB and removes SegWit, the bone of contention all along! Backers seem to be a few of the largest Chinese mining firms. The actual level of support for Bitcoin Cash seems unclear though. August 1st looks to be a day that will go down in Bitcoin history. What will actually unfold is anybody’s guess! Stay tuned!


On the Bitcoin front lines

It all began with a dramatic vision. That of a digital currency that would replace fiat currency, a currency of the people, not controlled by central banks, with no intermediaries, driven by the power of the network. The author of this vision was someone who went by the name of Satoshi Nakamoto (and who remains anonymous to this day). Out of this vision was born Bitcoin. A currency consisting of digital coins, where a coin is a transaction, or perhaps it would be more correct to say that a transaction comprises one or more coins being transferred (for example, Alice pays Bob x bitcoins). Transactions have inputs (coins coming in) and outputs (coins going out). Each transaction is identified by a hash of its contents as well as the digital signature of the sender, and is defined via a scripting language.

To make sure everything is hunky-dory, transactions need to be validated. For that, they are announced to the network, where the validators or miners vet them and add them to a block of pending transactions. However, for the block to make it into the blockchain, miners need to compete by solving a proof of work puzzle using a so-called nonce value and the hash function. Something that requires significant computing power. The first miner to solve the puzzle gets the privilege of adding the new block to the chain. And a reward of a set number of bitcoins, currently 12.5 coins, delivered via the coinbase transaction. This is the only way that new bitcoins can enter the system. Others are left with no option but to fall in line. They quickly verify the solution, which is to say they confirm that the nonce satisfies the proof of work rule, add the new block to their copy of the chain, and start trying to form the next block. Blocks are strung together by including a hash of the previous block in each new block.
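
A bare-bones sketch of the nonce search in Python, with a trivially low difficulty so it finishes instantly (real Bitcoin hashes a binary block header with double SHA-256 against a far harder target; this only conveys the idea):

```python
import hashlib

# Bare-bones proof-of-work search: find a nonce such that the hash of the
# block contents plus the nonce starts with a few zero hex digits. Real
# Bitcoin uses double SHA-256 over a binary header and a much harder target.

def mine(block_data, difficulty):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, difficulty):
    # Verification is cheap: one hash, which is why other nodes can
    # "quickly verify the solution" and move on to the next block.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("prev_hash|alice pays bob 1 BTC", difficulty=4)
print(nonce, digest[:16], verify("prev_hash|alice pays bob 1 BTC", nonce, 4))
```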

Occasionally there are forks in the blockchain when two or more miners solve the puzzle at the same time. However, one of the forks eventually becomes longer and becomes the working chain. Transactions in the shorter chain’s blocks are returned to the network as pending transactions to be picked up by miners, and the cycle repeats. Nodes are the participants in the Blockchain network, buying and selling coins; they could be individual wallets or coin exchanges. No bank or intermediary in this process, just the network of nodes and miners. The total number of bitcoins that can be issued is capped at 21 million. No more. No quantitative easing possible!

Aptly named the Genesis block, the very first block from Satoshi injected all of 50 bitcoins into the system. Bitcoin took off. From the thousands in 2009 (when the currency was launched), the number of transactions grew to several millions per month in 2017. In the beginning, each coin had essentially zero value; thereafter Bitcoin grew steadily in value and hit parity with the USD in 2011. Since then, Bitcoin has grown erratically in a series of ups and downs, reached an all-time high of $3,000 on June 12th, 2017, and has stayed around the $2,500 level since. Nothing less than phenomenal in its overall growth.

Success of course breeds imitation. Many imitation bitcoins followed. Enter the altcoins. Per Wikipedia, there were more than 710 cryptocurrencies available for trade in online markets as of July 11, 2016. Of these, 667 (about 94%) were based on Bitcoin’s code. The top 10 coins are Ethereum, Ripple, Litecoin, Dash, NEM, Ethereum Classic, Monero, Zcash, Decred, and PIVX. Ethereum is the top contender, backed by big powerhouses: J.P. Morgan Chase, Microsoft, and Intel. Interestingly, Ethereum had a split (hard fork), resulting in Ethereum and Ethereum Classic.

But Bitcoin is the leader. It has yet to experience a hard fork, while competing altcoins have; Ethereum is a case in point. When Bitcoin drops, others tend to swoon. Ethereum crashed to 10 cents this week (on June 21st), though that doesn’t seem to have had anything to do with Bitcoin. Many altcoins have been getting launched through initial coin offerings (ICOs). Some scams in the mix too.

Not to say that Bitcoin has been perfect. With millions of users, daily transactions have grown to the hundreds of thousands. In 2010, a block size limit of 1 MB was introduced to prevent potential DoS attacks from large blocks crippling the network, limiting the throughput to seven transactions per second. Waiting times on transactions have gone up to hours and even days. Getting miners to process transactions faster has required paying them more fees to attract their attention.

Initial solutions revolved around increasing block sizes, a hard fork, which would make older versions of Bitcoin software incompatible. Today, there are two competing solutions: Bitcoin Unlimited (BU), a hard fork, and SegWit, a soft fork. BU turns over power to the miners and mining pools by allowing them to set block sizes. SegWit instead proposes detaching the signature from the rest of the transaction data and moving it to a separate structure at the end of the transaction. This also prevents anyone from mutating the transaction ID to trick a sender into paying again (transaction malleability), since with the signature moved out, changing the ID would require altering signed data and thus nullify the signature.

A solution that takes Bitcoin throughput to an entirely new level is the Lightning Network. Essentially, two parties create a ledger entry on the blockchain and then transfer funds at the “speed of lightning” by opening an off-block payment channel between them. These lightning transactions are seen only by the counter-parties. When the transfers are over, the final state is confirmed on the blockchain through the regular consensus method. It reminds me of how a mobile app continues to function offline and syncs up with its server when connected to the network. The result is near-instant transactions, thousands to millions per second, that are free or cost just a fraction of a cent. Clearly, a huge, huge game changer. Lightning Network and SegWit go hand in hand, as Lightning needs the malleability fix that SegWit provides.

SegWit clearly appears to be a better fix for Bitcoin scalability. Unlike BU that puts power in the hands of miners and mining pools. The very centralization that was anathema to the Bitcoin founders.

To be clear, Bitcoin presents a threat to the current monetary system based on fiat currencies. With SegWit and the Lightning Network, Bitcoin could well become a mainstream currency itself. Not an outcome that would make the stakeholders in the current monetary/financial system happy. That probably explains the recurring drumbeat that Bitcoin is not what matters, it is the underlying Blockchain technology that does. There are likely a few powerful people who would like nothing better than to see Bitcoin slowly but surely fade away. A BU driven hard fork seems to lean in that very direction. It raises questions about who is behind BU. The battle lines are drawn.


WannaCry Wannabes Ahead

Considering the turn of events in the WannaCry ransomware outbreak, here is what it appears to look like from a high level:

  • Based on a vulnerability in Microsoft’s SMB protocol, the NSA develops an exploit called “Eternal Blue.”
  • After hacking the NSA, the Shadow Brokers get hold of artefacts for Eternal Blue and other NSA exploits.
  • Microsoft releases a patch for the SMB vulnerability in March for supported Microsoft Windows platforms.
  • An announcement is made by the Shadow Brokers that they have obtained this trove of NSA tools and exploits.
  • The group tries to auction off the stolen tools, fails, and then selectively dumps some of them for free.
  • Hackers get hold of the tools and launch the WannaCry exploit in the beginning of May, a worldwide outbreak.
  • Folks using older versions of Windows not covered by the March patch and those who had not applied the patch are all especially vulnerable.
  • Microsoft provides an emergency patch for the versions of Microsoft Windows that are past their de-support deadlines.
  • A “kill switch” is discovered where the malware depends on an unregistered domain to carry out its work. Registering the domain deactivates the ransomware exploit, for now…

Essentially, the NSA cache seems to have turbocharged what was a “regular piece” of ransomware malware to enable it to spread like wildfire.

As an aside, it could be assumed that Microsoft was informed by the NSA about the hack, prompting them to come out with the March patch. Additionally, one could ask whether the NSA discovered the SMB vulnerability on their own or were cued about it. Communication could, after all, flow both ways.

Taking a closer look at ransomware itself, where it appears to stand out is its finely tuned use of cryptography to carry out its handiwork. A public and private key pair is generated for each infected machine, the public key is sent to the infected machine, symmetric keys are generated and used to encrypt files on the target machine, and the public key is used to encrypt those symmetric keys. At that point, the files are locked out, irredeemable without the private key sitting on the command and control server. It is all about the use of keys: the encrypted files stay on the machine but remain inaccessible. It is almost like someone locks you in at home and demands a ransom to open the door and set you free.
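
The key handling boils down to standard envelope (hybrid) encryption, the same pattern used legitimately to protect data at rest. A minimal sketch using the Python cryptography package (a generic illustration of key wrapping, not WannaCry’s actual code):

```python
# Generic envelope encryption: data is encrypted with a symmetric key, and
# that symmetric key is itself encrypted with a public key, so only the
# holder of the matching private key can ever unlock the data.
# Requires: pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair; in the extortion scenario described above, the private key never
# leaves the command-and-control server, which is the whole point.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# A symmetric key encrypts the payload; the public key wraps the symmetric key.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"important document contents")
wrapped_key = public_key.encrypt(
    data_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Recovery is only possible with the private key, which unwraps the data key.
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'important document contents'
```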

So long as there are vulnerabilities to be exploited, clearly there will be no shortage of malware and exploits. Given that it is not just the hackers who are interested in vulnerabilities but also the powerful governmental spy agencies, we can expect no letup in such attacks.

NSA toolkits appear to have had exploits for many more vulnerabilities, including for the SWIFT network and Linux systems. From an earlier release, firewalls from Cisco, Fortinet, and Juniper seem to have been among the targets, with the tools enabling remote code execution and privilege escalation. Getting a foothold in a network and sending files to a target system were some of the other hacks that were reported. And there could be many more that the Shadow Brokers are yet to release.

Therefore, it is entirely conceivable that we could be seeing more such attacks that are inspired by NSA material. Several WannaCry wannabes may be in the works, more turbulence ahead.


To secure or to unsecure: a VPN question

“The wood is full of prying eyes.” So is the Internet. What does one do to get away from them all? After all, the IP protocol was not designed with security in mind. There wasn’t much need to worry about security during the ARPANET days.

When it comes to ensuring secure communications on the modern Internet, VPN tunnels have been the way to go for enterprise users. There are several options. First off, you have IPSEC VPN tunnels if you are looking to connect entire networks or subnets to each other. Then there are the SSL VPN tunnels that come in handy if it is a specific server or application or some other resource that you need to reach. If you are looking to tunnel through an incompatible network, then the GRE tunnel would be a good option, with IPSEC bringing in the additional security layer. IPSEC came on the scene first, with an entire suite of protocols: IKE, AH, ESP. Within the IKE protocol, keys are exchanged and parameters are negotiated. IKE Phase 1 establishes the management tunnel and Phase 2 sets up the IPSEC tunnel through which data is transferred. Data in the tunnel is secured using either the AH or ESP protocol. IPSEC is complex. Indeed, there are also some concerns that the complexity was intentionally introduced to hide cybersecurity flaws. But that is another story. SSL VPNs, on the other hand, provide remote access to users via SSL VPN gateways. SSL has enjoyed wider adoption, being less complex and needing just a web browser at the client end, with plug-ins for establishing the tunnel mode.
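
The key agreement at the heart of IKE Phase 1 is Diffie-Hellman. A toy Python sketch with deliberately tiny, insecure numbers (real IKE negotiates large MODP or elliptic-curve groups between the peers):

```python
# Toy Diffie-Hellman exchange, the key-agreement idea at the heart of IKE
# Phase 1. The numbers are deliberately tiny and insecure; real IKE
# negotiates large MODP or elliptic-curve groups between the peers.

p, g = 23, 5                      # public group parameters (toy values)

a_private = 6                     # chosen secretly by peer A
b_private = 15                    # chosen secretly by peer B

a_public = pow(g, a_private, p)   # sent over the wire
b_public = pow(g, b_private, p)   # sent over the wire

# Each side combines its own secret with the other's public value and arrives
# at the same shared secret, from which the session keys are then derived.
a_shared = pow(b_public, a_private, p)
b_shared = pow(a_public, b_private, p)
print(a_shared, b_shared, a_shared == b_shared)   # 2 2 True
```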

VPNs have been in the news lately. Cisco firewalls used to run VPNs were the subject of an NSA exploit: through an attack targeting a weakness in the implementation of IKE, keys used to encrypt communications could be extracted. In the meantime, there have been some interesting developments around Juniper firewalls. It seems that the encryption algorithm was “intentionally” weakened to install a backdoor into the device so that eavesdroppers could tune into the encrypted communications taking place. Similarly, Fortinet firewalls were discovered to have a vulnerability that could be exploited with a script to gain administrator level access. At Palo Alto Networks, a buffer overflow in their SSL VPN web interface could be abused to bypass restrictions that limit traffic to trusted IP addresses.

Looks like a case of backdoors galore.

From the enterprise world, the technology made a leap into the consumer world to meet the ever-increasing demand for privacy and safety, as well as to work around geo-restrictions on media access globally. The market for VPN services has therefore grown dramatically, with several providers competing to win customers. Though there are concerns that have been expressed about privacy. A study of 14 popular commercial VPN providers found 11 of them to leak information, including the websites being visited and the content being communicated. It is said that some VPN providers log their customers’ activity and that all they really provide is a VPN proxy server. A lot depends on trusting the VPN provider. Certainly, it may not be difficult for the provider to listen to the communication going through their servers. Another reported vulnerability could enable attackers to unmask the real IP addresses of client devices, definitely a big problem when hiding their IP addresses is why users sign on in the first place. Also, many service providers use OpenVPN, which relies on OpenSSL and was therefore exposed to the infamous Heartbleed exploit, again a case of keys being exposed through a hack. Some providers leverage outdated protocols like PPTP that can be broken through brute-force attacks.

Consequently, Internet privacy clearly has been turning into an oxymoron for a while now. When VPN devices and services whose raison d’etre is security and privacy have been readily exploited, in circumstances that often look incriminating, it becomes a case of you can “run but you cannot hide” on the Internet. Unfortunately, there is no escaping from those pesky prying eyes. A question some enterprise buyers may have asked is whether they secured their network or potentially un-secured it by installing expensive VPN appliances.


Everything is not what it seems

There is something comforting about the padlock icon on a browser when visiting a “secure” web site using HTTPS. A confidence that the traffic is encrypted and therefore cannot be snooped on by strangers, or worse, by hackers/cybercriminals. Indeed, SSL/TLS, the protocol for secure client-to-server communication, has rapidly increased its footprint in the Internet. The classic SSL/TLS handshake for establishing a new session is pretty neat and elegant: public key encryption for exchanging pre-master secrets, followed by encrypted information flow through symmetric keys. HTTPS pages constitute 42 percent of all web visits today, according to one cited statistic. In 2016, more than two-thirds of North America’s Internet traffic was encrypted, according to another research source.
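
A quick way to see what that handshake actually negotiated, using only Python’s standard library (the host name here is just an example; any HTTPS site will do):

```python
import socket
import ssl

# Open a TLS connection and print what the handshake negotiated: protocol
# version, cipher suite, and the server certificate's subject.
hostname = "example.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("TLS version:", tls.version())        # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher())        # (name, protocol, secret bits)
        print("Server cert subject:", tls.getpeercert().get("subject"))
```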

Further, the Let’s Encrypt service from the Internet Security Research Group is accelerating adoption of encryption in order to deliver SSL/TLS everywhere by providing a free, automated, and open-source certificate authority. Google too seems to have thrown its hat into the ring. It is believed that Google gives higher marks to websites that use encryption while penalizing those that don’t.

So there is definitely a push for more and more SSL/TLS.

However, there is the flip side to this matter. Encryption has become a handy tool for the bad guys for slipping in malware disguised within encrypted information flows, thereby evading detection by typical enterprise network defenses. More than 25 percent of outbound web traffic is said to be encrypted while as much as 80 percent of organizations reportedly do not inspect their SSL/TLS traffic, making it easier for hackers to use SSL/TLS to cover their tracks. Per Gartner, more than half the network attacks targeting enterprises in 2017 will use encrypted traffic to bypass controls, and most advanced persistent threats already use SSL/TLS encryption.

An example of the perils of encryption is ransomware. Public keys were intended to encrypt data in motion between clients and servers as part of SSL/TLS. Enter the ransomware band of digital pirates, who turned that around to encrypt data at rest on the victim’s machine. From then on, encryption became a weapon (AES 256-bit, no less) to hold computers and their valuable data hostage. Pay up or walk the plank, the choice is yours, was the message! And pay up, many did. Providing a lucrative business model to the perpetrators. Also, command and control servers are increasingly using encrypted communication to control malware on the network and unleash their botnet armies for exploits that include data exfiltration and DDoS attacks.

So far, there has been the regular Internet that we all know and the so-called dark Internet or dark Web, the nefarious digital underworld. When you look at enterprise networks, there are the regular, expected information flows. Then (potentially) there are the clandestine flows between compromised computers: that includes both east-west traffic as the malware spreads within the network and north-south flows to the controlling servers. Black and white is what the picture appears to be, at first glance.

Though today, black and white seem to be converging to a shade of gray. Traffic that looks benign but isn’t. Digital certificates that seem trustworthy but aren’t. Emails that appear to be legit but are instead phishing or spear-phishing attacks. End points that seem to be regular and valid but are instead compromised nodes sending out sensitive information.

Into this cybersecurity concoction, add the public cloud. Providers like Amazon, Google, and Dropbox are known to convey security and trust. Clearly their cybersecurity defenses are second to none. But when you have hundreds to thousands of tenants, it is hard to keep up. Spinning up VMs in the cloud is a convenient tool for folks running Command and Control centers and distributing malware. Indeed, 10% of repositories hosted by cloud providers, including some on Amazon and Google, are said to be compromised. Certainly nothing like the cloak of cloud-based familiarity when it comes to hiding cybersecurity exploits.

When it is the likes of Amazon and Google that you are dealing with, everything is expected to be hunky dory. After all, the cloud is a foundational pillar of the increasingly digital world we are heading toward. Nevertheless, with the rapid increase in Shadow IT and everyone signing up for cloud services willy-nilly, it is certainly tough for enterprise IT to stay on top of the goings on.

For the security professional, zero trust is therefore becoming the operative word. Perimeter-based security is going the way of the dodo. Trust but verify is the slogan being adopted by one and all. Ironic that it is the translation of a Russian proverb. Evidently, things are not what they seem.


Forgive me for I have syn-ed

Is what a successful Denial of Service (DOS) or Distributed Denial of Service (DDOS) attacker might say after taking advantage of syn-ack vulnerabilities in the TCP/IP handshake (the method used to set up the internet connection before you can start using your favorite website or gaming site). Not just one or two sy(i)ns but an entire flood of them. Enter the Syn Flood Denial of Service attack. When a client’s acknowledgment of the server’s response to a new connection request is never provided, half-open connections are left in its wake. The power to deny service to web resources is what the DOS attacker wields, bringing many a powerful company to its knees. PayPal, Bank of America, and many more learned this truth the hard way in the past.
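
A harmless Python simulation of the server-side effect (no networking involved, just a model of a fixed-size backlog of half-open connections; the backlog size is an assumption for illustration):

```python
# Purely local simulation of why half-open connections hurt: a server keeps
# a fixed-size backlog of connections that have sent SYN but never completed
# the handshake with an ACK. Once the backlog fills up with half-open
# entries, legitimate connection attempts get turned away.

BACKLOG_SIZE = 128
half_open = set()

def receive_syn(client_id):
    if len(half_open) >= BACKLOG_SIZE:
        return "connection refused"          # backlog exhausted
    half_open.add(client_id)                 # waiting for the final ACK
    return "syn-ack sent"

def receive_ack(client_id):
    half_open.discard(client_id)             # handshake completed, slot freed
    return "established"

# A flood of SYNs that never follow up with an ACK fills the backlog...
for i in range(BACKLOG_SIZE):
    receive_syn(f"spoofed-{i}")

# ...and a legitimate client is now denied service.
print(receive_syn("legitimate-client"))      # connection refused
```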

To say that the Internet is rife with vulnerabilities is clearly stating the embarrassingly obvious. Take the system of digital signatures and certificate authorities for example. Once a unit of software is digitally signed by a certificate issued by a well-known certificate authority, it is deemed completely trustworthy. But several stolen digital certificates later, malware signed with perfectly valid certificates has become a reality. Trust can only go so far.

As an aside, it is almost a miracle that so many of us readily flash our credit cards on websites, resting assured in the confidence that the credit card provider will pick up the tab if cards get misused. If that confidence should turn out to be misplaced, I doubt any of us would proceed to shop online so freely. After all, there are several men-in-the-middle who would be happy to intercept public keys in transit and substitute their own, comfortably reading/modifying any and all traffic passing through them.

Today it is so easy to become an attacker/hacker, really. Tools for launching attacks are readily available, enabling anyone to get started on a path of online power. Willing and unwilling accomplices are plentiful. When attackers are backed by the power of an entire nation state, the potential to inflict damage is simply gargantuan. Going from isolated websites and web servers to wide cross sections of the Internet and more.

Consider the latest DDOS attack on the DNS provider Dyn, overwhelming its DNS servers with a flood of packets unleashed by a botnet army formed from Internet of Things (IoT) devices. Network World reports that it was a TCP Syn Flood attack. An attack aimed at the Internet infrastructure provider, it literally brought a broad swath of the Internet to a standstill.

And this was aimed at just one Internet provider namely Dyn. Imagine a coordinated attack that targets a larger number of Internet infrastructure providers. That could perhaps push the brakes on the entire worldwide Internet, not just slowing down the US East Coast.

The DNS seems to have become one of the many significant weak links in the entire Internet system. After all, if the servers that resolve domain names to IP addresses become unavailable, it is not possible to go to the place you want to go to. Making the Internet unusable. With features that readily enable both attack reflection (using DNS servers to send responses to the spoofed IP address, i.e. the victim) and attack amplification (inflating the size of the original request packet), the DNS appears to have become an unwitting accomplice in the DDoS attack. Add botnets to source the incoming DNS requests, with the Internet of Things as a ready supplier of vulnerable devices that lend themselves to botnets, and you have the makings of a truly exponential attack, a solid one-two punch. Sounds like a war of the worlds, Internet of Things vs. the classic Internet!
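
The arithmetic behind the one-two punch is straightforward. A sketch with illustrative, assumed numbers (these are not measurements of the Dyn attack):

```python
# Illustrative arithmetic for reflection plus amplification. All numbers here
# are assumptions for the sake of the example, not measurements of any attack.

request_bytes = 60            # small spoofed DNS query (victim's IP as source)
response_bytes = 3_000        # large response (e.g. an ANY query with DNSSEC data)
amplification = response_bytes / request_bytes           # 50x

bots = 100_000                # compromised IoT devices in the botnet
queries_per_second = 10       # per bot, deliberately modest

attack_gbps = bots * queries_per_second * response_bytes * 8 / 1e9
print(f"amplification factor: {amplification:.0f}x")
print(f"traffic at the victim: ~{attack_gbps:.0f} Gbps")   # ~24 Gbps with these assumptions
```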

It may be useful to reflect on what this could potentially mean. With the overwhelming digital push and the all-around rush to the cloud, the dependence on the Internet has been skyrocketing. Per the website Statista: in 2015, retail e-commerce sales worldwide amounted to 1.55 trillion USD (approximately 9% of 2015 US GDP of 17.8 trillion USD), and e-retail revenues are projected to grow to 3.4 trillion USD in 2019. By 2017, 60% of all U.S. retail sales will involve the Internet in some way, according to Forrester Research.

So taking the Internet out of commission in some form could bring much of commerce to a screeching halt. The impact on the global economy would, of course, be stupendous. Dare I say it could perhaps even trigger the next recession, which seems to be always waiting in the wings (but that is another story). Such an attack would indeed be the “sin of sins” that for many would be impossible to forget or forgive.


Digital march meets the WAN divide

Some of us may recall the unmistakable sounds of a modem connecting to the Internet.  Once the handshake was successfully negotiated, there was perhaps even a thrill experienced of having made it online. That was the world of the dial-up Internet connection. Exciting times back then. All of 40-50 kbit/second. Fast forward to the 25 Mbps and higher download speeds of today’s “always on” broadband Internet connections, and clearly we are in a different paradigm. Connecting to the Internet no longer seems as exciting, instead something mundane that could even be taken for granted.

Speaking of connectivity and networks, enterprise WANs have come a long way. From dial-up to T1/T3 to ATM to Frame Relay to MPLS, there has been a steady progression of new technologies. Though MPLS is said to be a service rather than represent a specific technology choice and could have ATM, Frame Relay, or even Ethernet at its core. Circuit switching to packet switching, so the transition happened. A trait of many of these WAN solutions appears to have been that they were very complex and called for specialized expertise. Also some of them even ran into the millions of dollars per month, so not cheap. The introduction of MPLS was possibly a game changer. But there don’t seem to have been very many technological advances in WAN technology over the last 15 years or so since MPLS came in.  As an aside, this appears to reflect what some have observed to be an absence in fundamental tech advances per se in the last decade or two.

In the past, all was quiet and stable on the enterprise WAN. Users were either at the head office or at branch offices. Applications resided at the corporate office data center. Most of the traffic was branch to head office through the enterprise WAN. Internet traffic would traverse the WAN to the data center (backhauled) before being handed off to the ISP from there. MPLS met this use case very well. There was perhaps no need to make significant advancements to the WAN. Everyone was “happy” with the status quo.

Then things started to change. Lots of reasons. Marked by a rising use of software, the world began to go digital. More and more, the software started to be accessed from the cloud. Public and hybrid cloud became popular choices. There was an increasingly insatiable appetite for consuming video, boosting demands on bandwidth. VDI (Virtual Desktop Infrastructure) installs added to the clamor for higher connectivity. Then there were the remote users who don’t work from branch offices. Not to forget the Internet of Things, with its billions of devices slowly but surely coming online and hankering for even more bandwidth. No wonder that the enterprise WAN began to come under pressure. MPLS was not geared up for such an intensely interconnected world.

Enter SD-WANs (Software Defined WANs). They promise to meet the demand by virtualizing connectivity options and offering an abstraction layer on top. So the underlying links could be either MPLS or the regular Internet. In order to optimize network reliability and availability, the software would “magically” send traffic through the appropriate link automatically, behind the scenes. There is even talk that SD-WAN could replace MPLS altogether and route traffic entirely through the Internet while meeting QoS and SLA guarantees. Though many users would probably balk at doing away with MPLS. After all, who wants to take on too much risk? Therefore, the future seems to lie in a hybrid WAN architecture, part MPLS, part Internet.
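
A simplified sketch of what that “magic” amounts to: per-application policies evaluated against live link metrics, with traffic steered to whichever underlay link currently satisfies them (all link names, metrics, and thresholds below are invented for illustration):

```python
# Simplified SD-WAN path selection: each application class has a policy, and
# traffic is steered to whichever underlay link (MPLS or broadband Internet)
# currently meets it. Metrics and thresholds are invented for illustration.

links = {
    "mpls":     {"latency_ms": 30, "loss_pct": 0.1, "cost": "high"},
    "internet": {"latency_ms": 55, "loss_pct": 0.8, "cost": "low"},
}

policies = {
    "voip":   {"max_latency_ms": 40, "max_loss_pct": 0.5},   # strict SLA
    "backup": {"max_latency_ms": 500, "max_loss_pct": 5.0},  # best effort
}

def pick_link(app):
    policy = policies[app]
    # Prefer the cheap link whenever it satisfies the application's policy.
    for name in ("internet", "mpls"):
        metrics = links[name]
        if (metrics["latency_ms"] <= policy["max_latency_ms"]
                and metrics["loss_pct"] <= policy["max_loss_pct"]):
            return name
    return "mpls"   # fall back to the premium link

print(pick_link("voip"))     # mpls: the Internet link misses the latency SLA
print(pick_link("backup"))   # internet: good enough, and cheaper
```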

SD-WANs therefore certainly look good from the enterprise standpoint, essentially more bandwidth and performance for significantly lower outlays. A no brainer, one would say. But how about examining this scenario from the ISP end? Service providers are seeing traffic on their backbone networks go up tremendously. However, much of this traffic increase brings low or no revenue. The investments needed to improve the backbone network (some of which is based on circuit switching equipment circa the 1990s) are not matched by a corresponding rise in revenues.

Consequently, there appear to be two possibilities. Either the infrastructure investments are not made, in which case performance could start to slide. Or ISPs could start to raise prices for regular broadband Internet access (assuming there are no regulatory constraints). Any way you look at it, the model seems to start to break down.

Perhaps SD-WAN is a temporary solution, a placeholder if you will. More fundamental innovations are probably required in WAN technology to meet the surging demand for bandwidth and service levels. Clearly a new equilibrium has to be found at the intersection of demand for bandwidth from enterprises (and consumers) and the supply of bandwidth from service providers. Assumptions of unlimited broadband and unlimited cloud may not be tenable. An all-digital world may have to wait for the network to catch up, the WAN divide may yet slow the digital march.


Data storage: Sink or swim

The weather is changing. So is data storage. What is the connection? Weather inspired tech industry metaphors, for one. Starting with the cloud itself! And some more spawned by the accelerated pace of data growth. Consider “deluge” or “flood” of big data. How about being “inundated” by data?

Of course the sheer scale of data growth is lending itself to the dramatization. Here are some oft-cited stats. “90% of the world’s data has been generated over the last two years” is a quote from 2013. “Every day, we create 2.5 quintillion bytes of data” is another one. An aside: it appears that Americans and the British have their own definitions of a quintillion, 10^18 in the US and 10^30 in Great Britain (per Dictionary.com). Unstructured data accounts for most of this growth: an analyst firm graph depicting storage usage over time showed 70 exabytes of the 2014 projection of 80 EB of storage coming from unstructured data (an exabyte is 10^18 bytes, which matches the US definition of a quintillion).

With all this action happening on the data growth front, something’s gotta give when it comes to storing it. That something includes the legacy SAN and NAS solutions based on block and file storage, which are slowly but surely giving way to newer storage technologies. At this point it may be instructive to take a quick detour into the history of storage: DAS, SAN, and NAS. In the beginning, there was DAS, directly attaching drives to the server. Got to attach the storage somewhere after all. Then the storage moved away from the servers onto the network. NAS was the first network storage option with remote file access, arriving during the 1980s. Shifting to block storage, Fibre Channel came in during the 1990s and iSCSI in the 2000s. Accelerating Ethernet speeds have led to a decline in Fibre Channel adoption. The FC folks introduced Fibre Channel over Ethernet (FCoE), but that turned out to be somewhat of a non-starter. One of the reasons appears to have been that there were as many FCoE versions as there were vendors. RAID arrays too are falling by the wayside: given the rate of data growth, the time to rebuild disks started to become unmanageable.

Today there are lots of terms at play in what can seem like a hyperactive, somewhat confusing market. We have hyperconverged, with tightly integrated compute and storage, and hyperscale, where compute and storage scale out independently, indefinitely, in a cloud-like fashion. Indeed, hyperscale is what the web giants, the Googles, Amazons, and Facebooks, use. To add to the confusion, there is the similar-sounding Server SAN, not to be mixed up with the traditional SAN: it represents software-defined storage on commodity servers, aggregating all the directly attached storage (DAS) connected locally. Going back to the future with the humble DAS.

But hyperscale storage is where things appear to be headed. Object storage is the technology underlying hyperscale storage, and it is already deployed in the public cloud. It is for storing unstructured data, which is clearly where the need is. It is not suitable, though, for structured, transactional data, which is the realm of block storage. Hence block storage will likely not go away anytime soon. Legacy NAS is quite another thing; object storage is chipping away at its share since it does a much better job with the millions and billions of files.

Toward reining in the runaway growth in data, object storage seems just what is needed. However, as in most things in life, there is a catch. For getting data in and out, object storage uses a RESTful API. Yet there does not appear to be a widely accepted industry standard specification for this API. As the 800-pound gorilla in the room, Amazon gets to position the S3 RESTful API as the de facto standard. The Cloud Data Management Interface (CDMI) is the actual industry standard, created by the SNIA. This should have been the standard that everyone aligned with, but it seems to have few takers. OpenStack Swift appears to have had more luck with its RESTful API. So there seem to be at least three different standards, and vendors get to pick and choose. The question is what happens when Amazon decides to change the S3 API, which they are perfectly entitled to do given that it belongs to them. Presumably there will be a scramble among vendors to adapt and comply. Amazon literally seems to have the industry on a leash. Since they were pretty much the first cloud-based infrastructure services provider, they could say with some justification that they invented the space.
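
To make the “RESTful API” point concrete, here is a minimal object put/get against the S3-style API using boto3 (the bucket and key names are hypothetical, and credentials are assumed to be configured in the environment; many non-Amazon object stores expose this same API, which is exactly how it became the de facto standard):

```python
# Minimal put/get of an object through the S3-style API via boto3.
# Bucket and key names are hypothetical; credentials are assumed to be
# configured in the environment. Requires: pip install boto3
import boto3

s3 = boto3.client("s3")

# Store an object: a single HTTP PUT under the covers, addressed by
# bucket + key rather than a file path or a block address.
s3.put_object(Bucket="example-archive-bucket",
              Key="2017/07/sensor-readings.json",
              Body=b'{"temperature": 21.4}')

# Retrieve it back: a single HTTP GET.
obj = s3.get_object(Bucket="example-archive-bucket",
                    Key="2017/07/sensor-readings.json")
print(obj["Body"].read())   # b'{"temperature": 21.4}'
```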

Perhaps users need to step up and use the power of the purse to straighten things out.  Unlikely we can solve this via regulatory fiat! When the data torrents come rushing in and the time comes to start swimmin’, hopefully Amazon is not the only game in town to stop storage systems from sinking like a stone.
