
How the FREAK SSL Flaw Could Have Been Prevented

Since cryptographers from IMDEA, INRIA, and Microsoft Research found a serious vulnerability in the SSL/TLS security standards used to keep passwords and other sensitive information safe in modern browsers, security researchers have been scrambling to determine the full extent of what was affected.

Since then, it has been discovered that Microsoft’s (NASDAQ: MSFT) server software was vulnerable, along with a host of sensitive websites, including those of Facebook (NASDAQ: FB), American Express (NYSE: AXP), the NSA, the White House, and others. Many of the most popular web browsers were vulnerable to the exploit as well: the scope of the problem was, on the whole, remarkably broad.

Users of vulnerable computers could quickly find themselves on the receiving end of man-in-the-middle attacks that could be used to steal payment information, passwords, and other extremely sensitive data. Webmasters fared even worse, as the vulnerability could be used to inject malicious code into the pages their servers delivered.

Companies were quick to roll out patches and fixes for the attack, but this whole mess could have been mitigated, and it should never have happened in the first place.

Who is responsible for the FREAK SSL exploit?

Who is to blame for a flaw in the most commonly used security protocol in the world, one that affected more than a third of websites offering SSL?

According to Johns Hopkins University cryptographer Matthew Green, the flaw was built into SSL from the very start. “The SSL protocol itself was deliberately designed to be broken,” Green wrote on his blog.

Back in the 1990s, when computers were significantly slower than they are today and the World Wide Web was still in its infancy, cryptography was not very strong by modern standards. After Netscape revealed its new SSL technology, the U.S. government was quick to regulate the standard. U.S. versions of the browser came with 1024-bit public keys, but these keys could not be exported to other countries. The international edition was significantly weaker, with 512-bit public keys.

Since the government made this rule in the interest of being able to break into other nations’ SSL traffic, the standard was quite literally designed to be broken.

Since the 1990s, of course, computers have moved on, and so has politics. The U.S. has relaxed its laws on exporting encryption, and in 2013 a broad push was made to introduce 2048-bit SSL encryption, which is now the standard across the Internet.

So why, fifteen years later, are we being haunted by SSL’s poor security?

Because servers and browsers offered varying degrees of encryption, ‘cipher suites’ were used to negotiate the strongest encryption standard available to both a client and a host. While export-grade suites are barely used nowadays, the option still exists in the clockwork behind operating systems and browsers. The essence of the FREAK attack is therefore very simple: intercept a vulnerable client and downgrade its encryption from standard RSA to ‘export RSA’.

The resultant encryption is so weak by today’s standards that it can be cracked in a matter of hours using Amazon Web Services.
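To see why a downgraded key is so fragile, here is a toy sketch (with deliberately tiny, hypothetical numbers, not a real attack) of the math involved: RSA’s security rests entirely on the difficulty of factoring the public modulus, and a 512-bit export modulus is small enough that the factoring step below becomes feasible with rented cloud compute.

```python
# Toy illustration (NOT the real FREAK attack): RSA security rests on
# the difficulty of factoring the public modulus n = p * q. With the
# tiny numbers below the factoring is instant; a 512-bit export key
# falls to the same math, only with far more compute behind it.

def factor(n):
    """Trial division -- hopeless against real keys, instant on toys."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

# A toy 'export-grade' public key (hypothetical textbook numbers).
p, q = 61, 53
n, e = p * q, 17                 # public key: (n, e)

# The attacker factors the modulus...
fp, fq = factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)              # ...and derives the private exponent.

# With the private exponent, any intercepted ciphertext is readable.
msg = 42
ciphertext = pow(msg, e, n)
recovered = pow(ciphertext, d, n)
assert recovered == msg
```

The same derivation with a genuine 512-bit modulus is what researchers estimated could be done in hours on Amazon Web Services in 2015.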

FREAK never needed to be an issue, and it’s the classic result of companies failing to keep up with rapidly deprecating technology standards.

Fixing the FREAK SSL vulnerability

Here are three ways to prevent another FREAK attack from occurring:

  1. The government should not try to regulate Internet security standards. The government should be involved in security, and it should certainly regulate the security of its own systems: but it should not pass laws that hamper the development of security technologies in the private sector. Criminals will continue innovating, and computers will only get faster. The government cannot shut down the Internet, or control its rate of progress, and it shouldn’t try. When it does, bad things happen.
  2. Software and web developers – especially larger ones, like Microsoft and Facebook – must actively curate their software for deprecated standards and disable them. It can be a pain and a hassle to keep changing systems when old ones that worked in the past are still in place. But a little pain in the present could prevent massive disasters down the road.
  3. Consumers must be willing to let go of old, familiar technologies and consistently upgrade to newer, safer ones. Individuals – and especially companies – who insist on keeping software that is decades out of date must dedicate the time and money to keep up with the changing environment of threats by upgrading. At the very least, if they must keep the old solutions, they should actively seek ways to improve and safeguard them against potential exploits (similar to what the active Windows XP community is doing now).
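As one concrete illustration of the second point, a server operator can refuse deprecated cipher suites outright rather than trusting library defaults. This is a minimal sketch using Python’s standard `ssl` module (assuming Python 3.6+ linked against a modern OpenSSL); the exact cipher string is one reasonable choice, not the only one.

```python
import ssl

# Build a server-side TLS context and explicitly forbid export-grade,
# anonymous, and null cipher suites, preferring modern AEAD suites
# with forward secrecy.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!EXPORT:!aNULL:!eNULL:!MD5")

# Sanity-check that nothing export-grade survived in the enabled list.
enabled = [c["name"] for c in ctx.get_ciphers()]
assert all("EXP" not in name for name in enabled)
```

Modern OpenSSL builds have removed export ciphers entirely, but stating the exclusion explicitly documents intent and protects against older or misconfigured toolchains.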

This isn’t a novelty, and it isn’t rocket science. This is common sense, and it’s what people should be doing in the first place.