What Lessons Can We Learn From the WannaCry/WanaCrypt0r Outbreak?
by Tom Cross, Chief Technology Officer
At this point a great deal of ink has been spilled on the WannaCry/WanaCrypt0r ransomware worm outbreak that began on Friday, May 12th. Rather than repeating operational information that has been reported elsewhere, I want to discuss what this incident means for the long term. What went wrong here and what do we need to do about it?
It’s important to keep in mind that this situation is still unfolding. If you’re interested in operational advice, I recommend reading Microsoft’s guidance for customers. Due to the scale of these attacks, Microsoft took the highly unusual step of releasing patches on Friday for several unsupported versions of their operating system, including Windows XP, Windows 8, and Windows Server 2003.
Also noteworthy are a detailed technical write-up on the malware by Matt Suiche and a post from a blog called MalwareTech. MalwareTech registered a domain found in Friday's WanaCrypt0r variant that turned out to be a killswitch, stopping the spread of the malware. Unfortunately, new variants of the malware have since emerged, some with different killswitches and others with none at all. Matt Suiche has an update on some of the new variants here.
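The killswitch mechanism itself is simple: the worm tried to contact a hard-coded, unregistered domain and stopped spreading if the lookup succeeded, which is why registering that domain halted Friday's variant. Here is a minimal sketch of that style of check in Python; the domain names used below are placeholders, not the actual killswitch domain:

```python
import socket

def killswitch_triggered(domain: str) -> bool:
    """Mimic the worm's check: return True if the domain resolves.

    The real malware halted when this check succeeded, which is why
    registering the domain acted as a killswitch.
    """
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False
```

Note that this also explains why running the malware in a sandbox that answers all DNS queries could accidentally trigger the killswitch, and why variants without the check are harder to stop.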
Putting WanaCrypt0r in Historical Perspective
One of the narratives that has emerged about WanaCrypt0r is that the timeline and scale of the attacks are historically unprecedented, fueled by the availability of exploit code which was stolen from the NSA. While WanaCrypt0r is certainly a significant event, this narrative is a bit overblown. WanaCrypt0r is not dissimilar from other Internet worm outbreaks that have occurred in the past, and it is the sort of event that administrators should be prepared to defend their networks against.
From a timeline perspective, network operators had several months to prepare for this attack. Back in January of this year, US-CERT began warning people to disable the obsolete SMBv1 protocol, and Microsoft released patches for the underlying SMB vulnerabilities (MS17-010) on March 14th.
This timeline is similar to that for the Conficker worm. The vulnerability Conficker exploited was patched on October 23rd, 2008. The Conficker worm started spreading a month later on November 20th. The Blaster worm started spreading on August 11th, 2003, exploiting a vulnerability that was disclosed on July 16th of that year.
From a scale perspective, this outbreak is also comparable. As of Sunday, May 14th, MalwareTech had seen approximately 198,000 addresses hitting its WanaCrypt0r sinkhole/killswitch. Blaster infected approximately 423,000 systems over the course of a week in 2003, and Conficker ultimately infected millions of hosts.
One aspect of WanaCrypt0r that is somewhat new is the ransomware component: once hosts are infected, recovery is much more difficult than it was in the historical worm outbreaks I'm comparing it to. Unless, of course, you've got good backups.
Why Can’t Organizations Just Patch?
Organizations are safe from WanaCrypt0r if they follow operational best practices that matured a decade ago, when worm outbreaks like this were more frequent. All you need to do is run operating system software that is still supported, and make sure that all of your systems get the latest patches every month. Other layers of defense also help, such as keeping good backups, running decent network IPS and anti-malware systems, and disabling obsolete protocols like SMBv1.
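As a practical illustration of that last point, a quick way to check whether a host is exposing the SMB port that WanaCrypt0r spreads over is a plain TCP connect test. This is a rough sketch, not a full SMBv1 detector: it confirms the port answers, not which protocol dialects are enabled.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """TCP-connect check against the SMB port WanaCrypt0r used to spread.

    True means something accepted the connection; it does not prove
    that SMBv1 specifically is enabled on the host.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running a check like this from an Internet-facing vantage point against your own address space is a cheap way to find hosts that should never have been reachable on port 445 in the first place.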
If all of this is so easy and so standard, why did WanaCrypt0r cause so much chaos, including disabling hospital networks and automobile production facilities? Professor Steven Bellovin posted a great comment about this on his blog. The short answer is that patches sometimes break production systems, particularly in environments where unusual hardware and software are being used, which is exactly what you’d find on a hospital network or inside of a factory. Therefore, organizations with this sort of equipment take longer to roll patches out.
If You Can’t Patch, You Must Segment
Fortunately, there is an answer to this quandary. If you cannot keep systems patched up to date, you need to completely segment those systems off from systems that have access to the Internet. There is no reason that legacy systems that directly manage medical equipment or automobile production need to be on the same network as systems that are used to browse the web or read email. Although the decision to segment carries some inconveniences, you really have a clear choice when it comes to threats like WanaCrypt0r — either patch, segment, or suffer.
Of course, I’m very aware of the difficulty associated with creating and maintaining good physical segmentation in computer networks. That’s why two years ago John Terrill and I cofounded a company, which was recently acquired by OPAQ. We built a technology that is specifically designed to make segmentation projects easier and less expensive, by allowing segments to be defined in software, based on business constructs such as users and applications, rather than the constraints of the underlying networking hardware. We are integrating the micro-segmentation capability into the OPAQ platform, so that OPAQ customers can segment their networks at the touch of a button.
The Attacks Could Have Been Much Worse
We live in a time when multiple nation states are developing the capability to exploit vulnerabilities in computer systems. This sort of activity is now a standard part of intelligence collection and war fighting throughout the world. Occasionally, vulnerabilities, exploit tools, and other artifacts of these activities will find their way onto the public Internet. In this case, an exploit that purportedly belonged to the NSA was leaked to the public, but only after patches were generally available.
Incidents have happened in the past with much tighter timelines. For example, the Stuxnet malware, which was reportedly used to attack uranium enrichment facilities in Iran, resulted in the public disclosure of a vulnerability in Microsoft Windows before patches were available. In that case Microsoft acted quickly, closing the hole in two weeks, but a lot of attack activity can occur in a couple of weeks, and a number of malware operators took advantage of the opportunity at that time. It is almost inevitable that we will see events like this again in the future.
If segmentation is necessary when you cannot patch, then preparing for future events where patching cannot be done quickly enough means that everyone needs better segmentation. We need to move as close as possible to a “zero trust” model of operating computer networks, so that attacks which strike a particular host cannot spread beyond the minimum areas of the network that host needs to access.
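One way to picture a "zero trust" model is as a default-deny allowlist: a flow is permitted only if it is explicitly enumerated, and everything else is refused. The sketch below is illustrative only; the user and application names are hypothetical.

```python
# Hypothetical inventory of permitted (source, destination) flows.
ALLOWED_FLOWS = {
    ("radiology-workstation", "pacs-server"),
    ("assembly-line-hmi", "plc-historian"),
}

def may_connect(src: str, dst: str) -> bool:
    """Zero trust in miniature: deny by default, allow only enumerated flows."""
    return (src, dst) in ALLOWED_FLOWS
```

Under a policy like this, a compromised workstation can reach only the handful of systems it was ever supposed to talk to, which is exactly the property that stops a worm from fanning out across a flat network.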
At OPAQ Networks it is our goal, both now and over the next few years, to build and integrate technologies that help our customers easily achieve computer networking that is:
The WanaCrypt0r incident underscores the importance of these four objectives, and demonstrates that for security professionals, there is still a great deal of work left to be done.