When we are woken suddenly from sleep, instinct swiftly kicks in while we are still temporarily disoriented. The WannaCry ransomware worm was one such rude awakening for many organisations that were unprepared for the outbreak and its fallout.

Gut reactions ruled the day. Some organisations took the drastic step of cutting off their email and internet access because early, confusing information led them to believe that phishing was the infection vector. Some that did so turned out not to have been infected, but they were not prepared to take the risk.

After striking on a Friday afternoon, WannaCry gave many systems engineers sleepless nights as they tried to get environments protected, systems patched or, in some cases, back online after being encrypted.

WannaCry’s high profile and rapid spread were partly to blame for the panicked response, but I believe there was no excuse for being caught so badly off guard. Ransomware is one of the fastest-growing security risks. Reported infections were up 50% over the past year, according to the 2017 Verizon Data Breach Investigations Report.

For me, the WannaCry fallout showed the need for a more proactive, systemic and process-driven approach to patching. We often see security budgets invested in the latest firewalls and stringent access control lists, while other aspects get overlooked. Rather than risk ‘upsetting’ an important application that runs on an older operating system by patching it, too many organisations rely on gateway security to fend off threats. That is not enough.

I like to err on the side of optimism, so I would say that a lot of organisations do have patching processes; they are just a lot looser than they should be. Any organisation with a predominantly Windows environment needs to pay attention to what each new patch in Microsoft’s monthly ‘Patch Tuesday’ release means, and to understand the vulnerability risk it takes on by choosing not to update. Having a staging environment for reviewing patches before rollout is a worthwhile investment.
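That kind of review can also feed a simple compliance check. The sketch below is purely illustrative and not from the original article: it assumes a Windows host with PowerShell’s Get-HotFix cmdlet available, and the REQUIRED_KBS list (here KB4012212, the update associated with MS17-010, the flaw WannaCry exploited) stands in for whatever patches an organisation’s staging review has approved as critical.

```python
# Minimal, illustrative sketch: flag a Windows host that is missing
# required security updates. REQUIRED_KBS is an assumed example list.
import subprocess

REQUIRED_KBS = {"KB4012212"}  # e.g. the MS17-010 update relevant to WannaCry


def installed_hotfixes() -> set[str]:
    """Return the set of installed hotfix IDs reported by Get-HotFix."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}


if __name__ == "__main__":
    missing = REQUIRED_KBS - installed_hotfixes()
    if missing:
        print("Missing critical patches:", ", ".join(sorted(missing)))
    else:
        print("All required patches installed.")
```

In practice an organisation would run something like this across its estate from its patch-management tooling, but even a rough check makes the gap between ‘we have a process’ and ‘we are actually patched’ visible.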

With a properly defined patching process in place, it should be possible to take certain machines offline at scheduled times to install a critical patch. That might be hardware that isn’t considered IT as such, like a medical device or a point-of-sale (PoS) system, but that runs an older version of Windows and is business-critical.

I don’t buy the argument that starts with ‘Can we afford to lose that system for two or three hours while we patch?’ A more appropriate question is: can you afford to let a criminal decide how long that system is down, or whether it comes back at all?

At its core, security has to be about more than just locking down an environment or restricting access. That means shifting the mindset towards securing the business and keeping it running: minimising the risk of outages or loss of data. It means understanding the impact of being unable to access important information, and knowing how quickly you can get from a ‘down’ state back to fully operational. How many organisations had an incident response process that comfortably dealt with this type of problem?

Looking back, people overreacted to WannaCry. Good security would have prevented much of the panic: cool heads, solid processes, and an investment in security that is appropriate to your environment. That spans everything from technology such as next-generation firewalls and intrusion prevention systems to processes such as incident response, user awareness training, regular patching, maintenance and backups.

Let’s not waste the opportunity WannaCry has given us, and move away from managing security with hindsight.

– Karl Kearney, Solution Architect, Integrity360

Article courtesy of TechCentral.ie