[ see also: how and why we implemented local sinkholed killswitch servers ]
In the past few hours a new ransomware strain called WannaCry (also known as WCry or WannaCrypt0) has spread very fast across the Internet and hit many public and private organizations. The ransomware makes use of public exploits from the latest Shadow Brokers leak, in particular the MS17-010 vulnerability that Microsoft fixed on March 14 (two months ago). You can read very good technical posts here, here, here and here, and I also suggest following Hacker Fantastic and MalwareTech on Twitter.
Here I try to summarize my approach to the news, mainly highlighting what we did in my company in the past months and how we monitored WCry from our SOC (Security Operations Center).
There was (and still is) a lot of hysteria, but for people like me who work in a SOC that is not an acceptable mood: you need to stay calm, really understand what is happening, and verify that what you did before is enough; if not, apply emergency countermeasures.
First of all: WCry uses a known (and quietly fixed by Microsoft) vulnerability that was highly publicized after the Shadow Brokers leak. Everyone in the security field knew from that moment (April 14) that a public exploit was available, so this security fix needed the highest priority at the time.
That same day (April 14) our SOC advised our IT staff (we do not do operations ourselves; we firmly believe in the segregation-of-duties principle), asking for the status of that specific patch (we push security updates regularly on a monthly basis by default) and to immediately force the fix onto any systems not yet patched. My colleagues and I monitored the status of the updates over the following weeks.
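The kind of compliance check we kept asking IT about can be sketched in a few lines against an inventory export. This is a minimal illustration, not our actual tooling: the inventory format is a hypothetical hostname-to-KB mapping, and since the MS17-010 KB numbers differ per OS version, the caller passes the set that applies to their fleet.

```python
def unpatched_hosts(inventory, required_kbs):
    """Return hosts that have none of the KBs fixing the vulnerability.

    inventory: dict mapping hostname -> iterable of installed KB article IDs
    required_kbs: set of KB IDs, any one of which means the host is patched
    """
    return sorted(host for host, kbs in inventory.items()
                  if not required_kbs & set(kbs))
```

Tracking the shrinking output of a check like this over the following weeks is what told me how exposed we still were.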
So when WCry and its exploitation technique broke the news, I was well aware of how large the potential attack surface was in my company.
Then I collected the IoCs coming from various sources and fed them into our SOC's near-real-time IoC monitoring engine, also checking for access to the compromised IPs and domains in the previous 24 hours (just to be sure no one had already contacted the IoCs provided).
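The 24-hour retro-hunt part can be sketched roughly as follows. The file names and log columns are assumptions for illustration; a real engine would run continuously against the SIEM rather than over a CSV export.

```python
import csv
from datetime import datetime, timedelta

def load_iocs(path):
    """Load one IP or domain per line, normalized to lowercase."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def retro_hunt(iocs, log_path, hours=24):
    """Return (timestamp, src_ip, dest) for proxy-log entries in the last
    `hours` whose destination matches a known IoC."""
    cutoff = datetime.now() - timedelta(hours=hours)
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects timestamp,src_ip,dest columns
            ts = datetime.fromisoformat(row["timestamp"])
            if ts >= cutoff and row["dest"].lower() in iocs:
                hits.append((row["timestamp"], row["src_ip"], row["dest"]))
    return hits
```

An empty result is exactly what you hope for here: it means no internal host touched the published infrastructure before the blocks went in.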
Then I sent a notification to the IT operations department to block the IPs and domains I had collected from various sources working in the same field, including other big Italian companies (thanks a lot for the info sharing, guys). This task took some time, but in the meantime I was monitoring the network and proxy logs from my SOC. No one can escape 🙂
Over the next hours I read a lot of news (mainly on Twitter) and got important updates from some friends; for example, I learned that McAfee had provided an emergency DAT file with updated signatures, and I told my colleagues to push that DAT through our ePO console ASAP.
Throughout the incident our SOC was in charge of security coordination and real-time monitoring, a very challenging and tricky task.
After things calmed down, I implemented some custom rules to detect potential lateral movement based on the ransomware's behavior.
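To illustrate the behavior such rules look for: WannaCry spreads by scanning TCP port 445 (SMB), so a single internal host suddenly connecting to many distinct peers on that port is a strong lateral-movement signal. A minimal sketch over netflow-style records follows; the record shape and the fan-out threshold are assumptions to be tuned against your own baseline, not the actual rules we deployed.

```python
from collections import defaultdict

FANOUT_THRESHOLD = 20  # distinct port-445 destinations per source; tune to baseline

def smb_fanout_alerts(flows, threshold=FANOUT_THRESHOLD):
    """Given (timestamp, src_ip, dst_ip, dst_port) records, return a dict of
    source IPs that contacted `threshold` or more distinct hosts on TCP 445,
    mapped to the count of distinct destinations."""
    peers = defaultdict(set)
    for ts, src, dst, port in flows:
        if port == 445:
            peers[src].add(dst)
    return {src: len(dsts) for src, dsts in peers.items() if len(dsts) >= threshold}
```

In practice you would window this over a few minutes and whitelist legitimate scanners (vulnerability assessment hosts, backup servers) to keep the false-positive rate manageable.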
So in the end, I think today's lesson is that we can blame the NSA for not disclosing the vulnerability earlier, but we also have to blame the (many) people who underestimated the impact of MS17-010; today they are responsible for their users and for the impact of this ransomware wave.
And of course it’s important to understand that liability lies in what we do before, not after, always keeping in mind that we are not invulnerable.