By Matt Williams
In March, WikiLeaks dumped 8,761 Central Intelligence Agency (CIA) documents collectively known as “Vault 7.” These documents described what was essentially the agency’s armory of cyber weapons: malware, viruses and Trojans used for espionage purposes. More importantly, they contained information about zero day vulnerabilities the CIA had been using to hack computers, tablets, smartphones and other devices for intelligence gathering.
That entire armory was made available to hackers in one fell swoop, in what Wired called “a one-stop guide to zero day exploits.”
What is a Zero Day Threat?
A zero day threat exploits a vulnerability that the software vendor has had zero days to fix: a flaw for which no patch yet exists. Here are the scenarios in which zero day threats play out:
- In many cases, these threats are first identified by penetration testers and white hats -- the good guys -- which gives vendors time to issue emergency patches before attackers catch on.
- In other cases, such as the CIA example, they’re leaked, which puts the good guys and the bad guys on even footing.
- Then there are occasions in which hackers -- the bad guys -- find the vulnerability first and can exploit the flaw for nefarious purposes before anyone else knows it exists.
In the past few years, the prevalence of these zero day threats has spiked.
In 2012, 14 zero day vulnerabilities were discovered. This number jumped to 23 in 2013, then inched up to 24 in 2014. But in 2015 – the most recent year with data – the number leaped to 54, a 125 percent year-over-year increase, according to Ars Technica.
How Can IT Departments Protect Against Zero Day Threats?
When it comes to preventing zero day threats and new, signatureless, or mutated malware from executing, the most effective method is application whitelisting.
Consider, for instance, that Web browsers are some of the most prolific sources of zero day exploits. An unsuspecting user may visit a rogue website, at which point malicious code on that site can exploit vulnerabilities in the browser. From there, it is much easier for malware to execute on a system, without the user having taken any deliberate action.
This is why active, layered protection with application control is so crucial. In addition to a firewall, which is useful for blocking known threats, a layered approach uses real-time scanning on the Internet and on individual machines to identify suspicious activity. This builds another key layer of defense, making infiltration significantly more difficult.
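As a rough illustration of what one such scanning layer might look like, the sketch below polls a download folder, hashes any new files, and flags those that match known-bad signatures. It is a minimal sketch only: the watch directory and the hash set are hypothetical placeholders, and real endpoint products hook into the operating system rather than polling a folder.

```python
# Minimal sketch of a real-time scanning layer: poll a directory, hash new
# files, and flag any that match known-bad signatures. WATCH_DIR and
# KNOWN_BAD_SHA256 are hypothetical placeholders, not values from any product.
import hashlib
import os
import time

WATCH_DIR = "/home/user/Downloads"   # hypothetical directory to monitor
KNOWN_BAD_SHA256 = {
    "0" * 64,                        # placeholder digest of a known-bad file
}

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_forever(interval: float = 5.0) -> None:
    """Poll the watch directory and report files matching known-bad hashes."""
    seen = set()
    while True:
        for name in os.listdir(WATCH_DIR):
            path = os.path.join(WATCH_DIR, name)
            if path in seen or not os.path.isfile(path):
                continue
            seen.add(path)
            if sha256_of(path) in KNOWN_BAD_SHA256:
                print(f"ALERT: known-bad file detected: {path}")
        time.sleep(interval)

if __name__ == "__main__":
    scan_forever()
```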
Application control takes this a step further by creating a repository of allowed executables. Rather than blacklisting known malicious software (which your existing antivirus and perimeter tools should already do), an application whitelist prevents any executable program (known or unknown) that does not have explicit administrative authorization from launching. All program executions on computers and servers are thereby monitored in real time and, ideally, in conjunction with an active protection tool that can spot unusual or malicious activity, even in programs that are otherwise trustworthy.
As a result, malware with previously undiscovered or undocumented signatures cannot run. Likewise, even if a zero day vulnerability or advanced persistent threat somehow enables the injection of malware into the system, that malware won’t actually be able to launch. The situation is effectively defused.
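To make the whitelisting idea concrete, here is a minimal sketch of a launcher that refuses to start any executable whose hash is not on an administrator-approved list. The allowlist entry is a hypothetical placeholder, and in practice this enforcement happens at the operating-system level (for example, through a feature such as Windows AppLocker) rather than in a wrapper script.

```python
# Minimal sketch of application whitelisting: only launch a program if its
# SHA-256 digest appears in an administrator-approved allowlist.
# ALLOWED_SHA256 holds hypothetical placeholder entries.
import hashlib
import subprocess
import sys

ALLOWED_SHA256 = {
    "0" * 64,   # placeholder digest of an explicitly approved executable
}

def is_whitelisted(path: str) -> bool:
    """Allow execution only if the file's digest is explicitly approved."""
    with open(path, "rb") as handle:
        digest = hashlib.sha256(handle.read()).hexdigest()
    return digest in ALLOWED_SHA256

def launch(path: str, *args: str) -> None:
    """Run the program if it is whitelisted; otherwise block and report it."""
    if not is_whitelisted(path):
        print(f"BLOCKED: {path} is not on the application whitelist")
        return
    subprocess.run([path, *args], check=False)

if __name__ == "__main__":
    launch(sys.argv[1], *sys.argv[2:])
```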
To stop sophisticated threats that cut through perimeter defenses like a hot knife through butter, IT administrators need a simple way to put all of this in place, along with the ability to customize privileges and application access by user.
Local government IT departments need:
- Granular control: Refine and organize application control through publisher-based approvals, policy-based control, and protection at the local machine level.
- Flexibility: The freedom to create tailored policies for different users and groups to match their unique computer usage requirements (a minimal policy sketch follows this list).
- Centralized management: Deployment and configuration must be possible via a single web-based or on-premises console.
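As a minimal sketch of how such tailored policies might be modeled, the snippet below combines publisher-based approvals with explicitly approved hashes on a per-group basis. The group names, publishers, and digests are hypothetical; commercial application-control products express these rules through their own management consoles.

```python
# Minimal sketch of per-group application-control policies that combine
# publisher-based approvals with explicitly approved hashes. All group names,
# publishers, and digests below are hypothetical examples.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WhitelistPolicy:
    """Application-control policy applied to one user group."""
    group: str
    trusted_publishers: set = field(default_factory=set)   # code-signing publishers
    allowed_hashes: set = field(default_factory=set)        # individually approved binaries

    def permits(self, publisher: Optional[str], sha256: str) -> bool:
        """Permit execution if signed by a trusted publisher or explicitly approved."""
        return publisher in self.trusted_publishers or sha256 in self.allowed_hashes

# Hypothetical tailored policies for different departments
POLICIES = {
    "finance": WhitelistPolicy(
        group="finance",
        trusted_publishers={"Microsoft Corporation"},
    ),
    "it-admins": WhitelistPolicy(
        group="it-admins",
        trusted_publishers={"Microsoft Corporation", "Mozilla Corporation"},
        allowed_hashes={"0" * 64},   # placeholder digest of an in-house tool
    ),
}

# Example check: an unsigned, unapproved download in the finance group is blocked.
print(POLICIES["finance"].permits(publisher=None, sha256="f" * 64))  # False
```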
A tangential benefit of these capabilities is that organizations can make sure computers, servers and bandwidth are used only for their intended purposes and not as vessels for malicious activity.