
Despite immense spending in this space, breaches continue. They are more common (though perhaps only more widely discussed) and grow more severe in scope. It’s time to stop believing the hype and face facts: a security posture built primarily around the idea of prevention will fail. Period. It is troubling that we put so much trust in preventive solutions despite continuous evidence that they don’t prevent much of anything. The reason is simple: our attackers are humans. Preventive technology works only against known, well-defined threats. Until artificial intelligence becomes an affordable component of the average information security solution, a human attacker needs to succeed only once, against one victim, to gain access to the typically unprotected core of the victim’s network.
Perhaps this misguided reliance on prevention technology has bred a false sense of security, which would help explain why 70% of breaches are discovered by parties other than the victim. Or maybe the cause is that organizations don’t read the instructions for their technology purchases and fail to keep up with the maintenance the technology requires to stay relevant against ever-changing attack surfaces and adversary capabilities. Or maybe they soon realize that the most effective prevention technology is also incredibly frustrating for the average user, resulting in workarounds or outright abandonment.

This doesn’t mean preventive technology should be killed off entirely. It certainly has a place in a comprehensive security posture. However, it should be localized to the most critical resources and applied aggressively in a manner that impacts the fewest users possible. Let me explain.
Imagine a company whose very existence relies on protecting intellectual property (IP) that cost millions or billions of dollars to create: a research and development laboratory, a pharmaceutical developer, or a defense contractor tasked with creating next-generation technology. Loss of that IP would be catastrophic to the business, its shareholders, or even national security. However, deploying an effective preventive solution such as aggressively quarantining antivirus or allow-by-exception web proxying across the entire user base would create an excessive maintenance burden.
Between fielding users’ “unblock” requests, maintaining current and accurate domain whitelists, and keeping the supporting technology fully operational, the team running such a solution could quickly grow to a dozen employees and a sizable technology budget, depending on the organization’s size. And that assumes users won’t get frustrated and find ways around technology they perceive as hindering their ability to work.
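To make the operational burden concrete, here is a minimal sketch of the allow-by-exception (“default deny”) logic such a proxy enforces. The allowlist entries and request domains are hypothetical examples, not any particular product’s configuration:

```python
# Minimal sketch of allow-by-exception ("default deny") web filtering.
# The allowlist contents and request domains are hypothetical examples.

ALLOWLIST = {
    "intranet.example.com",    # internal portal
    "updates.vendor.example",  # patch distribution
}

def is_allowed(domain: str) -> bool:
    """Deny by default; permit only exact matches on the allowlist."""
    return domain.lower().strip(".") in ALLOWLIST

requests = ["intranet.example.com", "mail.google.com"]
decisions = {d: is_allowed(d) for d in requests}

# Every legitimate destination not yet on the list generates an
# "unblock" request that an operations team must review and add by
# hand -- the recurring maintenance cost described above.
```

The point of the sketch is the default-deny posture: anything not explicitly enumerated is blocked, which is exactly what makes the approach both effective and expensive to operate at scale.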
The end result? A big budget, users who find ways to “get the job done” (that is, compromise organizational integrity), and attackers who soon gain access to the environment.
In this realistic scenario, it may make sense to deploy such aggressive “preventive” solutions only in the areas of the business that are most critical. The R&D section, clinical trial unit, or defense technology division can be locked down with strong technology that affects only a small subset of employees. Policy dictates that systems in the more controlled part of the environment have different allowable uses, avoiding the “Why can’t I get to my Gmail?” variety of problems. All in all, the aggressive “prevention” technology is limited to a small slice of the overall user base and focused on the environment’s crown jewels, minimizing cost, complexity, and user frustration.

Endpoint data has long been out of reach because of scalability limitations. It simply wasn’t feasible to collect and examine tens of millions of data points: module loads, network socket activity, registry and filesystem modifications, and other activities that occur constantly on every endpoint. However, the advent of proper endpoint technology from Red Canary partner Carbon Black makes proactive, long-term visibility at scale a reality. Red Canary makes that endpoint data even more valuable by continuously monitoring for conditions of exploitation, enriching events with the most relevant threat intelligence sources, and validating each event with human review to eliminate false positives, all within hours of occurrence instead of days or months. When an event requires a full incident response, the client organization has full access to the Carbon Black data collection; the immediate availability of this rich data set drives down both the timeline and the cost of IR.
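The continuous-monitoring idea can be illustrated with a toy detection pass over a stream of endpoint events. The event fields and the detection criteria below are illustrative assumptions for the sketch, not Carbon Black’s or Red Canary’s actual schema or logic:

```python
# Hedged sketch: filtering a stream of endpoint events against crude,
# illustrative detection criteria. Field names and rules are invented
# for the example, not a real product's schema.

events = [
    {"type": "netconn", "process": "winword.exe", "dest": "203.0.113.9"},
    {"type": "modload", "process": "svchost.exe", "module": "kernel32.dll"},
    {"type": "regmod",  "process": "powershell.exe",
     "key": r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"},
]

def suspicious(event: dict) -> bool:
    """Flag events matching simple, illustrative detection criteria."""
    if event["type"] == "netconn" and event["process"] == "winword.exe":
        return True  # Office apps rarely initiate their own connections
    if event["type"] == "regmod" and event["key"].endswith(r"\Run"):
        return True  # possible persistence via a Run key
    return False

flagged = [e for e in events if suspicious(e)]

# In a real pipeline, flagged events would be enriched with threat
# intelligence and confirmed by a human analyst before any response.
```

The design point is that collection is passive and broad, while judgment (enrichment and human review) is applied only to the small fraction of events that match detection criteria.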
To return to the idea of “prevention” as a fallacy for information security: I contend that our reliance on such dreams has made no significant impact on the number or severity of data breaches. On the contrary, it has arguably given us a false sense of security that allowed more severe breaches while our collective heads were in the sand. Preventive technology should not be abandoned entirely, but localized to protect the most important information while impacting the fewest people possible. Then deploy a passive data collection regimen broadly across the environment. This enables proactive, fast, and decisive detection while driving down the cost of incident response by minimizing the time those critical activities require.