First, consider the value proposition of network and endpoint platforms to a typical organization. This notional organization has committed to a security budget, has some information of value to protect (employees’ and/or customers’ personal data, user login databases, proprietary information, credit cards, etc), but not much of an existing security architecture.
Deploying network platforms can provide some immediate value—but there are a few options in this category.
The Legacy of Intrusion Detection Systems
Many organizations start by deploying an Intrusion Detection System (IDS) or a complex “threat intelligence” platform. Most of these rely on proprietary signatures and data feeds that don’t yield actionable alerts. “You have a problem with <some IP address>” may be all you get from that fancy (and expensive) IDS—resulting in your human team springing into action, chasing phantom evil each time a new alert comes in.
This kind of network visibility is not usually the best standalone investment because it often increases the hours your team will spend running down the sources of low-fidelity (often low-quality) alerts. Network-based detection, while invaluable, generates questions that almost always lead to endpoints, because the endpoint is the most conclusive place to look. That’s not to say an IDS is without merit, but unless you have a streamlined investigative workflow—generally the product of a robust information security program—you won’t be able to leverage the platform efficiently to improve your security.
A More Practical Approach to Collection
Broader methods of long-term network visibility, though, can provide an immediate and inexpensive boost to your investigative workflow. Perhaps the two most useful collections you could deploy within your environment are NetFlow collection and passive DNS logging. (The term “NetFlow” is generically used for any flow-based collection technology, not just what is technically defined as “NetFlow”.)
NetFlow is a statistical aggregation of network communications—essentially a database summarizing the metadata of each network communication being logged. No content is retained—only header fields such as IP addresses, protocol, and ports, along with packet and byte counts, start and end times, and so on. Because no packet content is stored, these records are comparatively small and can generally be retained for long periods of time.
NetFlow can be used to quickly find any evidence of communication with systems that are confirmed or believed to be malicious, find large transfers that could suggest data exfiltration events, establish rolling baselines of activity for the environment that can be used to hunt for anomalies, and more. It is fast to query and can be a great tool to quickly scope a compromise or to identify additional sources of evidence during an investigation—such as web proxy server or firewall logs. Most importantly for a developing security organization, NetFlow is quite inexpensive to start collecting, either with open-source or commercial platforms.
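The two queries described above can be sketched in a few lines. This is a minimal illustration using hypothetical in-memory flow records; real deployments would pull these fields from a flow collector, and the field names, the watchlist, and the exfiltration threshold here are all assumptions, not a standard schema.

```python
# Hypothetical flow records; in practice these would come from a NetFlow/IPFIX
# collector export. Field names ("src", "dst", "dport", "bytes") are assumed.
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.7",  "dport": 443, "bytes": 1_200},
    {"src": "10.0.0.9", "dst": "198.51.100.4", "dport": 443, "bytes": 4_800_000_000},
    {"src": "10.0.0.5", "dst": "198.51.100.4", "dport": 22,  "bytes": 35_000},
]

SUSPECT_IPS = {"198.51.100.4"}    # e.g. from a threat-intel report (hypothetical)
EXFIL_THRESHOLD = 1_000_000_000   # flag ~1 GB+ transfers for review (assumed cutoff)

# Scope a compromise: which internal hosts talked to a suspect address?
suspect_contacts = [f for f in flows if f["dst"] in SUSPECT_IPS]

# Hunt for possible exfiltration: unusually large transfers.
large_transfers = [f for f in flows if f["bytes"] >= EXFIL_THRESHOLD]
```

Because flow records carry only metadata, both checks run over millions of records quickly; the hits then point the analyst at the specific endpoints and time windows worth deeper investigation.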
Passive DNS also provides a huge “bang for the buck” impact to any security team. A passive DNS collection is a database or even text log file containing the queries and responses of all DNS activity for a given organization. It is often collected at the internal DNS resolver, and typically includes the IP address of the querying system, the DNS query itself (for example, “redcanary.com”) and the response (“IP address 220.127.116.11”). These data points are recorded in a structured format, and can be easily queried via a SIEM, log aggregator, or even shell utilities.
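As a sketch of how easily such a structured log can be queried, the snippet below answers a common IR question—"which clients looked up this domain?"—against a tiny, hypothetical tab-separated passive DNS log. The column layout (timestamp, client IP, query, answer) is an assumption for illustration; your collector's format will differ.

```python
import csv
import io

# Hypothetical passive DNS log; tab-separated with an assumed column layout:
# timestamp, querying client, DNS query, answer.
log = """\
2023-05-01T12:00:01\t10.0.0.5\tredcanary.com\t203.0.113.25
2023-05-01T12:00:02\t10.0.0.9\texample.org\t203.0.113.80
2023-05-01T12:00:05\t10.0.0.7\tredcanary.com\t203.0.113.25
"""

reader = csv.reader(io.StringIO(log), delimiter="\t")

# Which internal clients resolved the domain of interest?
clients = sorted({row[1] for row in reader if row[2] == "redcanary.com"})
```

The same one-liner style of filter maps directly onto a SIEM query or a `grep`/`awk` pipeline against the raw log file.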
What makes continuous, on-site DNS collection so critical is that DNS records are highly dynamic. Having point-in-time visibility into this past activity from your client systems is invaluable in adding context to other observations of their actions. The benefits to an IR team are numerous.
Since nearly all other network protocols use DNS before engaging in their application-layer communications, having passive DNS logs available provides a single source for characterizing all protocols—a “one-stop shop.”
Passive DNS logs are especially helpful in enriching NetFlow, since the lack of content may leave an analyst with unconfirmed hypotheses about those communication summaries.
DNS logs can also be used to feed baselining activities—for example, in building a rolling list of the 5,000 most common second-level domains queried in your environment. These baselines can be leveraged when seeking anomalous activity that strays from the normal pattern of behavior.
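A baseline like the one described above can be built with nothing more than a counter over the query log. The sketch below uses a naive second-level-domain extraction over a handful of hypothetical queries; a production version should use the Public Suffix List, since "last two labels" mishandles domains like example.co.uk.

```python
from collections import Counter

def second_level_domain(fqdn: str) -> str:
    # Naive SLD extraction: keep the last two labels. Real deployments should
    # consult the Public Suffix List to handle multi-part TLDs correctly.
    return ".".join(fqdn.rstrip(".").split(".")[-2:])

# Hypothetical sample of queries pulled from a passive DNS log.
queries = ["www.redcanary.com", "cdn.redcanary.com", "mail.example.org", "example.org"]

baseline = Counter(second_level_domain(q) for q in queries)

# The rolling "top N" list; domains outside it are candidates for anomaly review.
top = [domain for domain, _count in baseline.most_common(5000)]
```

Recomputing this over a sliding window (say, the trailing 30 days) and alerting on queries that fall outside the top list is a simple, cheap anomaly-hunting loop.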
Collecting passive DNS data is also inexpensive or free, with a number of open-source or commercial options suitable for the task.
One option to easily collect both NetFlow-like and passive DNS data is the Zeek Network Security Monitoring (NSM) platform. Zeek (formerly known as Bro) is a free solution with an enterprise-scale commercial offering from Corelight. It observes data from a tap, port mirror, or the virtual equivalent in cloud environments, then creates logs that detail what was observed. Among these logs are the “conn.log” file, which is an analog to NetFlow, and the “dns.log” file, which contains an equivalent to passive DNS data. These two files alone are an invaluable addition to the incident responder’s evidence set. However, Zeek doesn’t stop there—more than a dozen default log files are created as needed, including protocol-level logs (SMTP, HTTP, RDP, etc.), asset-level logs (inventories of observed devices, software, active services, etc.), and more. The Zeek log files can be written in tab-separated-value or JSON format, and can be extended with custom Zeek scripting to address an organization’s unique requirements.
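When Zeek is configured for JSON output, each dns.log entry is one JSON object per line, which makes ad hoc analysis trivial. The sketch below parses two hypothetical (hand-written, abbreviated) dns.log entries; real entries carry more fields, but `ts`, `id.orig_h`, `query`, and `answers` shown here are genuine Zeek dns.log field names.

```python
import json

# Two abbreviated, hypothetical Zeek dns.log lines in JSON format.
# Field names follow Zeek's dns.log schema; values are invented for illustration.
raw_lines = [
    '{"ts": 1683000000.1, "id.orig_h": "10.0.0.5", "query": "redcanary.com", "answers": ["203.0.113.25"]}',
    '{"ts": 1683000002.7, "id.orig_h": "10.0.0.9", "query": "example.org", "answers": ["203.0.113.80"]}',
]

records = [json.loads(line) for line in raw_lines]

# Map each queried domain to the client that asked for it.
by_query = {r["query"]: r["id.orig_h"] for r in records}
```

The same pattern applies to conn.log and every other Zeek log, so one small parsing loop covers the whole evidence set.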
On the other hand, endpoint visibility can be equally critical, yet suffers from scale and efficacy risks. Simply installing an endpoint visibility tool (more often referred to as Endpoint Detection and Response) does not equate to an operational capability.
Purely on the topic of scalability, it’s common to have an environment consisting of thousands or even hundreds of thousands of endpoints. Pooling and analyzing data from that many collectors is no small challenge. Fortunately, the advent of (buzzword alert) big data technologies has put such tasks within reach of even a new or small security operation.
However, the question of efficacy is a bigger issue. We still see many organizations that equate “endpoint” solutions with “antivirus,” which is an antiquated and unhelpful viewpoint. While antivirus is not going away anytime soon, its demonstrated value has been diminishing for years. Today, bargain basement malware authors can use automated means to cheaply “re-spin” their evil wares dozens of times per day, making antivirus evasion trivial. Against the resources of a nation state-grade adversary, evasion is a baseline assumption. However, we’ve finally reached a point in the security industry where meaningful endpoint solutions are within reach for the average security team.
Which visibility product?
At Red Canary, we have found Carbon Black Response to be unparalleled in the detail it collects. There are many other platforms (endpoint and network) in the space and we continually evaluate them to determine their value proposition to our threat detection workflow. This is an active market segment, and it’s exciting to see the solutions evolve so quickly.
Why we focus on the endpoint
Incident response actions become fast and continuous with a good endpoint collector, allowing IR to exist as an ongoing process rather than discrete incidents with a finite time frame. With granular endpoint data such as file creation/modification events, registry edits, network socket activity, etc., an IR team can quickly go back in time to an event or time period of interest to establish a high-resolution picture of what occurred and whether it was suspicious, malicious, or benign.
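"Going back in time" with endpoint telemetry often reduces to filtering a stream of timestamped events down to a window of interest. This is a minimal sketch over hypothetical, vendor-neutral event records; real EDR exports differ in schema, and the field names and event types here are assumptions.

```python
from datetime import datetime, timezone

# Hypothetical endpoint telemetry events; schemas vary widely by EDR vendor.
events = [
    {"ts": "2023-05-01T12:00:00Z", "type": "filemod", "path": "C:\\Users\\a\\evil.dll"},
    {"ts": "2023-05-01T12:05:00Z", "type": "netconn", "dst": "198.51.100.4:443"},
    {"ts": "2023-05-02T09:00:00Z", "type": "regmod",
     "key": "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run"},
]

def parse_ts(ts: str) -> datetime:
    # Normalize the trailing "Z" so datetime.fromisoformat accepts it.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# The analyst's window of interest around an alert.
start = datetime(2023, 5, 1, 11, 0, tzinfo=timezone.utc)
end = datetime(2023, 5, 1, 13, 0, tzinfo=timezone.utc)

window = [e for e in events if start <= parse_ts(e["ts"]) <= end]
```

Sorting the filtered events chronologically and reading them side by side with the corresponding network records is what turns raw telemetry into that high-resolution picture.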
The ability for a team to quickly reach such a decision on an event of interest is becoming a critical skill for a robust information security team. Faster, more conclusive detection means truly malicious actors have far less dwell time on their targets before being discovered, confirmed, and remediated. Shorter dwell time means less opportunity to cause damage before remediation—the hallmark of a successful security team.
Why not both?
Of course these two vantage points are not mutually exclusive. To the contrary, each is better when coupled with the other—a synergy all security teams should strive to create. Network-based scoping or anomaly identification can be quickly confirmed and clarified with ready access to endpoint collections. An endpoint observation that suggests a data exfiltration event can be confirmed or refuted by seeking the corresponding NetFlow records. Such solutions need not be “all-or-nothing” deployments. While comprehensive deployment across the entire environment would be ideal, both network and endpoint solutions can also be tactically deployed in network segments that contain the most sensitive data or are most likely to be targeted—even during incident response activity to support an active investigation.
The goal of those building or improving a security team should be to seek the best value for the investment in each area and build a process around them that supports the organization’s objectives—not to indiscriminately prioritize investment on one type of security visibility. Each organization is different and there are no cookie cutter answers. You know your organization best and what gaps you need to fill in your security posture.