So you’ve found a runtime threat detection or endpoint detection and response (EDR) tool for Linux! Now what? Testing and validating your solution or comparing solutions to each other can be difficult, made even harder by the confounding number of Linux distributions and versions floating around.
Before you start testing, you first need to understand and document your goals and success metrics. Are you after performance and stability, security outcomes, or both? Or are features like response most important to you? Once you have your criteria documented, you can set up your lab for testing. Your testing lab should contain a representative sample of your infrastructure, including the distributions, versions, and applications you regularly run.
With goals and a lab ready, you can start testing.
Distribution and version support
Linux comes in a huge variety of distributions and versions. Each ships its own features and quirks, which has traditionally made it hard to find one Linux runtime threat detection tool that supports all of the infrastructure you have running. A core initial test for any potential runtime threat detection tool is to make sure it supports everything you currently run or plan to run, so that you always have coverage.
An example is the classic mix of new Linux distributions plus one or two old versions supporting a critical application. A mix of 95 percent Ubuntu Long Term Support (LTS) and 5 percent CentOS 6.10 (past end of life) is not at all uncommon. Most runtime threat detection tools will support the majority of that infrastructure, the Ubuntu LTS servers, and offer no support for the rest. Any gaps in coverage put your entire infrastructure at risk, so not having coverage on those CentOS machines can be just as risky as not using any runtime threat detection tool at all.
Another example is containers and Kubernetes. Hot new technologies do not enable you to skip runtime threat detection for your containerized deployments. Security vendors' container support tends to lag behind business adoption, leaving significant security and visibility gaps. Runtime threat detection tools should provide visibility and coverage in containerized environments and within orchestration platforms like Kubernetes, so that you avoid gaps that can lead to compromise.
Hot new technologies do not enable you to skip runtime threat detection for your containerized deployments.
The test
This test should be relatively straightforward. Before deploying or testing a runtime threat detection tool in Linux, conduct an internal audit of what you're running, which versions, and any other key details like kernel versions or security frameworks like SELinux.
Next, compare those to vendor compatibility lists. If one or more of your versions are not on the list, you may have a problem fully securing your infrastructure. If the runtime threat detection tool ultimately cannot or will not support something you are currently running, it may not be the right fit. Be assertive about this test—runtime threat detection tools for Linux in particular should adapt to your infrastructure, and you shouldn’t have to change how you build and deploy software to adapt to a new security tool.
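To make the audit concrete, here is a minimal single-host inventory sketch in Python. The fields shown (distribution, version, kernel, SELinux state) are the ones vendor compatibility matrices most often key on; run it across your fleet with whatever remote execution tooling you already use.

```python
# Minimal single-host inventory sketch: collects the details you will compare
# against a vendor's compatibility list. All paths used here are standard on
# modern Linux; adjust for older distributions that lack /etc/os-release.
import json
import platform
from pathlib import Path

def os_release():
    """Parse /etc/os-release (present on systemd-era distributions)."""
    info = {}
    path = Path("/etc/os-release")
    if path.exists():
        for line in path.read_text().splitlines():
            if "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

def main():
    release = os_release()
    selinux = Path("/sys/fs/selinux/enforce")
    print(json.dumps({
        "distribution": release.get("NAME", "unknown"),
        "version": release.get("VERSION_ID", "unknown"),
        "kernel": platform.release(),  # e.g. "5.15.0-91-generic"
        # "1" = enforcing, "0" = permissive, "absent" = SELinux not loaded
        "selinux": selinux.read_text().strip() if selinux.exists() else "absent",
    }, indent=2))

if __name__ == "__main__":
    main()
```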
A note on sensors: many Linux runtime threat detection tools rely on kernel facilities, such as the audit subsystem, that can only be used or subscribed to by a single consumer at a time. That means your test environment may need a dedicated server for each sensor you plan to test, or that you will need to fully uninstall one sensor before testing another.
Performance and stability
Security software takes resources. By their very nature, the runtime threat detection tools you deploy in production will need CPU and memory to do their job. Anyone promising “0 performance hit,” or even a blanket sub-5 or sub-10 percent hit to resources, should be questioned. There is no magic benchmark value for any security program’s CPU or memory utilization; instead, you have to ask yourself what the right tradeoff is between security outcomes and performance at your company. How much CPU and memory a runtime threat detection tool needs depends entirely on how it’s built, what job it’s doing, what system it runs on, and, maybe most importantly, your internal CPU and memory limits.
In general, security sensors that run as kernel modules will be less configurable and less stable than those that run in user mode: a failure in kernel code can interrupt service or take down the machine, and kernel modules increase both attack surface and configuration complexity. Kernel modules do give runtime threat detection tools deeper access to system data that can aid detection, but often at a cost to your performance and stability.
Runtime threat detection tools running in user mode are typically more configurable, easier to deploy, and less likely to cause service disruption. Again, the right approach for your organization depends on your risk profile, deployments, and internal standards. Come up with a standard that matches both your budget and your expected resource allocation for runtime threat detection tools. Opt for tools that let you configure resource utilization and that can explain how changes in resource utilization affect security outcomes.
There is no magic CPU benchmark for a security program—you have to ask yourself what the right tradeoff is between security outcomes and performance at your company.
The test
Testing for performance and stability usually takes the form of a lab or test deployment. Set up servers that look and feel like your production environment, with the same kinds of resource limitations and load balancing. Drop the runtime threat detection tool on the machine and run it “vanilla” (i.e., with no additional configuration). Next, simulate load on the machine up to your average or high-water mark, or let the runtime threat detection tool soak on the machine for several hours or days.
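If your lab tooling doesn't already capture per-process resource usage, a simple sampler is easy to build. The sketch below uses the third-party psutil library and assumes the sensor runs as a single named process; "edr-sensor" is a placeholder, not any real product's process name.

```python
# Soak-test sampler: periodically records the sensor's CPU and memory usage
# to a CSV for later review. Requires psutil (pip install psutil).
import csv
import time

import psutil

SENSOR_NAME = "edr-sensor"  # placeholder; substitute your tool's process name
SAMPLE_SECONDS = 30         # pause between samples
DURATION_HOURS = 24         # soak length

def find_sensor():
    """Return the first process whose name matches, or None."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == SENSOR_NAME:
            return proc
    return None

def main():
    end = time.time() + DURATION_HOURS * 3600
    with open("sensor_soak.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "rss_mb"])
        while time.time() < end:
            proc = find_sensor()
            if proc is not None:
                try:
                    cpu = proc.cpu_percent(interval=1)  # averaged over 1 second
                    rss = proc.memory_info().rss / (1024 * 1024)
                    writer.writerow([round(time.time()), round(cpu, 1), round(rss, 1)])
                    f.flush()
                except psutil.NoSuchProcess:
                    pass  # sensor restarted; catch it on the next sample
            time.sleep(SAMPLE_SECONDS)

if __name__ == "__main__":
    main()
```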
Once the test is complete, you should have a good sense of the actual real-world CPU and memory impact you can expect from the runtime threat detection tool. Tools should also report this data to you in their platform or in a report so that you can easily see the resources used over time.
Every extra bit of CPU and memory you use costs money or uses up resources you may need. Understanding how much your runtime threat detection tool will require—and how much it will cost you—before you deploy will save you shock and frustration later.
Security alerts
The meat of Linux runtime threat detection tools is their ability to actually detect a wide range of threats and malicious behaviors at runtime. Linux threats differ from those that target traditional endpoints like laptops and desktops, and consistently identifying threats like rootkits, memory manipulation, and other patterns of adversarial behavior is critically important if you want confidence in your ability to detect threats at runtime. Threat detection tools in this space are designed to help you combat everything from known knowns to unknown unknowns. They should provide comprehensive coverage not only of what you know may be impacting your systems but also of what you don’t know may be lurking, and they should constantly update to reflect the latest intelligence.
Focus on the most pervasive and highest frequency threats during an evaluation. While the desire to test for every possible threat makes sense, you actually want to make sure that the most common tactics and techniques have comprehensive coverage before jumping to relative obscurities like zero days or sudo vulnerabilities. Tests should seek to simulate both signatures and behaviors rather than merely looking for malicious files or processes.
With that in mind, there are a few free, open source programs for testing Linux runtime threat detection. These are general recommendations: the nuances of what specifically to test, and how, depend on your goals, and you can expect vastly different results between tests.
Security tests should seek to simulate both signatures and behaviors rather than merely looking for malicious files or processes.
The test
Running tests designed to mimic adversarial attacks comes with inherent risks. Know what tests you’re going to run before you run them, make sure you have permission to run tests, and warn internal security teams before running any command. Avoid running tests on production infrastructure. If you don’t yet feel comfortable with day-to-day administration of your Linux infrastructure, do not start testing!
First up is Atomic Red Team, a framework specifically designed to help you understand how your existing security tools and infrastructure hardening hold up against MITRE ATT&CK techniques and tactics. Atomic Red Team maintains a number of Linux-specific tests that can be run on your test machine to see if your logging, your security tools, or even your SOC sees potentially malicious behavior. One caveat of using Atomic Red Team alone for Linux testing is that many tests look like ordinary administrative behavior and may not generate a detection or alert your SOC.
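If you keep a local clone of the Atomic Red Team repository, you can enumerate the Linux-applicable tests before you start. This sketch assumes the repository's documented layout (atomics/T*/T*.yaml, one YAML file per ATT&CK technique) and requires PyYAML; the clone path is a placeholder.

```python
# List Atomic Red Team tests that support Linux, from a local clone of
# https://github.com/redcanaryco/atomic-red-team.
# Requires PyYAML (pip install pyyaml).
from pathlib import Path

import yaml

ATOMICS = Path("atomic-red-team/atomics")  # placeholder path to your clone

for technique_file in sorted(ATOMICS.glob("T*/T*.yaml")):
    doc = yaml.safe_load(technique_file.read_text())
    linux_tests = [
        test["name"]
        for test in doc.get("atomic_tests", [])
        if "linux" in test.get("supported_platforms", [])
    ]
    if linux_tests:
        print(f"{doc['attack_technique']}: {doc['display_name']}")
        for name in linux_tests:
            print(f"  - {name}")
```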
To actually simulate patterns of potentially malicious behavior in Linux, the open source Chain Reactor tool comes into play. Chain Reactor lets you chain together discrete Atomic Red Team or MITRE ATT&CK tests, creating a pattern of behavior that looks like an adversary exploiting your machine. You can sequence together actions like starting a process, making a suspicious network connection, and then attempting to dump /etc/shadow, for example, to mimic more natural malicious behavior. Chain Reactor helps upgrade your threat detection testing to real-world scenarios, and can help you validate that any new runtime threat detection tools catch patterns of behavior, not just signatures.
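For illustration only, here is what that kind of chain boils down to, written as plain Python rather than a Chain Reactor reaction file: spawn a process, attempt an outbound connection, then try to read /etc/shadow. The destination address is from a reserved documentation range, and the shadow read will fail without privileges; the attempts themselves are the telemetry your tool should flag. Run this only on a disposable lab machine, with permission.

```python
# A bare-bones behavior chain: three individually mundane actions that,
# in sequence, resemble post-exploitation activity. Lab use only.
import socket
import subprocess

# Step 1: spawn a child process, the way droppers and stagers often do
subprocess.run(["whoami"], check=False)

# Step 2: attempt an outbound connection to an unusual port
# (192.0.2.0/24 is reserved for documentation, so this fails fast)
try:
    with socket.create_connection(("192.0.2.10", 4444), timeout=3):
        pass
except OSError:
    pass

# Step 3: attempt to read /etc/shadow; the failed attempt is the signal
subprocess.run(["cat", "/etc/shadow"], check=False)
```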
More advanced testing can also be done depending on your comfort level. Do not run these unless you know what you are doing!
- Well-known programs like Meterpreter can help you simulate a number of adversarial behaviors, including persistent backdoors and rootkits
- Open source rootkits like Reptile can also be used to see how effectively and how quickly rootkit behavior is caught
- Other programs like Exploit Primitive Playground can help you simulate process memory manipulation, so that you can test coverage against remote code execution
Again, these are advanced programs, and very much look and feel like real malicious behavior. Do not run these unless you have experience and comfort with red teaming or penetration testing.
Finally, let’s talk about rule tuning. Certain runtime threat detection tools prioritize the ability to add new custom rules or tune existing rules to catch specific kinds of behavior. If the solution you’re evaluating offers rule tuning, first ask yourself why you need it. Tuning rules effectively, and even writing custom queries, requires strong in-house experience with Linux threats. If rule tuning is a vital requirement for your team, test these capabilities as well, paying particular attention to how complex rules are to write, how much time you have to spend writing them, and the performance impact of a custom query. Usually, these queries run directly on the machine and, as such, use resources to deliver results. A bad query can absolutely slow down or crash a server if the right protections are not in place.
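As a crude first pass at measuring query cost, you can time a query from the command line while watching the machine with the soak sampler above. The sketch below assumes your tool exposes an osquery-style interface via the osqueryi binary; if your product uses a different query engine, substitute its CLI and query syntax.

```python
# Time a single custom query, assuming an osquery-style interface is
# available on the host (osqueryi must be installed and on PATH).
import subprocess
import time

QUERY = "SELECT pid, name, path FROM processes;"  # example query

start = time.monotonic()
result = subprocess.run(
    ["osqueryi", "--json", QUERY],
    capture_output=True, text=True, check=True,
)
elapsed = time.monotonic() - start
print(f"query returned {len(result.stdout)} bytes in {elapsed:.2f}s")
```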
Automation and response
Linux runtime threat detection tools should include automations that route alerts into your tools of choice. You should expect fast, detailed notifications to the programs you use to manage your security alerts and response.
The test
First, set up your automations to notify one or more of the notification platforms you use to manage incidents today. This can include communication software like Slack or Teams, incident management software such as PagerDuty, or traditional phone and email. The specific programs to test depend on your existing incident response process.
Once configured, kick off a test detection in the runtime threat detection tool you are evaluating. The runtime threat detection tool should rapidly provide a notification to your selected notification and monitoring tools with enough information to understand the threat. If your runtime threat detection tool offers the ability to cascade notifications to multiple destinations or require approval before communications are sent out, you should also test that these functions work as expected.
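One low-effort way to measure time-to-notify is a throwaway webhook receiver, sketched below with Python's standard library. Point the tool's webhook integration at http://<lab-host>:8080/ (a placeholder address and port), note when you trigger the test detection, and compare against the arrival timestamp the receiver prints.

```python
# Throwaway webhook receiver: prints the arrival time and the first part of
# each alert payload so you can measure notification latency end to end.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"[{time.strftime('%H:%M:%S')}] alert received: {body[:200]!r}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```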
Some runtime threat detection tools offer automated response and remediation, with capabilities such as isolating a workload or moving it to another security group. If present, you should also validate that these automated functions work as expected. Since every server and environment is different, these features may lead to unexpected behavior, bring down critical applications, or fail to work at all with your configuration. Generally, when managing cloud or production environments, you should be cautious about what response and remediation features do and how they impact your stability and uptime. Consider manual response to a threat if you have infrastructure that cannot be automatically isolated or reimaged without significant impact.
Be cautious about what response and remediation features do and how they impact your stability and uptime.
Human time
The last test you should consider is the most often overlooked: the human effort that goes into managing your runtime threat detection tool. Every program you add requires that someone pay attention to it. From testing and procurement through day-to-day usage, runtime threat detection tools add another job for your team to do.
The last test
As part of your testing framework, consider how much time someone will have to invest to manage the solution you are evaluating. Ask questions such as “does this fit into our existing security workflow” or “does this reduce or add complexity to someone’s job” to figure out how to align the new solution to your existing team. Some runtime threat detection tools may require new hires, but can offer a significant improvement to your security posture. Others may add complexity with limited return. Know before you buy—discovering that you need a new hire or are spending significant time managing a solution is not a good security outcome.