First Things First: Why Do Security Geeks and Hackers Love PowerShell So Much?
For years, the information security industry has been saying that we don’t share enough. Now we’re seeing a trend where the community is starting to open up: more researchers are sharing detailed information and making their tools readily available through open communities, GitHub, and social media. But there’s a good side and a bad side to this.
On the good side, sharing source code and innovations helps researchers, red teams, and security teams defend against the newest techniques. It’s exciting to see where this is going, both from a detection standpoint and for red teams, and there are some really great tools available.
On the bad side, the same tools and information are readily available to everyone. Security geeks love them, but so do the bad guys. The latest post-exploit kits provide actors with everything they need to move freely around an organization. Many of these attacks are hard to identify because they rely primarily on behaviors that evade most security tools. Prudent security teams should be investing in capabilities that can detect and stop the behaviors typically seen during post exploitation.
What Security Teams Want to Know About Post Exploitation: Webinar Questions & Answers
We received so many great questions in our webinar on post exploitation, we ran out of time before we could answer them all. I took some time to circle back and answer some of the most interesting questions we received.
Q: Even though post-exploit kits use legitimate tools, what about just blocking the post-exploit kit itself?
The challenge with blocking post-exploit kits is distinguishing their activity from legitimate use in an automated, programmatic fashion. These kits are mostly sets of tools and scripts built to run via a built-in shell or interpreter. You could block access to the known sources, but since most of these kits are open source, it is easy to move the source code or hosting location. You could also try to prevent the behaviors themselves, but you’d need to implement application whitelisting or a similar solution. Developers have thought of this, and many of the kits include tools and methods for bypassing known whitelisting products.
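To make the behavior-detection idea concrete, here is a minimal sketch of flagging suspicious PowerShell command lines by pattern matching. The indicator patterns, weights, and function names are illustrative assumptions for this sketch, not a vetted detection ruleset, and real tooling would draw on far richer process-level telemetry.

```python
import re

# Illustrative (not exhaustive) indicators of suspicious PowerShell use:
# encoded commands, hidden windows, download cradles. These patterns are
# examples chosen for this sketch, not a production ruleset.
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\b",         # Base64-encoded command payloads
    r"-nop(rofile)?\b",              # skip profile to dodge local controls
    r"-w(indowstyle)?\s+hidden",     # hide the console window
    r"downloadstring|downloadfile",  # in-memory download cradles
    r"frombase64string",             # decode embedded payloads
    r"invoke-expression",            # execute dynamically built strings
]

def flag_command_line(cmdline: str) -> list[str]:
    """Return the indicator patterns matched by a process command line."""
    lowered = cmdline.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

# Example: a typical download-cradle invocation trips several indicators.
hits = flag_command_line(
    "powershell.exe -NoP -W Hidden -Enc SQBFAFgAIAAoAE4AZQB3AC0A"
)
```

Note that an attacker can trivially vary these strings, which is exactly why the answer above stresses behaviors over signatures; this sketch only shows where command-line visibility fits in.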
Q: On the prevention vs detection slide, you stated: “Threats will still get through—detection is your backstop.” Can you elaborate on this? What threats are you seeing get past advanced/next-gen prevention tools?
Automated prevention tools are not 100% effective; no tool catches everything, despite what the marketing materials tell you. As threats and attacks change, you need visibility to verify that your prevention capabilities did their job, as well as to catch anything that does get by. The best detection tools should give you visibility into activities that you may not be able to prevent or block due to the need for additional context, such as user, time, host, or location.
Fileless malware and attacks propagated solely via built-in tools are the two most common attacks getting past prevention tools. We also see many poor IT practices that lead to compromises: incomplete coverage and loose policies that allow escalation and lateral movement.
Q: How do the detection, prevention, and threat hunting mechanisms ensure that legitimate users are not denied access (i.e., when a legitimate user decides to use a different network or ISP)?
This strongly depends on the context and controls in place. Detection and threat hunting can operate independently of prevention solutions. Identifying a threat or behavior should be based on several independent signals. Depending on your environment, network or physical location is only one indication that something suspicious is happening. If you are looking at process-level data, the actual behavior and execution should be what you alert and detect on.
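The idea of weighing several independent signals, with behavior counting for more than location, can be sketched as a toy scoring model. The signal names, weights, and threshold here are invented for illustration; a real system would tune these against its own environment.

```python
# Toy scoring sketch: weights and the threshold are invented for
# illustration. The point is that an unusual network location alone stays
# below the alert threshold, while process behavior carries most weight.
SIGNAL_WEIGHTS = {
    "new_network_location":   1,  # weak, context-only signal
    "off_hours_activity":     1,  # weak on its own
    "unusual_parent_process": 3,  # e.g., Office app spawning a shell
    "encoded_powershell":     4,  # strong behavioral indicator
}
ALERT_THRESHOLD = 4

def should_alert(observed: set[str]) -> bool:
    """Alert only when the combined signal score clears the threshold."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in observed)
    return score >= ALERT_THRESHOLD

# A legitimate user on a new ISP does not trip an alert by itself...
assert not should_alert({"new_network_location"})
# ...but combined with real behavioral evidence, it does.
assert should_alert({"new_network_location", "encoded_powershell"})
```

This is why a legitimate user switching networks is not, on its own, grounds for denying access under a behavior-centric model.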
Q: Considering the intrusions investigated by Red Canary, do you see many customizations of attacker tools, or just the use of original tools such as Empire, PowerSploit, Mimikatz, pwdump, and others?
It’s a mix. One thing to note is that Red Canary doesn’t focus heavily on attribution. Rather than focusing on specific attacker or tool attribution, we’re more focused on identifying behaviors associated with those tools. Bad PowerShell behavior is bad PowerShell behavior. We’ll lay that out and provide our customers with visibility on how the attacker got in the door, what they did once they got in, and quickly provide the information necessary for taking the first steps.
Q: What are some of the trends you’re seeing in Mac and post-exploit activity?
For years, people said there’s no Mac malware and Macs can’t be attacked. But as Macs become more popular and the footprint grows, so does the interest of attackers looking for ways to exploit them. I’ve personally seen a large uptick in Mac adware and unwanted toolbars that use misleading prompts to push their way into the Mac environment. A lot of the same problems Windows admins have dealt with are cropping up more and more on Macs; they’re not as immune as they used to be. We’ll continue to see this on the Linux side as well. The more management tools we push out there, the more attacks we’ll see.
Q: Is there any open source analog to Empire for blue teams in terms of detecting behavior? I’ve heard PoshSec (Ben0xA) is an option—any other ideas?
On the open source detection side for Windows, there are some opportunities with Sysmon and Windows logging; you should turn on PowerShell logging. I’ve also seen some recent Microsoft TechNet articles on PowerShell controls. As the market matures, the number of tools expands. Most of my day is spent leveraging Carbon Black data, so I’m somewhat biased toward that, but you can also look toward open-source tools and capabilities. LimaCharlie and osquery are two open-source, endpoint-focused tools I’m keeping my eye on.
The website ThreatHunting.net was also started to walk you through Microsoft tools you can use to find malicious behavior on an endpoint. So, as a reaction to an incident, your analysts can run a set of activities to gather that data and then escalate it to a Level 2 analyst or engineer. There are a lot of companies out there starting to publish this type of information, and it’s really good for the industry.
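The PowerShell logging mentioned above is typically enabled through Group Policy or its underlying policy registry keys. As a sketch, the snippet below generates a `.reg` file covering the keys behind the “Turn on Module Logging” and “Turn on PowerShell Script Block Logging” settings; the file name is arbitrary, and you should verify the keys against current Microsoft documentation before applying them in production.

```python
# Sketch: emit a .reg file enabling PowerShell module and script block
# logging via the policy registry keys. Verify against current Microsoft
# documentation before deploying; the output path is an arbitrary example.
BASE = r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell"

REG_LINES = [
    "Windows Registry Editor Version 5.00",
    "",
    f"[{BASE}\\ScriptBlockLogging]",
    '"EnableScriptBlockLogging"=dword:00000001',
    "",
    f"[{BASE}\\ModuleLogging]",
    '"EnableModuleLogging"=dword:00000001',
    "",
    f"[{BASE}\\ModuleLogging\\ModuleNames]",
    '"*"="*"',  # log all modules
    "",
]

def write_reg(path: str = "enable-ps-logging.reg") -> str:
    """Write the .reg content (CRLF line endings) and return it."""
    content = "\r\n".join(REG_LINES)
    with open(path, "w", newline="") as fh:
        fh.write(content)
    return content
```

Once these policies are in place, script block contents land in the PowerShell Operational event log, which is exactly the data Sysmon-style hunting workflows can then consume.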
Q: How does it look for Red Canary to integrate with a customer’s internal team and what type of dwell time reduction do your customers typically see?
In terms of integration, we stand up your Carbon Black server, monitor and manage it, and help with deployment. Our Technical Account Managers are assigned to customers on Day 1; they’re security people first and foremost, and they’re the first point of contact for escalation to help with everything from understanding alerts to integrating Red Canary data into help desk systems, Slack, or SIEM. On the backend, all that data is feeding into the Red Canary SOC and our analysts are going through the endpoint data to identify the threats.
As far as dwell time reduction, our alerts include an execution timeline. As soon as we’ve identified a threat, we quickly provide the information so you can see how it got in the door and what happened once it got in. For example, we recently helped a regional hospital that was hit by ransomware using PsExec-based network spreaders to push the threat through the environment. We notified them right away, walked them through cleaning it up, and temporarily blocked PsExec to help them triage and execute their response. They cleaned it up over the course of a day. Compare that to a similar attack on another hospital, which had to shut its doors, move patients out, and was down for around a week. Our continuous monitoring gives us the ability to help customers identify malicious activity as soon as something happens.