Artificial intelligence (AI)

An important question looms in the infosec conversation about AI: Will generative AI tools better benefit defenders or adversaries?

In 2023 we all witnessed a new era in the use of generative AI (GenAI) to help solve or automate many of the rote tasks we take on as defenders. Technologies like ChatGPT, Gemini, and GitHub Copilot showed how GenAI, backed by powerful foundation models like GPT-4, can reduce the cognitive load and stress that come with the day-to-day operational cadence of a security team. As 2024 progresses, we will continue to see more tailored cybersecurity solutions helping defenders make more accurate and informed decisions.

Amid this hype and promise, we caution users of GenAI technologies to be thoughtful in their use and not to trust its output implicitly without the proper data and context to augment the foundation models they rely on. Remember that GenAI lacks common sense and decision-making capabilities; it's up to the defender to make the final calls. Even so, Red Canary is bullish on GenAI, as it stands to be an accelerant and deflationary technology in cybersecurity.

Adversaries also have their eyes on GenAI to automate their own tasking: managing infrastructure, expediting phishing lure generation, impersonating employees via deepfakes, and leveraging open source information and tools to create highly tailored operational plans for threats like ransomware. As with any new technology, individuals with malicious intent will eventually adopt it. It's important to differentiate between clickbait headlines and truly groundbreaking changes in adversary tactics and techniques enabled by GenAI. Notably, we don't have smoking-gun evidence of adversaries using AI tools in their attack campaigns at this time, but only a fool would bet against it.

In the following sections, we’ll explore how adversaries may be using AI to make their lives easier and then describe the many benefits of AI that we’re already seeing across Red Canary and the broader infosec industry.

Is AI better for good guys or bad guys?

Spoiler alert: We think it’s better for the good guys by a long shot, and we’ll explain why in the coming paragraphs. However, let’s start with the bad.

AI for adversaries

We've written about the implications of AI for adversaries, particularly how it will affect the malware ecosystem, on the Red Canary blog. So we'll start there.

AI and malware

Some of the potential benefits for malware developers include leveraging AI to:

  • make subtle code changes to evade signature-based detection in ways that are fundamentally similar to functionality already provided by crypters
  • modify the functionality of a piece of software by automating the development process for adding a new feature, although we should note that AI-produced code is often unreliable without a great deal of tweaking
  • translate malware code from one language to another
  • assist with defense evasion techniques by having AI act as a defender in your TTP development pipelines

Among these, the third point is probably the most useful for adversaries, since it may allow them to readily expand malware to make it cross-platform or to adjust their tools on the fly, depending on the capabilities of their target system. The likelihood of AI magically creating net new malware capabilities seems low, largely because malware depends on already well-understood operating system functionality that AI does nothing to change. Ultimately, AI seems poised to expedite capabilities that already exist rather than conjure new ones.

AI for phishing lures

Perhaps the most obvious adversary application for AI is phishing lure generation. There's been plenty of hand-wringing about the lousy quality of AI writing, but those critiques are based on comparisons to relatively high-quality human writing. In the phishing space, the comparison is different: it's between non-native speakers working with limited foreign language skills (or online translation tools) and the writing quality of an LLM chatbot. The latter is objectively better and less obvious than the former. However, poorly written phishing lures have worked for decades and continue to work today. Further, sophisticated adversaries have always been able to generate quality phishing messages when they need to. It's hard to imagine AI tools fundamentally revolutionizing phishing, which has long been one of the primary means for adversaries to gain initial access.

AI for data analysis and discovery

While we've been critical of AI's ability to write code and prose, no such criticism applies to its ability to analyze large amounts of data. This is precisely where AI shines, and where it probably provides the greatest boon for adversaries. It's hard to enumerate all the many applications for AI, but it's easy to imagine adversaries exfiltrating large troves of data and using AI to analyze it in search of sensitive information, credentials, or other data that is valuable in its own right or useful for moving deeper into a victim environment.

AI for APTs

The specter of sophisticated, state-sponsored adversaries with deep pockets looms large over this industry, and it's easy to imagine a thousand thought leaders furiously blogging about AI's accelerant effect on so-called advanced persistent threats (APT). The reality, though, is that state-level adversaries have likely had their hands on better AI tools than their counterparts in private industry for the better part of a decade. The same has always been true of exploit capabilities. Just look at the havoc wrought by ETERNALBLUE, an exploit that was likely many years old when it slipped into the public space, spread all over the world in a matter of hours, and caused billions of dollars worth of damage.

It’s probably true that sophisticated state-backed adversaries are leveraging GenAI in sophisticated and hard-to-predict ways, but these are fringe threats that most organizations will never encounter. Among those organizations that do need to worry about truly state-of-the-art threats, it’s prohibitively difficult to develop reliable security controls that can counteract exploit technologies developed by military or intelligence agencies with multi-billion dollar budgets. That was true before AI. It’s true now. And it will remain true as long as computers exist.

AI for defenders

Enough about the bad guys; let's talk about the many ways that AI is already making us more secure and making security professionals better at their jobs.

GenAI puts a general problem-solving tool at defenders' fingertips. You no longer need to sit down and develop specialized analysis scripts during incident investigations or security operations projects. You can describe your tasks and objectives in plain language, unlocking lower-level work that typically required more senior team members with deeper coding skills or job experience. The applications of GenAI for defenders span tasks like project planning, team tasking and task management, data analysis and baselining, malware analysis, and architecture planning. There has never been a more promising general purpose tool to help defenders level up and keep pace with the evolving threat and technology landscape.

AI for data analysis

In the cybersecurity world, we often face an overwhelming sea of data. As many experts have pointed out, security is deeply entwined with data management. Security teams frequently find themselves buried under more information than they can realistically process. The challenge isn't always about finding the data; it's about focusing on what matters. The crucial insights are there, hidden in plain sight amidst the noise of countless alerts and logs.

Imagine having a super-smart assistant who can not only read through mountains of data but also highlight what's important. LLMs are game changers in how we handle this data deluge. For example, you could ask an LLM to sift through logs ranging from network sensor output to cloud activity logs and pinpoint potential security threats. It's like having a detective who can wade through the clutter to find the clues that matter.

But it can't be that simple, right? It can be. Feed a model like GPT-4 raw data, such as Microsoft 365 Unified Audit Log (UAL) records, and with instructions as simple as a conversation, the AI analyzes the data looking for patterns and anomalies. It can summarize its findings, suggest next steps, and even create visual representations like tables and graphs to make the trends clear.
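
To make that concrete, here's a minimal sketch of the pattern, assuming you have an OpenAI API key in your environment and a local ual_export.json file. The file name, batch size, and prompt wording are our own illustrative choices, not a prescribed workflow:

```python
# Minimal sketch: ask GPT-4 to triage a batch of Unified Audit Log records.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY environment
# variable; the file name, batch size, and prompt are illustrative.
import json

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("ual_export.json") as f:
    records = json.load(f)

# Keep the batch small enough to fit within the model's context window.
batch = json.dumps(records[:200], indent=2)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security analyst. Review these Microsoft 365 "
                "Unified Audit Log records, identify patterns or anomalies "
                "worth investigating, summarize your findings, and suggest "
                "next steps."
            ),
        },
        {"role": "user", "content": batch},
    ],
)

print(response.choices[0].message.content)
```

From there the conversation can continue: ask follow-up questions about specific users or time windows, just as you would with a colleague.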

Ready to take the LLM's output a step further? Ask the AI to generate Python code to automate your analysis, making your operations more efficient and cost effective.
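
The code that comes back tends to be ordinary pandas. Here's a hypothetical example of what such generated analysis might look like, assuming a UAL export flattened to CSV with CreationDate, UserId, and Operation columns (the threshold is an arbitrary illustration):

```python
# Hypothetical example of LLM-generated analysis code, not a finished tool.
# Assumes a UAL export flattened to CSV with CreationDate, UserId, and
# Operation columns; the 3x threshold is an arbitrary illustration.
import pandas as pd

df = pd.read_csv("ual_export.csv", parse_dates=["CreationDate"])

# Count operations per user per hour.
hourly = (
    df.set_index("CreationDate")
    .groupby("UserId")
    .resample("1h")
    .size()
    .rename("ops")
    .reset_index()
)

# Flag hours where a user's activity exceeds three times their own average.
baseline = hourly.groupby("UserId")["ops"].transform("mean")
spikes = hourly[hourly["ops"] > 3 * baseline]

print(spikes.sort_values("ops", ascending=False).head(20))
```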

This process, all powered by natural language, is transforming data analysis in cybersecurity. As we move through 2024, expect to see more and more tools that automate these tasks, making defensive security smarter and more proactive than ever.

AI for summarization and drafting

We may be veering into the parts of infosec that most depend on clear and consistent communication (e.g., security analysis, intelligence, threat detection, incident response, etc.), but AI tools are very proficient at taking disparate information from numerous sources and synthesizing it into a human-readable, readily consumable narrative.

Say you're a SOC analyst, for example, and you're reviewing a long list of related but distinct alerts. You know they tell a compelling and important story, but unpacking the origin and meaning of each alert and then chaining them together into a meaningful account of what happened is tedious and time-consuming. That's time you could otherwise spend investigating surrounding activity to make sure you've got a handle on the entire scope of the event or incident, as the case may be.

A well-trained AI can immediately connect all of these dots for you. It may not be perfect, but it will be a plenty-good-enough starting point for getting a clear picture of what happened and what to do next, potentially saving crucial minutes or hours of triage (or at least saving you from the tyranny of unnecessary work).
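
As a hypothetical sketch of what that looks like, the snippet below packs a handful of related alerts into a single prompt and asks the model for a chronological narrative. The alert fields, contents, and prompt wording are invented for illustration:

```python
# Hypothetical sketch: chain related alerts into one draft narrative.
# The alert structure, contents, and prompt wording are invented examples.
import json

alerts = [
    {"time": "2024-03-14T09:02:11Z", "host": "WKSTN-042",
     "detail": "Encoded PowerShell command spawned by winword.exe"},
    {"time": "2024-03-14T09:02:45Z", "host": "WKSTN-042",
     "detail": "Outbound HTTP request to a newly registered domain"},
    {"time": "2024-03-14T09:05:03Z", "host": "WKSTN-042",
     "detail": "Scheduled task created for persistence"},
]

prompt = (
    "You are a SOC analyst. Chain the following alerts into a single "
    "chronological narrative of what likely happened, note the probable "
    "ATT&CK techniques, and recommend next investigative steps:\n\n"
    + json.dumps(alerts, indent=2)
)
# Send `prompt` to your LLM of choice, as in the earlier UAL example.
```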

Finally, when it comes time to explain what happened, whether it’s for a briefing, documentation, or something else, your LLM chatbot friend can quickly write up a serviceable first draft that you’ll only have to revise.

AI for threat analysis

In the data analysis section above, we emphasized how cybersecurity defenses heavily rely on analyzing vast amounts of data. At Red Canary, our daily processing of billions of security signals underscores the challenge of identifying threats amidst the mountains of data we collect. Even with our investments in automation, we’re now turning to AI to further enhance our products and security outcomes for our customers.

Historically, cybersecurity required specialists to navigate numerous tools in order to sniff out threats and take timely action. GenAI is set to change this, offering broad support across various tasks and making high-level expertise more accessible to all defenders.

GenAI’s introduction to our threat analysis processes marks a shift towards automating routine yet critical tasks. This includes streamlining investigations, assisting in reverse engineering, crafting detection rules, refining threat hunting queries, and even advising on security policy improvements. We see GenAI boosting defender efficiency and effectiveness, taking on roles within a SOC like an investigation ally or a strategic consultant on policy matters.
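
As one illustration of the detection-engineering use case, here's a sketch of a prompt you might hand to an LLM. The technique, telemetry source, and wording are our own assumptions, and the output should always get human review:

```python
# Hypothetical sketch: ask an LLM to draft a hunting query for human review.
# The technique, telemetry source, and wording are illustrative assumptions;
# treat the output as a starting point, never a finished detection.
prompt = """You are a detection engineer. Draft a threat hunting query for
endpoint process telemetry that surfaces suspected credential dumping via
LSASS memory access (ATT&CK T1003.001). Explain each condition so an analyst
can validate it, and list likely sources of false positives, such as
legitimate security tools."""
# Send `prompt` to your LLM of choice and review the result before deploying.
```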

This automation-focused application of GenAI will continue to mature in 2024, manifesting in the development of AI agents specifically designed to aid defenders and capable of performing tasks with high levels of accuracy. These agent-based architectures will usher in a new era of cybersecurity defense in which current practitioners become more efficient and roles across the industry become more accessible.

AI for training and learning

The impact of AI in cybersecurity is going to extend beyond traditional attacker vs. defender mindsets. With the advent of LLMs, knowledge of cybersecurity has become widely accessible, effectively putting a personal tutor at everyone’s fingertips. This marks a pivotal moment in education where learning about information security topics or preparing for your dream job is limited only by your curiosity and imagination.

We’re particularly excited about how GenAI is making complex topics more digestible, catering to individual learning styles and preferences. You’ll have a personal guide through the intricacies of cybersecurity tailored to the way you learn best.

On this note, we invite you to take an Atomic Red Team test and experiment with using ChatGPT or Gemini as your personal tutor. Instruct the AI to be your cybersecurity tutor, let it know the ways you like to consume information, and paste in your favorite YAML file. We're confident you'll be impressed by what you can achieve with AI-assisted learning.
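
If you want a starting point, here's a hypothetical tutor prompt; adapt the learning-style line to your own preferences:

```python
# Hypothetical tutor prompt; adjust it to how you like to learn.
TUTOR_PROMPT = """You are my cybersecurity tutor. I learn best from short
explanations followed by concrete examples. Walk me through what the
following Atomic Red Team test does, which ATT&CK technique it exercises,
and what telemetry a defender would expect to see when it runs:

<paste the contents of your favorite Atomic Red Team YAML file here>
"""
```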

The verdict

As we've said here and elsewhere, we believe that AI is more of a net positive for defenders than it is for adversaries. The use cases we described make part of that point. However, another important factor to consider is resources. As a collective, and often within reasonably well-funded security teams, we have more money and more expertise than most adversaries. Whether you work for a security vendor or on an organization's internal security team, you have money to spend on infrastructure and expertise. The security industry is awash with formally educated data scientists and other specialists who can leverage expensive and powerful tools to optimize AI in ways that simply are not available to the overwhelming majority of adversaries.

 
 