
From corn fields to Galois fields to the field of threat hunting: meet Jeff Felling

Jeff Felling is a puzzle-solver, a threat hunter, and Red Canary’s new director of intelligence.

Brian Donohue and Jeff Felling

Jeff Felling is an experienced threat hunter who’s also dabbled in math (if only academically) and malware and forensic analysis. He spent nearly a decade working on a wide variety of projects at the Department of Defense (DoD). Just prior to joining Red Canary, he spent three years helping to create a threat hunting program called ORION at Anthem, Inc.

Jeff leads a team that includes some of our most prolific blog writers, webinar panelists, and conference speakers, so it’s reasonable to assume that the intel team will have a major impact on our editorial and content direction moving forward.

As such, we wanted to introduce Jeff to the community, so we sent him some questions about his past work experience, personal interests, and future goals. What follows are his lightly redacted answers to those questions.

How did you get into security?

In contrast to Todd Gaiser, I don’t have a story that begins with a bulletin board and a bike and deliberately and directly leads to living the dream. My journey has been much more serendipitous, which, I suppose, is fitting for a self-branded threat hunter. However, sometimes it’s better to be lucky than good.

I grew up on a farm and loved puzzles, games, sports, and the outdoors (in no particular order). At a young age, I realized I was good at math, and, even though it wasn’t always my favorite subject in school, I learned when I applied to college that scholarships were based on your major. Naturally, I landed in the math department. After trying physics with the physicists, programming with the computer scientists, and even courses on dinosaurs and crop and weed identification, I eventually found myself decidedly undecided but with enough credits to graduate with a math degree.

At a job fair on campus, I was intrigued by a government organization that hired math majors to solve puzzles. I liked doing the cryptoquip in the newspaper, so why not?

Several eventful years later, I found myself looking for a new job that could move my family closer to our roots. But what kind of job was I qualified for? Data scientist? Security analyst? Digital forensics? I hadn’t exactly done any of these. Luckily, I happened across a position in the relatively new field of threat hunting, which seemed to combine aspects of each. After three years at Anthem, working with an amazing team and building the ORION threat hunting program, I was presented with an opportunity to join Red Canary.

I was already following this blog and Atomic Red Team, but the more I learned about the company culture, mission, and philosophy, the more I realized how special the opportunity was. The chance to join such an impressive team and contribute on a scale that protects a broad and growing slice of society sealed the deal.

What’s your favorite threat group/malware and why?

I have some favorites from my DoD days, but all I can say about them is [REDACTED]. More recently, I’ve been intrigued by the Poweliks/Kovter malware families. Although the Department of Justice last year indicted several individuals that it accuses of operating Kovter, the malware has had a long run and remains active today. Despite being labeled “simply a footnote” as far back as 2014, Kovter has continued to proliferate and evade traditional signature-based antivirus using the same fileless techniques over a period of years. It has been interesting to observe the subtle changes in behavior and tactics over time, from novel persistence mechanisms to unique domain registration quirks like omitting the state from registration addresses.
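To give a flavor of what hunting for that kind of fileless persistence can look like, here is a minimal, hypothetical sketch that enumerates Windows Run keys and flags entries resembling the mshta/JavaScript launchers Kovter has been known to use. The key list and indicator strings are illustrative assumptions, not production detection logic.

```python
# Hypothetical sketch: flag Run-key entries that resemble Kovter-style fileless
# launchers (e.g., mshta.exe executing an inline javascript: payload).
# Key paths and indicator strings are illustrative assumptions, not a complete
# or authoritative detection.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]
SUSPICIOUS = ("mshta", "javascript:", "wscript.shell", "activexobject")

def suspicious_run_entries():
    hits = []
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            index += 1
            if any(marker in str(value).lower() for marker in SUSPICIOUS):
                hits.append((path, name, value))
    return hits

if __name__ == "__main__":
    for path, name, value in suspicious_run_entries():
        print(f"[!] {path}\\{name}: {value}")
```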

What are your thoughts on attribution?

There are different degrees of attribution, and I think it is a topic where analysts need to tread lightly. The role of an intel team is to gather evidence and stick to the facts. While it may sometimes be tempting to make leaps of judgment and point a finger at a particular individual or organization, this can lead to speculation and wrongful accusation that may be damning in the court of public opinion. Things are not always as they appear, especially when you are working with limited data and facing deliberately deceptive adversaries. As analysts, we gather evidence, and, as scientists, we can and should classify that evidence in meaningful ways that associate similar tactics, techniques, and even behaviors. This classification helps inform response and provides clues to the motivation, sophistication, and inclination of our adversaries. However, our assessments should remain objective and generally stop short of assigning blame. Ultimate attribution should be left for judges and juries to decide.

How do you make threat intelligence actionable without simply creating indicator feeds?

Too often “threat intelligence” gets conflated with atomic indicators of compromise (IOCs). While sharing malicious domains and file hashes uncovered in an attack can and does help identify historical compromises, the future value of those IOCs—once shared—diminishes faster than the value of a new car driven off the lot. It’s all too easy for an adversary to flip a bit and change a hash or register a new domain to reroute a connection.
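To see why hash-based IOCs age so poorly, consider this small, purely illustrative Python sketch: flipping a single bit in a file yields a completely different SHA-256 hash, even though the file’s behavior is effectively unchanged.

```python
# Illustrative only: a single flipped bit produces an entirely different SHA-256
# hash, which is why file-hash IOCs are so easy for adversaries to evade.
import hashlib

original = b"MZ\x90\x00 ...pretend this is a malware sample..."
modified = bytearray(original)
modified[-1] ^= 0x01  # flip one bit in the last byte

print("original:", hashlib.sha256(original).hexdigest())
print("modified:", hashlib.sha256(bytes(modified)).hexdigest())
```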

The real value of threat intelligence is not in the simple atomic IOCs, but in the tactics and the behaviors behind them. The most interesting details in a blog post or threat report are rarely in an IOC appendix at the end but are usually buried in the text (or even a screenshot) in the middle, where the author mentions how the adversary introduced the malware, gathered the data, or established a connection. This idea of looking for behaviors instead of IOCs is not new, and the entire endpoint detection and response (EDR) industry has grown up around it. But we still have a long way to go in how we share the intelligence we gather. Establishing a common language for sharing unique behaviors is a step in the right direction, and this is one of the reasons that MITRE ATT&CK™ is so valuable to the community.

Working closely with MITRE to incorporate this common language into resources like Atomic Red Team and Red Canary detectors is one of the ways that Red Canary is leading in this space.
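As a concrete (and purely hypothetical) illustration of that common language, behavior-based detection logic can be labeled with ATT&CK technique IDs so analysts and tools describe the same behaviors the same way. The rules, field names, and sample telemetry below are invented for illustration; they are not Red Canary detectors.

```python
# Hypothetical sketch: behavior-based rules labeled with MITRE ATT&CK technique IDs.
# Field names and example telemetry are invented for illustration.
RULES = [
    {
        "name": "Regsvr32 loading a remote scriptlet",
        "attack_id": "T1218.010",  # Signed Binary Proxy Execution: Regsvr32
        "match": lambda e: e["process"] == "regsvr32.exe"
        and "scrobj" in e["command_line"].lower(),
    },
    {
        "name": "MSHTA executing inline JavaScript",
        "attack_id": "T1218.005",  # Signed Binary Proxy Execution: Mshta
        "match": lambda e: e["process"] == "mshta.exe"
        and "javascript:" in e["command_line"].lower(),
    },
]

def evaluate(event):
    """Return the (rule name, ATT&CK ID) pairs that match a telemetry event."""
    return [(r["name"], r["attack_id"]) for r in RULES if r["match"](event)]

event = {"process": "mshta.exe", "command_line": 'mshta.exe javascript:eval("...")'}
print(evaluate(event))  # [('MSHTA executing inline JavaScript', 'T1218.005')]
```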

Do you have a favorite threat intel report you’ve worked on or a good security story you can share?

[REDACTED].

What interested you about Red Canary?

I first came across Red Canary via Twitter feeds and research around MITRE ATT&CK. I was working on mapping detection logic to ATT&CK in my role as a threat hunter for a large company and kept seeing references to something called Atomic Red Team. Digging into it, I quickly realized both that there were a lot of great detection ideas catalogued in the repo and that this whole idea of regularly testing detection logic was a sorely needed component in our security operations center (SOC). Incorporating our own atomic tests designed around our internal use cases quickly revealed some coverage gaps that we thought we’d already addressed.

As time went by, I started following Red Canary more closely, especially the blog posts, which are full of EDR gold—the kind of intelligence details that I could easily hunt for and turn into new detection logic.

What is your vision for the threat intelligence team at Red Canary?

At Red Canary we have an intelligence team, not a threat intelligence team. I draw the distinction because threats are just one component of the equation. Understanding adversarial goals, tactics, techniques, and procedures (TTPs)—as well as adversary behavior and human tendencies—is critical to building an intelligence program. That understanding drives detection logic, provides context and consistency to analysis, and informs response. Curating intelligence dossiers on relevant threats and making that intelligence actionable for analysts and incident responders is one of the primary missions of the Red Canary intel team.

That said, there are other aspects of intelligence that are just as important. SANS has a slogan that sums this up: “Know normal; find evil.” Studying threat intelligence is a good way to find evil and identify documented abnormal behavior. However, a core focus of the Red Canary intel team is to proactively hunt for new threats. To do this, you really need to know what is normal, and, to know that, you need intelligence on yourself. What is my attack surface? What legitimate software and behavior is typical in my environment? What data or access do I have that may appeal to an adversary? These are questions we seek to answer to help inform our hunting hypotheses and drive detection. From an intel perspective, if you’re only thinking of adversaries’ known attacks, then you’re missing a piece of the puzzle.
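One simple way to put “know normal; find evil” into practice is stack counting, sometimes called least-frequency-of-occurrence analysis: baseline what runs across the environment and review the rarest items. The sketch below is a minimal, hypothetical example; the telemetry field names and rarity threshold are assumptions for illustration.

```python
# Hypothetical sketch of least-frequency-of-occurrence ("stack counting") hunting:
# count parent/child process pairs across the environment and surface the rarest
# ones for review. Field names and the rarity threshold are illustrative assumptions.
from collections import Counter

def rare_parent_child_pairs(events, threshold=3):
    counts = Counter((e["parent_process"], e["process"]) for e in events)
    rare = [(pair, n) for pair, n in counts.items() if n <= threshold]
    return sorted(rare, key=lambda item: item[1])  # rarest first

events = [
    {"parent_process": "explorer.exe", "process": "chrome.exe"},
    {"parent_process": "explorer.exe", "process": "chrome.exe"},
    {"parent_process": "winword.exe", "process": "powershell.exe"},  # unusual: worth a look
]

for (parent, child), n in rare_parent_child_pairs(events):
    print(f"{parent} -> {child}: seen {n} time(s)")
```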

 

Jeff Felling is a puzzle solver by choice and a threat hunter by trade. He spent nearly a decade analyzing malware, forensic artifacts, and other anomalies for the DoD. Prior to joining Red Canary in 2019, Jeff returned home to Indiana in 2016, where he helped create ORION, Anthem, Inc.’s first organized threat hunting program. Jeff holds degrees in mathematics from Johns Hopkins University (MS) and Purdue University (BS), and is certified in security, incident handling, and forensic analysis through SANS.
