
How cloud architects and detection engineers can work together

Listen to Brian Davis and Thomas Gardner discuss how security practitioners and software engineers can collaborate in the cloud instead of butting heads.

Thomas Gardner Brian Davis

Two Canaries—Brian Davis, Principal Software Engineer, and Thomas Gardner, Senior Detection Engineer—recently joined Dave Bittner on CyberWire-X, N2K CyberWire’s podcast, to discuss the relationship between cloud architects and detection engineers within an organization. After defining each of their roles, Brian and Thomas underscore the many ways cloud architects and security practitioners can work together to share mutually beneficial insights.

Listen below or read the transcript, which has been edited for clarity.


Dave Bittner: Hello everyone, and welcome to CyberWire-X, a series of specials where we highlight important topics affecting security professionals around the world. I’m Dave Bittner. In today’s program, we delve into the dynamic and increasingly critical fields of cloud architecture and cybersecurity detection. Our focus today bridges the nuanced roles of cloud architects and detection engineers, two vital cogs in the machinery of modern digital infrastructure and security.

We’re joined by Brian Davis, a Principal Software Engineer with a wealth of experience in cloud architecture, and Thomas Gardner, a senior detection engineer known for his expertise in identifying and mitigating cyber threats. Brian and Thomas are both from Red Canary, our show’s sponsor. Together, they’ll shed light on the symbiotic relationship between their roles. We will dive into how detection engineers distinguish normal administrative activity from potential intrusions and what behaviors and patterns they vigilantly monitor in customer environments. Bringing Brian and Thomas together offers a unique perspective on how these roles interact, challenge, and ultimately support each other’s objectives in the digital world.

So today we are talking about, kind of contrasting, this notion of cloud architects versus detection engineers, and we want to start off with some definitions here. Why don’t we go through these one by one. Can we start off with a cloud architect? And for folks who aren’t familiar with that, how do you describe it?

Brian Davis: Oh, that’s a fantastic question, and I always struggle to answer that actual question. So in my mind, a cloud architect is someone that knows how to use the tools of the cloud, whatever cloud platform is your favorite, to build the applications, to build the things that you want to build. What I do is I work a lot with the other engineers that we have on our team to help them build the system in such a way that it will scale well as we grow, in such a way that it’s resilient, and kind of knowing the landscape of what the different tools are that are in our toolbox. And so, my focus is looking across scalability, looking across resiliency, and making sure that what we’re building can withstand all of that. The cloud part of that is just to use those cloud-based tools to enable those features.

Dave Bittner: So in your estimation, what’s the background that goes into somebody being a successful cloud architect?

Brian Davis: That’s another great question. I think at least for me, a lot of it is that I’ve built a lot of stuff over a lot of time. I’ve built them without using the cloud, so I know the ways to do it in an on-prem context, and I’ve also built a lot of these things within the cloud. I think a lot of it is battle scars and lessons learned from either doing it the wrong way or doing it a bad way to know that there are better ways to do it. And so, I think a lot of it has to do with learning, again, the tools that are available within the cloud platform. Understanding the tools quite a bit, but also a lot of experience in building previous systems and knowing ways to do it and ways not to do it.

Dave Bittner: Well, and with Thomas, in this corner, we have a detection engineer. Let’s do the same thing with that job title. How do you describe that to someone who might not be familiar with it?

Thomas Gardner: As a detection engineer, I’m really responsible for researching attacker behavior, breaking it down into manageable pieces, and then communicating it to people on my own team, people on another team, customers. At Red Canary, we’ve built our own detection engines, built a few detection engines, in fact. There are many ways to be a detection engineer. A lot of companies will use their own SIEMs or build on top of custom rules in their EDRs to do it.

I think the core of detection engineering is really understanding attacker behavior and breaking it down into manageable pieces that can then be, essentially, detected later on. There’s some overlap with threat hunting. It’s pretty common to take threat hunts as outputs and turn them into automated detection rules. There’s some overlap with incident response. Once you have an incident and you’ve understood what happened, how an attacker got in, what behavior they went into afterward, then you want to make sure that doesn’t happen again. And so, you might build automatic detection rules after that, and detection engineering is really focused on taking output from these other sort of disciplines in cybersecurity and trying to scale it and ensure that bad things don’t happen again, or you get ahead of adversaries before they get into your network.

“The core of detection engineering is really understanding attacker behavior and breaking it down into manageable pieces that can then be detected later on.”

Dave Bittner: The relationship between these two positions, you’ve got your cloud architect; you’ve got your detection engineer. Is this, by nature, an adversarial relationship?

Brian Davis: It’s funny you asked that. We were actually talking about that before we started talking with you. No. I don’t think it’s adversarial at all. I think what we can do together is understand how each of us does our job, and that’s really critical, right? Because Thomas and detection engineers are out there looking for threats in the cloud landscape, in the cyber landscape, and some of the actions that folks on the engineering teams, such as cloud architects and software engineers, are doing can look like threats. And so, what you need to do is have a regular conversation to understand, oh, this is normal behavior. This isn’t something that an adversary is necessarily doing. They might do something that looks like that, but that conversation enables us both to understand each other’s space a little bit more effectively. I don’t know, Thomas, if you feel the same way.

Thomas Gardner: Absolutely, I do. I think one of the differences between detection engineering and just rule creation is being able to put actions into a wider context. It’s really important as a detection engineer for me to understand the full attacker life cycle of how they break into things; how they persist in environments; how they escalate privileges and sort of what that chain of events looks like. It’s very rare to see cloud architects do exactly all of that in exactly that order. Not saying it doesn’t happen, but understanding how Brian does his job, why he does certain things—the example that I like to give is interactively logging into a Kubernetes pod and then running recon-looking commands. Turns out cloud architects love doing that.

Brian Davis: I’m not sure we love doing it. It’s sometimes a necessity.

Thomas Gardner: But there’s a good reason for why they would do that. Typically, troubleshooting during an incident or trying to set up some sort of finicky application or something. Having a good relationship with our cloud architects really helps us put that sort of stuff into context. I can go up to Brian and ask him: Hey, we saw you do this. Why did you do this? What were the things you were after? And then we can go back and compare it to known adversary behavior that looks similar and just try and identify the differences so that we can put our own detections into better context and really improve the product we give.

Dave Bittner: How much of this is just kind of keeping in regular touch with each other to give each other a heads up and say, hey, listen, we’re going to be doing such and such today. So if you see something, that’s probably what it is. Having those lines of communication open?

Thomas Gardner: I think the more you have those lines of communication, the less chance you’re going to have a false alarm in that respect. But with as many engineers as any organization has, it’s really easy to miss that communication and send someone off on a wild goose chase because you forgot to say, oh, hey, by the way, I’m going to go open up permissions on this bucket because I’m testing something out. It’s really easy to forget that, and anything having to do with a human notifying another human, it’s going to get missed. And so, I think where you can have that communication, it’s critical. But it’s not always there, unfortunately.

It can actually be nice that it’s not always there, too. Being able to sort of test some assumptions that we have about our own detections and doing so without knowing ahead of time what cloud engineers are up to and having to work our way back from our detection and put ourselves in our customers’ shoes to really have to analyze our own work output is a really helpful exercise for us to make sure that we are challenging our assumptions about what’s truly attacker behavior and what’s just sort of general cloud behavior. You know, there’s a reason you can open buckets up to the entire internet, like there’s legitimate reasons to do that. It’s not only a bad thing, and it’s not often a bad thing. And so, sometimes not having a heads up forces us to challenge our own assumptions, and that can be a really helpful exercise.

Dave Bittner: Some accidental red teaming?

Thomas Gardner: Yeah. Great way to put it.

Dave Bittner: Right. Opportunistic red teaming.

Dave Bittner: I’m curious. How do you strike that balance between needing to keep up with—what I think is fair to say—an ever-increasing cadence, right? I mean, nobody is going to claim that the attackers are slowing down, right? I think the opposite is true. But, Thomas, from your point of view, you don’t want to be the department that’s always crying wolf; you don’t want to be pestering the cloud engineers, as you say, with false alarms. How do you strike that balance between the two?

Thomas Gardner: Oh, that is a big question. That is a great question. We always strive for more specificity in areas like the cloud where we’re all learning new things about it. Even the cloud architects are learning new things about it. We tend to start pretty broad with some assumptions, and as we learn things, we constantly try and revisit those assumptions, like I was saying before. This is where putting that behavior into context really comes in handy because if we can say that a certain action happens, but you need these three other things around it for it to really be bad, and if we can translate that into our detector logic so that we quiet that idea down ahead of time without requiring a human to validate those things, we will tend to be faster. We’ll tend to be able to communicate specific threats better, and we’ll just generally be happier because we’re not constantly dealing with a bunch of manual labor trying to validate our own work.
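The idea Thomas describes, an action only fires an alert when corroborating signals surround it, can be sketched in a few lines. This is a hypothetical illustration, not Red Canary’s actual detector format; the event schema and signal names are invented for the example.

```python
# Minimal sketch of context-aware detection logic. The action names and
# event fields below are hypothetical, chosen only to illustrate the idea
# of requiring corroborating signals before alerting.

def is_suspicious_bucket_change(event, recent_events):
    """Flag a bucket policy change only when extra context makes it look bad."""
    if event["action"] != "PutBucketPolicy":
        return False
    # The action alone is routine; gather corroborating signals around it.
    corroborating = [
        any(e["action"] == "ConsoleLoginWithoutMFA" for e in recent_events),
        any(e["action"] == "CreateAccessKey" for e in recent_events),
        event.get("source_ip_is_new", False),
    ]
    # Require at least two contextual signals before a human ever sees it.
    return sum(corroborating) >= 2
```

Encoding the "three other things around it" as detector logic, rather than leaving them for an analyst to check, is what lets the team suppress benign cases ahead of time.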

Brian Davis: To expand on what Thomas said, the context is really key. We’ve spent a lot of our time at Red Canary working on EDR, which is endpoint focused. And in an endpoint, you’re working on a single computer somewhere. And granted, there is lateral movement between machines and things of that nature. But at the end of the day, you’re looking at processes that are executing on a single computer, and the context is what’s going on in that computer. There’s more to be gained there, but just looking at the activity on that computer can give you a lot of insight into what’s happening because there are certain patterns that adversaries will follow.

When you step back to the cloud, you’re almost never dealing with a single computer, and you’re probably not dealing with a single cloud service. And so, now you can’t go with one piece of information because that one piece of information might come from an engineer or cloud architect or someone else with privileged access doing something that they’re supposed to be doing. So you have to gain more context in order to figure out what is a false alarm and what is something to care about.

“When you step back to the cloud, you’re almost never dealing with a single computer, and you’re probably not dealing with a single cloud service.”

That’s one of the things that we’ve really worked hard at, trying to assemble more of that context for the detection engineering team so that they have all of the information to say: Oh, well, they did A and then B and then C. That’s not something that our engineering team usually does. That’s probably adversarial behavior. Context has been one of the biggest challenges that we’ve had—providing that insight so that we don’t cry wolf all the time, so that we know what’s really dangerous behavior versus normal behavior because they can look the same if you’re looking through a small aperture.
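The "A and then B and then C" pattern Brian describes is essentially an ordered-subsequence check over an identity’s event stream. A minimal sketch, with an invented chain of action names purely for illustration:

```python
# Hedged sketch: does an ordered chain of actions (A, then B, then C) appear,
# in order but not necessarily adjacent, in an identity's event stream?
# The chain below is illustrative only, not a real detection rule.

ADVERSARY_CHAIN = ["AssumeRole", "ListBuckets", "GetObject"]

def chain_appears_in_order(actions, chain=ADVERSARY_CHAIN):
    """Return True if every step of `chain` occurs in `actions`, in order."""
    it = iter(actions)
    # `step in it` consumes the iterator up to the match, so each subsequent
    # step must be found strictly after the previous one.
    return all(step in it for step in chain)
```

Any one of these actions alone could be an engineer doing their job; it’s the ordered chain across services that shifts the verdict toward adversarial.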

Dave Bittner: I want to wrap up with you guys with this question, and I’m curious, and an answer from each of you from your individual perspectives: What’s your recommendation to somebody who is starting down this journey? You know, who is going to be having this relationship between the cloud architect and the detection engineer within an organization? Let me start with you, Brian, from the cloud architect’s side. Any tips or words of wisdom for how to get the most out of this relationship?

Brian Davis: That’s a fantastic question. I think it starts with assuming good intent on all parties, and that’s a good thing to go for in anything, in any relationship that you have. But knowing that everyone has a job to do, and there’s also so much information and so much stuff to learn, that not everyone has a full understanding of all the activities that are going on. And so, if anything comes off as confrontational, and if anything comes off as accusatory or sounds that way, assume it’s not and have the conversation and establish that relationship. Because if you start with good intent and you assume good intent on the part of the other party, you can find out that they have a difficult challenge—a difficult job—to achieve as well, and you’ll start to build more bridges that way.

Dave Bittner: Thomas, how about your perspective?

Thomas Gardner: I think it’s very easy as a security practitioner to say no or invalidate the actions of other people a lot by pointing out something is not the most secure way of doing things or it’s not the recommended way of doing things. Trying to avoid that habit, trying to basically view your coworkers’ actions as valid, even if they maybe don’t make sense to you, understanding their intent, and treating them as a normal way of operating is the best place for a detection engineer to start.

There are so many times where we get confused looking at certain behavior thinking, why would you do that? This is what we know attackers do. This is how you misconfigure systems. And especially in the cloud, when cloud providers give all kinds of APIs and build them for legitimate reasons, I think it’s really important to view the use of any of these APIs or actions as legitimate and valid ways of operating. And so, as a detection engineer, you need to be able to separate those valid things that a cloud architect is going to do, like logging into a Kubernetes pod interactively, opening a bucket publicly, creating some sort of access key for a service account in the cloud. You need to view those as legitimate business operations and not just assume ill intent, essentially.

Dave Bittner: Yeah, it’s that, I mean, it’s that classic, you know, practically a stereotype, to not be the “Department of No.”

Thomas Gardner: Exactly.

Dave Bittner: And that wraps up our episode of CyberWire-X. Our thanks to Brian Davis and Thomas Gardner from our show sponsor, Red Canary, for joining us. And thanks to you for listening. I’m Dave Bittner. We’ll see you back here next time.

