
Building security from the ground up as a team of one

Red Canary Digital Forensic and Incident Response (DFIR) strategist Phil Hagen explains how you might develop security capabilities from the ground up as a team of one.

Phil Hagen

Anyone who’s worked in the security industry is familiar with the challenge of doing more with less. As a general rule, we seem to be overtasked and understaffed. How would someone at the toughest end of that spectrum, a team of just one, handle the complex security landscape faced by any organization with a technology footprint?

So what if you are the sole internal resource assigned to a security role and tasked with protecting an organization? For the purposes of this piece, I’ll make a few admittedly generous assumptions: the organization will provide the time needed to implement and maintain a security program, there is a budget (though not an infinite one), and our team-of-one has the organizational authority to implement the plans laid out, provided they are not to the detriment of the mission or business line.

What are your protection priorities?

As with any project, regardless of the available resources, identifying your priorities is key. I always ask organizations that are building a security program an obvious question: “What are you most eager or motivated to protect?” If they’re unable to answer quickly, or their response is too broad, we’ve found the first step they need to take. If you don’t know what you’re protecting, in terms of impact to the lines of business, or if your answer is “everything,” no amount of effort put into building a program will have a meaningful effect.

Whether it’s regulated data, customer records, or intellectual property, you absolutely must have a clear idea of what the rest of the program will be built around. That’s not to say other information isn’t important to protect, but a reasonable prioritization must be established first. Assuming the team-of-one has that information in hand, we can move on.

Where to start

The next step is to ensure you’re not reinventing the wheel. Given how much has been invested in the security industry over the past few decades, we can enjoy the byproduct: a wealth of great frameworks and lessons learned that are either free or inexpensive. My favorite resource for this is the Center for Internet Security’s 20 Critical Security Controls. This list, which is continually updated (and versioned!), is a free resource that provides everyone with vetted, common-sense methods to improve their security. It’s important to note that the controls are not a checklist! Rather, they are a framework that allows security professionals to gauge their current or future program components against a common set of guidelines, ensuring they focus on the highest-priority measures that will have the biggest positive impact on the organization’s posture.

The basic controls

I’d argue that the set of “basic controls” is pretty fundamental and should cover the actions that nearly any security program—including that of our notional team-of-one—needs to accomplish. There isn’t a whole lot of magic in this tier.

Know your inventory

Creating and maintaining an inventory of hardware and software is not glamorous, but if you don’t know what resources may hold or process the organization’s critical information, then it’ll be hard to have much of a positive impact. Whether this is a spreadsheet or the product of some kind of scanning or discovery toolkit, get a solid list of what’s in scope before going any further.
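
As a rough illustration, a short script can handle the first pass at discovery. The sketch below, written under the assumption that Python is available, sweeps a placeholder subnet with TCP connect probes and records responsive hosts to a CSV; the subnet and port list are examples, not recommendations.

# Minimal host-discovery sketch: TCP-connect sweep of a /24.
# The subnet and ports are placeholders -- adjust for your environment.
import csv
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1."          # assumption: an example RFC 1918 range
PORTS = (22, 80, 443, 445)     # a few common service ports

def probe(ip):
    """Return (ip, open_ports) for any host answering on a probe port."""
    open_ports = []
    for port in PORTS:
        try:
            with socket.create_connection((ip, port), timeout=0.5):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or no host
    return (ip, open_ports) if open_ports else None

with ThreadPoolExecutor(max_workers=64) as pool:
    results = pool.map(probe, (SUBNET + str(i) for i in range(1, 255)))

with open("inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ip", "open_ports"])
    for hit in filter(None, results):
        writer.writerow([hit[0], " ".join(map(str, hit[1]))])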

Limit admin access

Controlling administrator access is a key control, with lots of bang for the buck. While it’s often difficult to rein in previously granted administrative permissions, it’s absolutely worth accomplishing. If your users have administrative rights, no realistic amount of control can be imposed on their systems. It’s that straightforward. Nothing else we might implement will matter if a user can override it to play some new Facebook game or install the latest freebie downloaded from the evil corners of the Internet. The concept of least privilege is a core focus because it works.
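
A quick audit of who currently holds admin rights is a sensible first step before reining anything in. Here’s a minimal sketch for a Linux host, assuming standard privileged group names (which vary by distribution); a Windows fleet would need an equivalent check against the local Administrators group.

# Quick audit of who holds admin rights on a Linux host.
# The grp module is Unix-only; group names are assumptions --
# "sudo" on Debian/Ubuntu, "wheel" on RHEL/Fedora.
import grp

ADMIN_GROUPS = ("sudo", "wheel", "admin")

for name in ADMIN_GROUPS:
    try:
        members = grp.getgrnam(name).gr_mem
    except KeyError:
        continue  # group not present on this system
    print(f"{name}: {', '.join(members) or '(no direct members)'}")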

Secure configuration

When that’s out of the way, we can move on to enforcing secure configurations. Depending on the scope of the organization’s IT footprint, this can be a major task. It’s often best done by first getting the servers’ configurations in line, then moving to network infrastructure, then on to the workstations. Even ensuring all workstations are configured to automatically apply locally approved operating system updates (and implementing a plan to ensure servers get the same level of attention as mission requirements allow) is a huge step in the right direction.
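
Checking for drift against a baseline doesn’t require a commercial tool to start. As a minimal sketch, assuming an SSH server and illustrative (not authoritative) baseline values, something like this compares a host’s sshd_config to the settings you intend:

# Minimal configuration-drift check: compare sshd_config to a baseline.
# The baseline values below are illustrative, not a hardening guide.
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def parse_sshd_config(path="/etc/ssh/sshd_config"):
    """Return a dict of the effective key/value settings in the file."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments
            parts = line.split(None, 1)
            if len(parts) == 2:
                settings[parts[0]] = parts[1].strip()
    return settings

current = parse_sshd_config()
for key, expected in BASELINE.items():
    actual = current.get(key, "(unset)")
    status = "OK" if actual == expected else "DRIFT"
    print(f"{status:5} {key}: expected {expected!r}, found {actual!r}")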

Collect and store logs

Performing all of “maintaining, monitoring, and analyzing logs” is going a bit beyond what our team-of-one can likely handle, in my opinion. For this, though, I would insist that logs are being collected in a central location and retained for the maximum allowable duration. This will ensure that incident response, troubleshooting, and general research operations can be performed against a reliable data set. It’s not reasonable to expect a team-of-one (or even a larger group) to review each and every log entry on a periodic basis. If, at some point in the future, there is a budgetary and technical opportunity to apply some form of automated analysis here, such as a SIEM, then that’s a big win in this resource-starved scenario.
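
Getting logs to a central point can start very simply. As one hedged example, Python’s standard logging library can forward events to a syslog collector; the collector hostname below is a placeholder for whatever aggregation point you stand up.

# Minimal sketch: forward application events to a central syslog collector.
# "logs.example.internal" is a placeholder, not a real endpoint.
import logging
import logging.handlers

# Default SysLogHandler transport is UDP port 514.
handler = logging.handlers.SysLogHandler(address=("logs.example.internal", 514))
handler.setFormatter(logging.Formatter("team-of-one: %(levelname)s %(message)s"))

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("central log pipeline test message")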

A SIEM is certainly not a first-stage solution, but rather an example of what can be layered onto foundational capabilities such as centralized log collection. In any case, you’d want to be extremely careful not to be too aggressive with the signatures and other alert triggers you implement. Otherwise, you’ll be inundated with alerts that just end up getting ignored. Tying this back to the prioritization done at the outset, focusing alerts on the resources that process the most important data is a good way to shape the volume to a usable level.
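
That prioritization can be as simple as a lookup against the critical-asset list before anything pages a human. The sketch below is purely illustrative; the asset names and alert structure are assumptions, not any particular product’s schema.

# Illustrative triage filter: only surface alerts for high-priority assets.
# Hostnames and alert fields are hypothetical examples.
CRITICAL_ASSETS = {"db01.example.internal", "fileserver.example.internal"}

alerts = [
    {"host": "db01.example.internal", "rule": "suspicious-login"},
    {"host": "kiosk07.example.internal", "rule": "adware-beacon"},
]

for alert in alerts:
    if alert["host"] in CRITICAL_ASSETS:
        print(f"PAGE: {alert['rule']} on {alert['host']}")
    else:
        print(f"LOG ONLY: {alert['rule']} on {alert['host']}")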

Vulnerability management

Continuous Vulnerability Management (CVM) is, in my opinion, one of the lower priority controls from the basic control group. The main reason I feel this way is that it may require a more complicated setup than the rest of the controls in this group, and the results can be somewhat overwhelming. I also think that a buttoned-up implementation of most of the rest of the basic controls can mitigate a good deal of what CVM is designed to address. Again, that’s my opinion—and if we’re talking about that proverbial team-of-one, there are other things that need more immediate attention.

Moving beyond basic

Within the 20 controls, things get a little muddier when you get beyond the basic group. This isn’t a piece about the controls themselves, but the value provided by the basic control group is too significant not to explore in detail. As for the other 14 controls, remember that some of them will support the protection of the organization’s critical data and resources (to varying degrees) and some will not. The idea is to prioritize those that will have the biggest positive impact in protecting the resources that handle the organization’s most precious data, and perhaps leave the others for periodic reconsideration.

With the true foundations laid above, you’d have a solid base on which any number of additional capabilities can be built. I’d also suggest that the very process of building a basic program from just the first six critical controls establishes immense knowledge of the IT environment and the organization’s priorities and sensitivities. The team-of-one will also have a much better idea of where the gaps in their sight picture are, which in turn suggests where the next priorities for investment and attention lie.

Increasing visibility

The opportunities for a security program at this more advanced stage of maturity are widely varied, and their impact varies just as much. In broad terms, I’d suggest that increasing visibility delivers the biggest value for the investment. There is a perpetual debate over whether network or endpoint visibility provides better value, and, frankly, where you stand in that argument is a matter of necessity and preference. For the purposes of this hypothetical scenario, let’s just say that both are important and consider them in alphabetical order.

Endpoint security

Endpoint solutions, which have arrived on the security scene in force over the past decade, provide massive depth and breadth of visibility across the entire fleet of systems in an enterprise. There are many solutions at a variety of price points. I’ll point to one free resource specifically that captures a great deal of useful endpoint telemetry: the suggested Sysmon configuration maintained by the Twitter personality known as SwiftOnSecurity, which has become a great resource for security teams of all sizes. Data collected includes metadata from each process that is started, network connections, service actions, registry modifications, driver loads, filesystem operations, and more.
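
To get a feel for what such a configuration enables before deploying it, you can summarize it programmatically. The sketch below assumes a local copy of the SwiftOnSecurity config (sysmonconfig-export.xml from github.com/SwiftOnSecurity/sysmon-config) and the modern RuleGroup-based schema; older configs may be laid out differently.

# Sketch: summarize which event types a Sysmon config enables.
# Assumes a local copy of sysmonconfig-export.xml using the
# RuleGroup-based schema.
import xml.etree.ElementTree as ET
from collections import Counter

tree = ET.parse("sysmonconfig-export.xml")
counts = Counter()
for rule_group in tree.iter("RuleGroup"):
    for event in rule_group:
        # Child tags are event types: ProcessCreate, NetworkConnect, etc.
        counts[(event.tag, event.get("onmatch"))] += 1

for (tag, onmatch), n in sorted(counts.items()):
    print(f"{tag} (onmatch={onmatch}): {n} rule block(s)")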

Even if our team-of-one can’t fully operationalize endpoint data right away, it serves as another log source that is useful for post-incident investigations, as well as a data set that will be helpful for building future capabilities in the endpoint space.

Network security

From a network perspective, I have two primary recommendations. First, collect NetFlow from the routing infrastructure. This session data does not include the contents of network communications, but is a summary of each communication made in the environment. Cloud-based providers offer this in various forms as well. This benefits our team-of-one by allowing them to quickly query months of network traffic summaries. Collecting network metadata is a great way to get fast answers to questions like, “did any of our hosts communicate with this recently-identified bad IP?” or “which hosts transmitted an extremely large amount of data out of the environment?”
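
Once flow records are landing somewhere queryable, those questions become one-liners. As a hedged sketch, assuming flows have been exported to a CSV (the file name and column names below stand in for whatever your collector produces), checking for contact with a known-bad IP might look like this:

# Sketch: answer "did anything talk to this bad IP?" from flow records.
# Assumes flows exported to CSV; file and column names are placeholders.
import csv

BAD_IP = "203.0.113.50"  # example address from the documentation range

with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        if BAD_IP in (row["src_ip"], row["dst_ip"]):
            print(row["src_ip"], "->", row["dst_ip"], row["bytes"], "bytes")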

The other big win from a network perspective is a Network Security Monitoring (NSM) platform. NSM provides a high-level accounting of various important network artifacts. This may include DNS queries and responses, URLs or hostnames of websites visited, and more. I strongly feel that the Zeek NSM is the best option in the field. Zeek has a free distribution suitable for small-to-medium sized organizations, and there is a commercial platform from Corelight (founded by Zeek’s creators) that scales to very large ones. Under either model, Zeek provides log files with artifacts from dozens of protocols, and there are numerous tools that can parse, visualize, and operationalize its content. Zeek is also integrated into many other open-source platforms such as Security Onion, which adds a great deal of additional functionality. While a full Security Onion deployment may be beyond the reach and needs of our team-of-one, it could be a good item on the roadmap for when more security team members come on board, or when the organization’s security apparatus is humming along without issue and new projects can be considered.
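
Because Zeek’s default logs are plain tab-separated text, even ad hoc analysis is approachable. As a small sketch against a conn.log (the path is a placeholder, and this assumes the default TSV format rather than JSON output), here’s one way to surface the hosts sending the most data out:

# Sketch: find the hosts sending the most data out, from a Zeek conn.log.
# Assumes the default tab-separated format; unset values appear as "-".
from collections import Counter

sent = Counter()
fields = []
with open("conn.log") as f:
    for line in f:
        if line.startswith("#fields"):
            fields = line.rstrip("\n").split("\t")[1:]
            continue
        if line.startswith("#") or not fields:
            continue  # skip other header lines until fields are known
        rec = dict(zip(fields, line.rstrip("\n").split("\t")))
        if rec.get("orig_bytes", "-") != "-":
            sent[rec["id.orig_h"]] += int(rec["orig_bytes"])

for host, total in sent.most_common(10):
    print(f"{host}: {total} bytes sent")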

Tangentially…

One tangent I didn’t touch on directly is when to take on these functions internally and when to find an external partner to help. There is no easy recipe for this, but I generally recommend a plan that leverages the information priorities established at the outset of the process. Identify which functions will have the most positive impact on the organization’s most critical information. From the top of that list, identify those that can be handled internally and those that cannot. If implementing secure configurations across the enterprise is simply beyond the technical scope of our team-of-one, the task can’t just be hung out to dry; they need help, since this is one of the baseline fundamentals. This would be a candidate for outside assistance. On the other hand, if our team-of-one is well-versed in deploying those configurations, there’s no need to shift part of the budget to an outside party for something they can handle themselves.

Conclusion

Make no mistake about it: operating a security team-of-one is a daunting proposition. However, by doing a little bit of strategic planning, focusing on the basics, and building capabilities where they will have the most impact, even a solo security “team” can put together a great program that addresses the organization’s most important requirements.

 
