Introduction
Building a detection program with five people sounds like a constraint. It is actually a forcing function. Small teams cannot afford redundancy, ceremony, or tools that require a dedicated operator. Every decision has to count.
Most detection programs fail not because of headcount but because of scope creep. Teams try to detect everything, alert on everything, and respond to everything. The result is a queue nobody trusts, analysts who are burned out by month three, and a SIEM that costs $400K a year and produces noise. Five people cannot run that program. They should not try.
What five people can do is build a detection program that is scoped to your actual threat model, tuned to your environment, and designed to degrade gracefully when someone is out sick or on vacation. That requires different thinking than what most detection frameworks assume. This article is about that thinking.
Start With Your Threat Model, Not a Detection Framework
Every major detection framework, MITRE ATT&CK included, was built to describe the full universe of adversary behavior. It was not built to tell a five-person team what to detect first. Using ATT&CK as a detection backlog is how teams end up with 400 open detection engineering tickets and zero completed ones.
Before you write a single detection rule, answer three questions: Who actually targets organizations like yours? What do they do after initial access? What data sources do you already have that would show that activity? The answers narrow your scope from hundreds of techniques to a manageable set of 20 to 30 that matter for your environment.
Your threat model does not need to be a 60-page document. A one-page matrix mapping threat actor profiles to your crown jewels to your existing telemetry is enough to make prioritization decisions. That document also travels well to board conversations when you need to explain why you are not detecting everything.
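One way to keep that one-page matrix honest is to store it as data rather than prose. The sketch below is purely illustrative: the actor profiles, ATT&CK technique IDs, and telemetry names are placeholders for your own, and the shape matters more than the contents.

```python
# threat_model.py - a one-page threat model as version-controlled data.
# Every actor profile, crown jewel, and telemetry name here is a
# hypothetical placeholder; substitute your own.

THREAT_MODEL = [
    {
        "actor_profile": "financially motivated ransomware affiliate",
        "crown_jewels": ["file servers", "backup infrastructure"],
        # T1078 valid accounts, T1021.001 RDP, T1486 data encrypted for impact
        "likely_techniques": ["T1078", "T1021.001", "T1486"],
        "telemetry_we_have": ["windows_security_events", "edr", "vpn_logs"],
    },
    {
        "actor_profile": "credential-phishing crew",
        "crown_jewels": ["email", "sso_identity_provider"],
        # T1566 phishing, T1110.003 password spraying
        "likely_techniques": ["T1566", "T1110.003"],
        "telemetry_we_have": ["idp_signin_logs", "email_gateway_logs"],
    },
]

def detectable_techniques(model):
    """Techniques we could plausibly detect today: the ones with telemetry behind them."""
    return sorted({t for row in model
                   if row["telemetry_we_have"]
                   for t in row["likely_techniques"]})

print(detectable_techniques(THREAT_MODEL))
```

A file like this doubles as the prioritization artifact: the detection backlog is the output of a query against it, not a separate document that drifts out of sync.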
The Rule of Thirds Applied to a Five-Person Detection Team
A five-person detection team needs three functional roles covered, not five different job titles. Think in thirds: detection engineering, alert triage and response, and program management or risk translation. You will not have a perfect split. Someone will wear two hats. The goal is to make sure no function is completely uncovered.
The breakdown that works in practice looks like this:
- 2 detection engineers: rule writing, data source onboarding, coverage gap analysis
- 2 analysts: triage, investigation, escalation, runbook execution
- 1 program lead: metrics, vendor relationships, board reporting, roadmap ownership
The program lead role is the one most small teams skip. They hire five analysts and wonder why the board never understands what the team does. Someone has to translate detection coverage into business risk language. That is not a part-time job you add to an engineer's plate.
Coverage Over Depth: How to Scope Detection for a Small Team
Depth-first detection means building highly tuned, low-noise detections for a narrow set of techniques. Breadth-first means covering more techniques while tolerating higher false positive rates. Small teams need the breadth-first, coverage-oriented approach, but with a twist: you only cover the techniques that map to your threat model.
A practical coverage model for a five-person team targets three tiers:
- Tier 1 (always-on, high confidence): 10 to 15 detections covering your highest-probability attack paths. These should have false positive rates below 5% and documented runbooks.
- Tier 2 (monitored, medium confidence): 20 to 30 detections covering secondary techniques. Reviewed weekly, not in real time.
- Tier 3 (logged, not alerted): Everything else. You collect the data. You do not alert on it. You query it during investigations.
Tier 3 is where most teams waste analyst time. They alert on everything they can see. A five-person team cannot sustain that. Log it, index it, and pull it when you need it. That is not a gap in your program. That is a deliberate architectural decision.
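One way to make the tiering enforceable rather than aspirational is to carry the tier as metadata on every detection, so routing is a property of the rule instead of a per-alert judgment call. A minimal sketch of that idea; the class and field names are invented for illustration, not taken from any particular platform:

```python
# Detection inventory entry with an explicit tier. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    name: str
    technique: str        # ATT&CK technique ID
    tier: int             # 1 = real-time alert, 2 = weekly review, 3 = log only
    runbook_url: str = "" # required for tier 1
    last_tested: str = "" # ISO date of last validation

def route(detection: Detection) -> str:
    """Where an event matching this detection goes, by tier."""
    if detection.tier == 1:
        return "pager"                # always-on, high confidence
    if detection.tier == 2:
        return "weekly_review_queue"  # batched, not real time
    return "index_only"               # tier 3: collected, queryable, never alerted

rule = Detection(name="rdp_from_new_country", technique="T1021.001",
                 tier=1, runbook_url="https://wiki.internal/runbooks/rdp")
print(route(rule))  # -> "pager"
```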
Tool Selection When You Cannot Afford a Dedicated Operator
Every tool in your detection stack needs to answer one question before you buy it: who runs this when your best engineer is on vacation? If the answer is nobody, you have a single point of failure, not a capability.
Small teams should bias toward platforms over point solutions. A SIEM that needs a dedicated content engineer to keep it tuned is a liability at this team size. Managed detection and response vendors, cloud-native security tooling with built-in detections, and SOAR platforms with pre-built playbooks reduce the operational burden without reducing coverage.
The total cost of ownership conversation matters here. A vendor's TCO calculator conveniently leaves out the 0.5 FTE your team will spend maintaining their integration. When you are evaluating tools with a five-person team, add a line item for operational overhead. If a tool costs $80K per year but requires 20% of an engineer's time to maintain, the real cost is closer to $110K once you price that time at a fully loaded salary. That math changes the decision.
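The adjustment is simple enough to live in a spreadsheet or a few lines of code. A sketch of that math, assuming a $150K fully loaded engineering salary (swap in your own number):

```python
def true_annual_cost(license_cost: float, maintenance_fte: float,
                     fully_loaded_salary: float = 150_000) -> float:
    """License cost plus the engineer time the tool quietly consumes."""
    return license_cost + maintenance_fte * fully_loaded_salary

# The $80K tool that needs 20% of an engineer to stay integrated:
print(true_annual_cost(80_000, 0.20))  # -> 110000.0
```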
Metrics That Actually Tell You If Your Detection Program Is Working
Mean time to detect and mean time to respond are the metrics your board has heard of. They are also the metrics most easily gamed. A team that closes alerts without investigating them will have excellent MTTD and MTTR numbers and zero actual detection capability.
The metrics that tell you whether your program is working are harder to collect but more honest:
- Detection coverage ratio: what percentage of your Tier 1 threat model techniques have an active, tested detection?
- Alert fidelity rate: what percentage of alerts result in a confirmed true positive or a documented false positive disposition?
- Runbook coverage: what percentage of your Tier 1 detections have a documented response runbook?
- Detection age: how many of your active detections have not been reviewed or tested in more than 90 days?
Detection age is the entropy metric. Controls degrade. Environments change. A detection written for your old Active Directory environment may not fire correctly after your Azure AD migration. Reviewing detection age quarterly is how you catch drift before it becomes a gap.
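All four metrics fall out of a single detection inventory if you record technique, test status, and review dates as structured fields. A minimal sketch of the first and last metrics, with illustrative field names:

```python
from datetime import date, timedelta

def coverage_ratio(tier1_techniques: set, detections: list) -> float:
    """Share of Tier 1 threat-model techniques with an active, tested detection."""
    covered = {d["technique"] for d in detections if d["tested"]}
    return len(tier1_techniques & covered) / len(tier1_techniques)

def stale_detections(detections: list, max_age_days: int = 90) -> list:
    """The entropy metric: detections past the review window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [d["name"] for d in detections
            if date.fromisoformat(d["last_reviewed"]) < cutoff]

inventory = [
    {"name": "rdp_from_new_country", "technique": "T1021.001",
     "tested": True, "last_reviewed": "2024-01-15"},
    {"name": "password_spray", "technique": "T1110.003",
     "tested": True, "last_reviewed": "2025-06-01"},
]
print(coverage_ratio({"T1021.001", "T1110.003", "T1486"}, inventory))  # ~0.67
print(stale_detections(inventory))  # whatever is past the 90-day window
```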
Automation Is Not Optional at This Team Size
Five people cannot manually triage 200 alerts a day and also write new detections and also respond to incidents and also report to the board. Something will get dropped. Without automation, it is usually detection engineering and reporting, which means your coverage stagnates and your leadership has no visibility.
The automation investments that pay off fastest for small teams are alert enrichment, not alert generation. Automatically pulling asset context, user risk scores, and threat intelligence into an alert before an analyst sees it cuts triage time by 40 to 60% in most environments. That is the difference between an analyst handling 15 alerts a day and handling 25.
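The pattern is a thin pipeline that runs before an alert ever reaches the queue. A minimal sketch; `asset_context`, `user_risk`, and `ti_lookup` are hypothetical stubs standing in for whatever inventory, identity, and threat intelligence sources you actually have:

```python
# Alert enrichment pipeline: attach context before the analyst sees the alert.
# The three lookup functions are placeholders for your real data sources.

def asset_context(hostname: str) -> dict:
    return {"owner": "it-ops", "criticality": "high"}  # e.g. from your CMDB

def user_risk(username: str) -> int:
    return 72                                          # e.g. from your IdP risk engine

def ti_lookup(ip: str) -> dict:
    return {"verdict": "suspicious"}                   # e.g. from a TI platform

def enrich(alert: dict) -> dict:
    """Run every enrichment; a failed lookup should degrade, not block triage."""
    enriched = dict(alert)
    for key, fn, arg in [("asset", asset_context, alert.get("hostname", "")),
                         ("user", user_risk, alert.get("username", "")),
                         ("intel", ti_lookup, alert.get("src_ip", ""))]:
        try:
            enriched[key] = fn(arg)
        except Exception as exc:
            enriched[key] = {"error": str(exc)}  # annotate, never drop the alert
    return enriched

print(enrich({"rule": "rdp_from_new_country", "hostname": "fs01",
              "username": "jsmith", "src_ip": "203.0.113.7"}))
```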
SOAR is worth the investment at this team size, but only if you start with three to five playbooks and actually finish them before buying more. Most teams buy a SOAR platform, build 40% of a playbook, and then move on. A half-built playbook does not save analyst time. It creates a maintenance burden.
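A finished playbook at this team size can be embarrassingly small, which is the point: end-to-end and maintainable beats elaborate and 40% built. Here is a sketch of one complete phishing-triage playbook, with every integration stubbed and every name hypothetical:

```python
# One small but complete playbook: every path ends in a documented disposition.
# quarantine_message, notify_user, and close_alert are stubs for your
# email gateway, chat, and case management integrations.

def quarantine_message(message_id: str) -> None: ...
def notify_user(username: str, text: str) -> None: ...
def close_alert(alert_id: str, disposition: str, note: str) -> None: ...

def phishing_playbook(alert: dict) -> str:
    verdict = alert.get("intel", {}).get("verdict", "unknown")
    if verdict == "malicious":
        quarantine_message(alert["message_id"])
        notify_user(alert["username"], "A phishing message sent to you was removed.")
        close_alert(alert["id"], "true_positive", "auto-quarantined via playbook")
        return "contained"
    if verdict == "benign":
        close_alert(alert["id"], "false_positive", "TI verdict benign")
        return "closed"
    return "escalate_to_analyst"  # anything ambiguous goes to a human

print(phishing_playbook({"id": "A-1", "username": "jsmith",
                         "message_id": "m-9", "intel": {"verdict": "malicious"}}))
```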
What to Tell the Board When They Ask Why You Are Not Detecting Everything
Your board will eventually ask why a threat actor used a technique you did not detect. The wrong answer is a technical explanation of why that technique is hard to detect. The right answer is a risk-based explanation of why you prioritized other techniques first.
Frame your detection program as a portfolio, not a checklist. You have made deliberate investments in the highest-probability attack paths against your specific environment. You have accepted residual risk on lower-probability techniques because the cost of covering them exceeds the expected loss. That is a business decision, not a security failure.
Bring a one-page coverage map to board meetings. Show which threat actor profiles you are covered against, which techniques you detect in each stage of the attack chain, and what your current gap remediation roadmap looks like. Boards do not need to understand MITRE ATT&CK. They need to understand that you have a plan and that you are executing against it.
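If your detection inventory already carries actor profiles and attack-chain stages, that one-page map is a query, not a deliverable you assemble by hand the night before the meeting. A sketch, with illustrative stage and actor labels:

```python
from collections import defaultdict

def coverage_map(detections: list) -> dict:
    """Count active detections per (actor profile, attack stage) cell."""
    grid = defaultdict(int)
    for d in detections:
        for actor in d["actor_profiles"]:
            grid[(actor, d["stage"])] += 1
    return dict(grid)

inventory = [
    {"name": "rdp_from_new_country", "stage": "lateral_movement",
     "actor_profiles": ["ransomware_affiliate"]},
    {"name": "password_spray", "stage": "initial_access",
     "actor_profiles": ["ransomware_affiliate", "phishing_crew"]},
]
for (actor, stage), count in sorted(coverage_map(inventory).items()):
    print(f"{actor:22} {stage:18} {count} detection(s)")
```

Empty cells in the grid are your gap remediation roadmap, which is exactly the artifact the board question calls for.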
Building Resilience Into a Program That Cannot Afford Redundancy
A five-person team has no redundancy by default. One person out sick during an incident is a 20% capacity reduction. Two people out is a crisis. Resilience has to be designed in, not hoped for.
Runbooks are your primary resilience mechanism. Every Tier 1 detection should have a runbook that a competent analyst can execute without asking the detection engineer who wrote the rule. If your runbooks require tribal knowledge to use, they are not runbooks. They are notes.
Cross-training is the second mechanism. Each analyst should be able to perform basic detection engineering tasks: querying raw logs, modifying an existing rule, and onboarding a new data source. Each engineer should be able to triage and close a Tier 1 alert without escalating. You are not building specialists. You are building a team that can cover for each other when it matters.
Frequently Asked Questions
How much does a five-person detection program cost?
Personnel costs will dominate. Five security engineers and analysts in a mid-market environment typically run $600K to $900K in fully loaded compensation. Tooling should be budgeted at 20 to 30% of personnel cost, which puts you in the $120K to $270K range for SIEM, EDR, threat intelligence, and SOAR. If your tooling budget is significantly higher than that ratio, you are probably over-tooled for your team size.
Conclusion
A five-person detection program is not a scaled-down version of a 20-person program. It is a different kind of program, one built on deliberate scope, automation, and resilience by design. The teams that succeed at this size are the ones that resist the pressure to detect everything and instead build deep, reliable coverage against the threats that actually matter to their business. That is a harder argument to make internally than buying more tools. It is also the argument that produces a program your team can actually sustain.
