Introduction
Most security programs have a graveyard of controls that nobody questions. Quarterly access reviews that take 40 hours of analyst time and produce zero remediations. Vulnerability scans that run on schedule and feed a backlog nobody works. Phishing simulations that measure click rates but never connect to actual incident data. These controls exist because someone added them, they passed an audit, and now they are part of the program. Removing them feels like a risk. So they stay.
This is ceremonial security. It is not a failure of intent. It is a failure of measurement. Your team is busy performing security theater for auditors, for compliance frameworks, and for the quarterly board slide that shows green across every control category. Meanwhile, the controls that actually reduce risk are underfunded, understaffed, or missing entirely. The ceremony crowds out the substance.
The hard question is not whether your controls are documented. It is whether they are working. And working means something specific: measurable reduction in likelihood or impact of a real threat. If you cannot answer that question for every major control in your program, you have ceremonial security. This article is about how to find it, measure it, and replace it with something that actually moves the needle.
Why Ceremonial Controls Survive Budget Cuts and Headcount Freezes
Ceremonial controls are sticky because they are visible. They produce artifacts: reports, tickets, completion percentages, audit evidence. Auditors love them. Compliance frameworks reference them. Your GRC tool shows them as green. That visibility creates organizational inertia that is very hard to overcome, especially when you are trying to justify cutting something that looks like security.
The real driver is risk aversion in the wrong direction. Security leaders fear the conversation where they say 'we stopped doing X' and then X becomes the attack vector. So controls accumulate. Each one made sense when it was added. None of them get retired. The program grows heavier every year, and the team spends more time maintaining the ceremony than building actual defenses.
Budget pressure accelerates this problem. When cuts come, leaders protect the visible controls because they are the ones tied to compliance requirements and audit findings. The invisible work (threat hunting, detection tuning, architecture review) gets cut first. You end up with a program that looks complete on paper and is hollow in practice.
The Four Categories of Ceremonial Security (And How to Spot Them)
Not all ceremonial controls look the same. They fall into four patterns that most mature programs share:
Compliance theater: Controls that exist to satisfy a framework requirement but are scoped so narrowly they cover almost nothing. A SOC 2 access review that covers 12 production accounts but ignores 400 developer workstations. A penetration test that tests the same three applications every year because they are the ones in scope.
Metric washing: Controls that produce numbers without producing insight. Phishing simulation click rates reported to the board without any correlation to actual credential compromise incidents. Patch compliance percentages that count patches applied but not patches that matter.
Ritual reviews: Recurring meetings and reports that consume analyst time but have no decision rights attached. A weekly vulnerability review where the same 200 critical findings have been sitting for six months because nobody owns remediation.
Vendor-driven activity: Controls that exist because a vendor sold you a product and the product needs to be used to justify the contract. Running daily DLP scans that generate 10,000 alerts and result in zero policy changes.
The common thread is that none of these controls have a clear answer to the question: what would be different if this control failed? If the answer is 'nothing would change,' you have found a ceremonial control.
How to Audit Your Own Program for Controls That Do Not Reduce Risk
Start with your control inventory. If you do not have one, that is the first problem. Every control in your program should have an owner, a threat it addresses, a measurement method, and a last-reviewed date. Most programs have the first two. Almost none have the last two.
For each control, ask three questions. First: what threat scenario does this control reduce? Be specific. Not 'insider threat' but 'privileged user exfiltrating customer data via USB.' Second: how do you know the control is working? Not 'it ran' but 'it detected or prevented X incidents in the last quarter.' Third: what is the cost of this control in analyst hours, tool spend, and opportunity cost?
Run this exercise with your team leads, not alone. Your analysts know which controls are theater. They have been performing them for years and they know which ones produce nothing. Give them permission to say so. You will surface more ceremonial controls in one working session than in six months of solo analysis.
The output should be a simple scoring matrix:
| Control | Threat Addressed | Measurable Outcome | Annual Cost (Hours + $) | Recommendation |
|---|---|---|---|---|
| Quarterly access review | Privilege creep | Zero remediations in 4 quarters | 160 hrs + $0 tool | Redesign or retire |
| Weekly vuln scan | Known exploits | 12% patch rate on criticals | 40 hrs + $18K/yr | Fix remediation process |
| Phishing simulation | Credential phishing | 8% click rate, no incident correlation | 20 hrs + $12K/yr | Connect to IR data or cut |
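The matrix above can also live in code so the audit is repeatable each quarter rather than a one-off spreadsheet. The sketch below is a minimal illustration: the `Control` fields, the $75/hour loaded analyst rate, and the cost-per-outcome threshold are all assumptions you would tune to your own program, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    threat: str              # specific threat scenario, not a vague category
    outcomes_last_year: int  # incidents detected/prevented or remediations produced
    annual_hours: int        # analyst time consumed
    annual_spend: float      # tool and licensing cost, dollars

HOURLY_RATE = 75  # illustrative loaded analyst rate; substitute your own

def recommend(c: Control) -> str:
    """Flag controls whose measured output does not justify their cost."""
    if c.outcomes_last_year == 0:
        return "redesign or retire"   # pure ceremony: cost with no outcome
    cost_per_outcome = (c.annual_hours * HOURLY_RATE + c.annual_spend) / c.outcomes_last_year
    if cost_per_outcome > 10_000:     # threshold is illustrative, tune it
        return "fix the process"
    return "keep"

review = Control("Quarterly access review", "privilege creep", 0, 160, 0)
print(recommend(review))  # -> redesign or retire
```

Even this crude version forces the two fields most programs never record: a measurable outcome and a cost.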
Access Reviews: The Control Everyone Hates and Nobody Fixes
Access reviews are the canonical example of ceremonial security. Every compliance framework requires them. Every security team dreads them. And in most organizations, they produce almost no actual access changes. A 2023 survey of mid-market security teams found that fewer than 15% of access review findings resulted in access being revoked within 30 days. The rest sat in ticketing systems or were marked 'accepted' by managers who did not understand what they were approving.
The problem is not the control. The problem is the design. Most access reviews are point-in-time snapshots that ask managers to certify access they did not grant and do not understand. The manager clicks approve because the alternative is a conversation they do not want to have. The auditor sees a completed review. Nothing changes.
A working access review has three properties the ceremonial version lacks. It is continuous, not quarterly. It is scoped to high-risk access, not all access. And it has automatic remediation for clear violations, not a ticket that waits for a manager to act. If your access review does not have all three, you are performing a ritual, not managing risk.
The fix is not more tooling. It is redesigning the control around outcomes. Define what 'access that should not exist' looks like. Build detection for it. Automate revocation for the clear cases. Reserve human review for the ambiguous ones. That is a control that reduces risk. The quarterly spreadsheet is not.
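The triage logic behind that redesign is small. The sketch below assumes you can pull access grants with a last-used timestamp and an account-status flag from your IdP or IGA tool; the record shape, the 90-day staleness window, and the routing labels are all hypothetical, but the shape of the control is the point: clear violations are revoked automatically, and only the ambiguous cases reach a human.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical grant records; in practice these come from your IdP or IGA export.
grants = [
    {"user": "alice", "entitlement": "prod-db-admin",
     "last_used": datetime(2024, 1, 5, tzinfo=timezone.utc), "user_active": False},
    {"user": "bob", "entitlement": "prod-db-admin",
     "last_used": datetime(2024, 5, 30, tzinfo=timezone.utc), "user_active": True},
]

STALE_AFTER = timedelta(days=90)  # illustrative threshold

def triage(grant, now=None):
    """Route a grant: auto-revoke clear violations, queue ambiguous ones for humans."""
    now = now or datetime.now(timezone.utc)
    if not grant["user_active"]:
        return "auto-revoke"    # inactive user with live access: no review needed
    if now - grant["last_used"] > STALE_AFTER:
        return "human-review"   # stale but active: a person decides
    return "keep"

for g in grants:
    print(g["user"], "->", triage(g, now=datetime(2024, 6, 1, tzinfo=timezone.utc)))
```

Run continuously, this replaces the quarterly certification click-through with decisions that actually change access.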
Vulnerability Management Theater: When Scanning Is Not the Same as Fixing
Most organizations scan for vulnerabilities on a regular cadence. Most organizations also have a critical vulnerability backlog that has been growing for years. These two facts coexist because scanning and remediation are treated as separate programs with separate owners and no shared accountability.
The scan runs. The report goes to the security team. The security team sends tickets to IT or engineering. The tickets sit. The next scan runs. The backlog grows. The board slide shows 'vulnerability management: active' and the CISO knows the number is meaningless.
Vulnerability management is only a real control when it has a closed loop: scan, prioritize by exploitability and business impact, assign with SLAs, track to closure, and measure mean time to remediate by severity. If any link in that chain is broken, the scan is theater. You are measuring your exposure, not reducing it.
Prioritization is where most programs fail. Scanning everything and treating all criticals equally means your team is working on a CVSS 9.8 vulnerability in a dev environment while a CVSS 7.2 with a public exploit sits in a customer-facing system. Risk-based prioritization, using threat intelligence and asset criticality, is not optional. It is the difference between a control that works and one that produces reports.
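The dev-versus-production example above can be made concrete with a simple ordering key. The weights below are illustrative assumptions, not a standard; the point is only that any scoring that accounts for exploit availability and asset criticality will reorder the queue away from raw CVSS.

```python
def risk_score(cvss: float, public_exploit: bool, asset_criticality: float) -> float:
    """Combine severity, exploitability, and business impact into one ordering key.

    Weights are illustrative; tune against your own asset inventory and threat intel.
    asset_criticality: 1.0 = dev/test, 3.0 = customer-facing production.
    """
    exploit_factor = 2.0 if public_exploit else 1.0
    return cvss * exploit_factor * asset_criticality

dev_critical = risk_score(9.8, public_exploit=False, asset_criticality=1.0)  # 9.8
prod_high    = risk_score(7.2, public_exploit=True,  asset_criticality=3.0)  # 43.2

# The "lower severity" finding in production outranks the dev-environment critical.
print(prod_high > dev_critical)  # -> True
```

Whatever formula you adopt, the test is the same: does the top of your remediation queue match the findings an attacker would actually use?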
Phishing Simulations: Measuring the Wrong Thing for a Decade
Phishing simulations became a standard control because they are easy to run, easy to report, and satisfy security awareness requirements in most compliance frameworks. The click rate metric is clean and boardroom-friendly. It is also almost entirely disconnected from actual phishing risk.
Here is what click rate does not tell you: whether your email security controls are catching real phishing attempts, whether users who click simulations are the same users who click real attacks, whether your incident response process for reported phishing is working, or whether credential compromise from phishing is actually declining. You have been measuring a proxy metric for ten years and calling it a security outcome.
A phishing simulation program that reduces risk looks different. It connects simulation data to actual incident data. It measures reporting rates, not just click rates. It tracks whether users who receive training change behavior on subsequent simulations. And it feeds into email security tuning, so the controls that catch real phishing get better over time. If your program does not do these things, you are running a compliance exercise, not a security control.
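Connecting simulation data to incident data is mostly a join. The sketch below uses fabricated placeholder sets to show the shape of the analysis; in practice the inputs come from your simulation platform's export and your IR ticketing system, and the user identifiers must match across both.

```python
# Hypothetical data pulls; names and numbers are placeholders.
sim_clickers  = {"alice", "bob", "carol"}   # clicked the last simulation
sim_reporters = {"dave", "erin"}            # reported it to the SOC
real_victims  = {"bob", "frank"}            # credential-compromise incidents, last quarter
population    = 50                          # users who received the simulation

click_rate  = len(sim_clickers) / population
report_rate = len(sim_reporters) / population

# The question click rate alone never answers: do simulation clickers
# overlap with the users actually compromised?
overlap = sim_clickers & real_victims

print(f"click {click_rate:.0%}, report {report_rate:.0%}, "
      f"overlap with real incidents: {sorted(overlap)}")
```

If the overlap is empty quarter after quarter, your simulation is selecting for the wrong behavior, and the click-rate slide is measuring nothing.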
How to Have the Conversation With Your Board About Controls That Are Not Working
Your board does not want to hear that controls they approved and funded are not working. But they need to hear it, and the way you frame it determines whether you get support or skepticism.
Do not lead with the failure. Lead with the opportunity. 'We have identified $400K in annual spend on controls that are not producing measurable risk reduction. Redirecting that spend to detection and response would reduce our mean time to detect from 21 days to under 5.' That is a business case. It is not a confession.
Bring the data. Show the control, the cost, the expected outcome, and the actual outcome. Show what you would do instead and what outcome you expect. Boards respond to specificity. 'We are replacing a quarterly manual access review with continuous automated monitoring. Same compliance coverage, one-third the analyst time, and actual revocations when violations are found.' That is a decision they can make.
Expect pushback on compliance risk. Have your answer ready. Most ceremonial controls can be replaced with controls that satisfy the same framework requirement while actually reducing risk. The access review example above satisfies SOC 2 CC6.3 whether it is quarterly and manual or continuous and automated. Know your framework requirements well enough to make that argument.
Building a Program That Measures Outcomes, Not Activity
The shift from ceremonial to substantive security is a measurement problem before it is a tooling problem. You need to define what 'working' means for each control before you can know whether it is working.
Start with your top five threat scenarios. Not MITRE ATT&CK categories. Actual scenarios relevant to your business: ransomware hitting your ERP system, a developer pushing credentials to a public repo, a third-party vendor being compromised and pivoting into your environment. For each scenario, map the controls that are supposed to prevent or detect it. Then ask whether those controls have ever actually detected or prevented anything resembling that scenario.
This exercise will surface gaps faster than any maturity assessment. You will find threat scenarios with no detective controls. You will find controls that are supposed to cover a scenario but have never been tested against it. You will find scenarios where your entire defense is a single preventive control with no detection layer behind it.
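The mapping exercise can be expressed as a small gap check. The inventory structure below is a hypothetical sketch (scenario names, control names, and the `ever_fired` field are all assumptions); it encodes the three gap types described above: no controls, no detection layer, and controls never tested against the scenario.

```python
# Hypothetical inventory: top threat scenarios mapped to controls,
# with whether each control has ever actually fired for that scenario.
scenarios = {
    "ransomware hits ERP": [
        {"control": "EDR on ERP hosts", "type": "detect",  "ever_fired": True},
        {"control": "offline backups",  "type": "prevent", "ever_fired": False},
    ],
    "credentials pushed to public repo": [
        {"control": "pre-commit secret scanning", "type": "prevent", "ever_fired": True},
    ],
    "vendor compromise pivots inward": [],
}

findings = []
for name, controls in scenarios.items():
    if not controls:
        findings.append(f"GAP: '{name}' has no mapped controls")
    elif not any(c["type"] == "detect" for c in controls):
        findings.append(f"GAP: '{name}' has no detection layer")
    elif not any(c["ever_fired"] for c in controls):
        findings.append(f"UNTESTED: no control for '{name}' has ever fired")

for f in findings:
    print(f)
```

Even a toy inventory like this surfaces the pattern the text describes: a scenario defended only by prevention, and a scenario defended by nothing at all.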
The goal is a program where every dollar and every analyst hour is traceable to a threat scenario and a measurable outcome. That is not a perfect program. It is a defensible one. And it is the kind of program that survives budget scrutiny, board questions, and the inevitable incident that tests whether your controls are real.
Frequently Asked Questions
Will replacing a ceremonial control with a different mechanism break our compliance posture?
Most compliance frameworks specify the outcome, not the implementation. SOC 2, ISO 27001, and NIST CSF all allow you to satisfy a control requirement with a different mechanism as long as you can demonstrate the intent is met. Document the replacement control, map it to the framework requirement, and get your auditor to agree before you make the change. Auditors generally prefer controls that work over controls that are ceremonial, as long as the paperwork is clean.
Conclusion
Ceremonial security is not a character flaw. It is what happens when programs grow without measurement discipline, when compliance drives investment decisions, and when removing a control feels riskier than keeping one that does not work. Every mature program has some of it. The question is whether you are willing to look for it, measure it honestly, and make the case for change. The leaders who do this work end up with smaller, faster, more defensible programs. The ones who do not end up explaining to their board why the incident happened despite all the green controls on the dashboard.
