A security policy nobody reads protects nobody. Most security policies fail not because the rules are wrong but because they are written for auditors rather than the people who are supposed to follow them. AI can help produce policies that are clear, specific, and calibrated to the audience that needs to act on them.
Analysis Briefing
- Topic: Effective security policy writing using AI assistance
- Analyst: Mike D (@MrComputerScience)
- Context: An adversarial analysis prompted by Claude
- Source: Pithy Cyborg
- Key Question: What makes the difference between a policy that changes behavior and one that collects dust?
Why Most Security Policies Fail Before Anyone Reads Them
Security policies written for compliance purposes are optimized for the auditor, not the employee. They use legal language, passive voice, and abstract requirements. They are long, comprehensive, and boring. They sit in a shared drive, get acknowledged once during onboarding, and are never consulted again.
The employee who encounters a suspicious email does not think: let me check the acceptable use policy. They either act on instinct or ask a colleague. If the policy has not shaped their instinct through repeated, accessible communication, it has no effect on behavior.
The failure is not the content. Most security policies contain correct rules. The failure is the format: documents designed to demonstrate that rules exist rather than to help people follow them.
What a Behaviorally Effective Security Policy Actually Looks Like
An effective security policy is short, specific, and written in plain language. It tells the reader exactly what to do in the situations they actually encounter, not what the organization’s general posture is toward information security.
Instead of: “Employees shall exercise appropriate caution when handling electronic communications that may constitute social engineering attempts.”
Write: “If you receive an unexpected request to transfer money or change payment details, call the requester directly using a phone number you already have before doing anything. Do not use contact information from the suspicious message.”
The second version is longer in words but shorter in reading time because it requires no interpretation. The person who reads it once knows exactly what to do.
| Feature | The “Auditor” Policy | The “Behavioral” Policy |
| --- | --- | --- |
| Tone | Passive and legalistic | Active and instructional |
| Goal | Liability protection | Risk reduction |
| Accessibility | Hidden in a PDF | Pinned where work happens |
| Metric | “I Acknowledge” click-rate | Correct action during a test |
| Update cycle | Every 3 years (or never) | Continuous / annual |
How AI Speeds Up Policy Writing Without Replacing Judgment
AI can draft plain-language versions of technical security requirements faster than a human writer working from scratch. The process: describe the security requirement and the audience to the AI, ask for a plain-language version written for someone with no security background, then review and edit for accuracy and organizational context.
Claude and GPT-4o are both effective for this task. The output requires human review because AI-generated policy language may be clear but inaccurate about specific legal requirements, industry standards, or organizational specifics. Use AI for the writing and a human for the accuracy check, not the other way around.
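The drafting step itself is easy to script. As a minimal sketch, assuming nothing about your tooling (the function name and the prompt wording are my own illustrative choices, not a prescribed template), a helper that assembles the rewrite request before sending it to Claude or GPT-4o might look like:

```python
def plain_language_prompt(requirement: str, audience: str) -> str:
    """Build an LLM prompt asking for a plain-language policy rewrite.

    The wording below is an illustrative template: it pins the audience,
    demands concrete actions, and forbids legal or abstract language.
    """
    return (
        f"Rewrite the following security requirement for {audience}, "
        "who have no security background.\n"
        "- Use active voice and second person ('you').\n"
        "- State the exact action to take in the situations they encounter.\n"
        "- Avoid legal or abstract language.\n\n"
        f"Requirement: {requirement}"
    )


prompt = plain_language_prompt(
    "Employees shall exercise appropriate caution when handling "
    "electronic communications that may constitute social engineering "
    "attempts.",
    "customer support staff",
)
print(prompt)
```

The output of this function goes to whichever model you use; the human accuracy check described above happens on what comes back, not on the prompt.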
The Risk of the “Genericism Trap”
While AI excels at translating “legalese” into plain English, it often defaults to industry averages. A policy drafted entirely by a prompt may miss the specific “tribal knowledge” that makes your business unique, such as the exact Slack channel used for reporting or the specific name of your wire transfer portal. AI provides the clarity, but you must provide the “local coordinates” to ensure the policy feels relevant to your specific office culture.
The Communication Layer That Makes Policies Work
A written policy is the starting point, not the finished product. The behaviors the policy requires need to be reinforced through regular communication: short reminders, scenario-based training, and immediate feedback when policy violations occur.
The most effective security awareness programs treat policy communication like marketing: repeated, varied, and designed to be remembered rather than merely acknowledged. A monthly one-paragraph email that describes a real attack scenario and connects it to a specific policy action is more effective than an annual all-hands training session.
The Living Document Problem
Security policies become stale. Threats evolve, tools change, and the specific guidance that was appropriate two years ago may be wrong today. A policy that recommends SMS MFA as a strong second factor is actively harmful if published after SMS MFA bypass became routine.
Assign policy ownership to a specific person and schedule an annual review. The review does not need to rewrite the policy from scratch. It needs to check whether the specific recommendations are still accurate given the current threat environment.
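The review schedule is simple enough to automate as a reminder. A minimal sketch, assuming a hypothetical policy register (the field names and dates are invented for illustration, not a standard schema):

```python
from datetime import date, timedelta


def review_is_due(last_reviewed: date, today: date, cycle_days: int = 365) -> bool:
    """Return True when a policy's scheduled review date has passed.

    cycle_days defaults to the annual cycle recommended above.
    """
    return today >= last_reviewed + timedelta(days=cycle_days)


# Hypothetical policy register: policy name -> date of last review.
policies = {
    "phishing-response": date(2024, 1, 15),
    "payment-verification": date(2025, 6, 1),
}

today = date(2025, 7, 1)
due = [name for name, reviewed in policies.items() if review_is_due(reviewed, today)]
print(due)  # only the policy reviewed more than a year ago is flagged
```

A check like this, run monthly, turns "assign an owner and schedule a review" from a good intention into a recurring ticket.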
Why Onboarding Is the Highest-Leverage Policy Moment
New employees are the most receptive security policy audience you will ever have. They are motivated to understand the rules, they have not yet developed shortcuts and workarounds, and they are forming habits that will persist for years.
Most organizations use onboarding to present a compliance acknowledgment form. This is the lowest-value use of the highest-leverage moment.
A security onboarding session that takes 30 minutes and covers the three most important scenarios in plain language (what to do with a suspicious email, how to verify an unusual payment request, and how to report a potential incident) produces more durable behavior change than any amount of subsequent training.
The scenarios should be specific to your business. A freelancer platform has different risks than a medical practice. Generic security training covers generic risks. Onboarding that covers the specific threats your specific environment faces is what sticks.
How to Actually Measure Whether Your Policy Is Working
Most organizations have no idea whether their security policies are changing behavior. They know whether policies exist and whether employees have acknowledged them. They do not know whether employees can act on them correctly.
The simplest measurement is a scenario test: after policy communication, present a small group of employees with a realistic scenario and ask what they would do. No tricks, no grades. Just observe whether the policy guidance is accessible when they need it.
Phishing simulation platforms like KnowBe4 and Proofpoint Security Awareness Training send realistic phishing emails to employees and measure click rates and reporting rates over time. A downward trend in click rates and an upward trend in reports is behavioral evidence that awareness is improving.
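The trend these platforms report reduces to two series per campaign: click rate and report rate. A toy sketch of that computation (the campaign numbers are invented for illustration and do not come from any real platform):

```python
def rates(campaigns: list[dict]) -> list[tuple[float, float]]:
    """For each simulated phishing campaign, return (click_rate, report_rate)."""
    return [
        (c["clicked"] / c["sent"], c["reported"] / c["sent"])
        for c in campaigns
    ]


# Invented quarterly results: clicks falling and reports rising is the goal.
history = [
    {"sent": 200, "clicked": 38, "reported": 10},
    {"sent": 200, "clicked": 22, "reported": 31},
    {"sent": 200, "clicked": 11, "reported": 54},
]

series = rates(history)
click_trend_down = all(a[0] > b[0] for a, b in zip(series, series[1:]))
report_trend_up = all(a[1] < b[1] for a, b in zip(series, series[1:]))
print(click_trend_down, report_trend_up)  # True True
```

Either trend on its own is ambiguous; the pair together is the behavioral evidence the section describes, since fewer clicks with more reports means employees are both recognizing and escalating.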
Track your incident reporting rate. In a healthy security culture, reports go up, not down: employees surface suspicious activity instead of ignoring it. A low reporting rate does not mean nothing suspicious is happening. It means employees are not reporting it, which is a policy and culture problem with a policy and culture solution.
What This Means For You
- Rewrite your most important security policies in plain language using AI assistance. Start with the phishing policy and the payment verification policy. These cover the highest-risk scenarios most employees encounter.
- Test policy comprehension by asking a non-technical employee to read it and explain what they would do in a specific scenario. If they cannot, the policy needs rewriting.
- Publish policies where people actually work, not in a shared drive nobody navigates. Pin the phishing policy in your team Slack channel. Send it as an email. Put it where behavior happens.
- Schedule a policy review date when you publish. A policy with no review date will not be reviewed.
Enjoyed this deep dive? Join my inner circle:
- Pithy Cyborg → AI news made simple without hype.
