My Product Security Principles
In my recent job search I read dozens of Product Security job descriptions. They all contain the same buzzword soup: shift-left, secure-by-default, defense in depth, paved roads. In practice, they mean different things at different companies — but what do they actually mean for the Product Security team?
What follows is my personal operating frame. One security engineer for every hundred in engineering is roughly where the industry sits — these are my principles for operating and succeeding in that reality. And I believe they hold in a world of vibe-coded apps and AI-accelerated production code.
1. Risk is the unit of work, not findings
Everything flows from business risk. Vulnerabilities, architecture gaps, compliance requirements — they’re all risks to be scored, prioritized, and decided on, and they belong in a risk register the business actually owns (part of the security team’s job is ensuring it does). Not a document security maintains in isolation, but a living record that business owners understand, contribute to, and sign off on.
Proactive and continuous risk reduction is how I formulate the security team’s mission.
2. Frame risk in business terms
A CVSS score means nothing to an executive. A risk item must answer: what’s the realistic scenario, what does it cost if it happens, what does it cost to fix, what’s your recommendation? When security decisions carry significant business risk, frame them in business language.
Seek explicit executive sign-off for security exceptions and risks above a materiality threshold — it moves accountability to where the decision lives.
Risk = Likelihood × Impact — the key formula that turns a vulnerability into a business decision.
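As a sketch of how that formula drives prioritization — the 1–5 scales, field names, and example entries here are illustrative, not a prescribed register format:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    """One row in the risk register, scored on illustrative 1-5 scales."""
    title: str
    likelihood: int  # 1 = unlikely, 5 = near-certain
    impact: int      # 1 = negligible, 5 = material business loss

    @property
    def score(self) -> int:
        # Risk = Likelihood x Impact
        return self.likelihood * self.impact

register = [
    RiskItem("Stored XSS in admin panel", likelihood=4, impact=3),
    RiskItem("Stale IAM key in CI", likelihood=2, impact=5),
    RiskItem("Verbose stack traces in prod", likelihood=5, impact=1),
]

# Prioritize by score, highest business risk first
for item in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{item.score:>2}  {item.title}")
```

The point of the exercise isn’t the arithmetic — it’s that every row forces a likelihood and an impact estimate a business owner can argue with and sign off on.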
3. Security architecture starts at design time
A whiteboard conversation at design time costs an hour. A redesign after implementation costs a sprint. Security belongs at the beginning of the design process, not at the end as a gate. The goal is to be the person engineers call when they’re designing, not when they’re shipping.
Don’t overthink threat modeling. Formal frameworks have their place, but if the overhead of the methodology is slowing teams down, drop it. A napkin sketch of trust boundaries and a list of “what could go wrong” is a threat model. An imperfect one done at design time beats a rigorous one that never happens.
Good security architecture is transparent. Document it publicly — it builds trust, and it’s the right counterweight to security through obscurity. If the design is sound, exposure doesn’t weaken it. If your code ever leaks, there should be no secrets in it worth finding.
4. Assume controls fail — design and test for it
The operating posture is proactive, not reactive — find the gaps before an attacker or a customer does. No single control holds forever. The question after every architectural decision is: when this fails, what’s the worst reachable outcome? Isolation, least privilege, and short-lived credentials aren’t redundancy — they’re blast radius reduction. Treat defense in depth as a system property.
Designing for failure isn’t enough — validate that your controls actually perform as designed. Security audits, red and purple team exercises, and bug bounty programs all serve the same function: actively probing your own assumptions.
5. Friction is the enemy
Culture is a big one. Most engineers want to build secure software — they’re just operating under deadlines, competing priorities, and finite cognitive bandwidth. When security loses, it’s usually not because engineers don’t care; it’s because the secure path was harder than it needed to be, or they simply didn’t know what it was. Security expertise isn’t a given — engineers are experts in their domain, not ours.
Every process, template, and gate should make the secure choice the default, not the tax. Security has to live inside the workflows engineers already use — a separate system they have to visit is a system that will fail adoption. Friction reduction is the mechanism; the goal is cultural — security becoming a natural part of how the team ships.
6. Influence over formal authority
Security teams often have no direct power over engineering decisions. Authority comes from technical credibility, consistent judgment, and being right often enough that people seek your input. A security control engineers chose is worth more than one you mandated.
Influence runs in both directions. Top-down: executive sponsorship sets the tone and makes security non-negotiable at the policy level. Bottom-up: invest in building relationships with engineering teams — understand their roadmaps, empathize with their pressures — that’s where actual adoption happens.
7. Partners, not adversaries
Competing with engineering for resources — security work versus features on the roadmap — comes with the territory. That tension is structural and it never fully goes away. Recognizing it as part of the job is healthy; letting it harden into an us-versus-them mentality is not. Security and engineering look at the same problems from different angles, but there is one goal: ship secure software. Learn each other’s stack, understand the roadmap, show up as a collaborator rather than a reviewer. The security team engineers want to call is more effective than the one they’re required to consult.
8. Know when to stand down — and when to push back
The willingness to say “network-layer isolation is sufficient here” or “this threat is acceptable risk” is what earns credibility for the fights that matter. Security maximalism destroys trust; knowing when to stand down builds it.
When you do push back, come with data — exploit likelihood, realistic impact, cost to fix. And maximize the context you hand to developers: a finding with a clear severity rationale, a realistic attack scenario, and a suggested remediation gets acted on. A bare vulnerability ID with no explanation gets triaged into a backlog and forgotten. The goal isn’t to be right — it’s to be useful.
9. Disagree and commit — deliberately
Sometimes a feature ships with known security gaps. That’s a business decision, and it’s often the right one. The security team’s job in that moment isn’t to block or to silently acquiesce — it’s to make the decision deliberate: agree on the minimal security bar, add basic compensating controls, document the residual risk, and put the remediation work on the roadmap. Ship it, then follow through. The danger isn’t shipping with known gaps — it’s shipping with undocumented gaps and no agreed plan to close them.
10. Scale through systems, not headcount
A small security team can’t review everything a hundred engineers build. Security scales through parallel tracks:
- Enablement: templates, reference architectures, security champions, and training that make good security judgment transferable — so engineers make secure decisions without needing a security review at every turn.
- Automation: SAST, dependency scanning, secrets detection, security gates in CI/CD that run on every PR, and most recently, LLMs.
- Holistic remediation: when a vulnerability pattern surfaces in a functional area, drive an initiative to close the class — a shared library, a framework guardrail, a linting rule. Closing the class beats closing the tickets.
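To make the automation track concrete, here is a minimal sketch of a secrets-detection gate of the kind that might run on every PR. The regex patterns are deliberately simplistic and illustrative — a production scanner uses a vetted ruleset, entropy analysis, and allowlists — but the shape is the point: an automated check that fails the build without a human in the loop.

```python
import re
import sys

# Illustrative patterns only -- real scanners ship far larger, vetted rulesets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan(text: str) -> list[str]:
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    # CI gate: read the PR diff from stdin, block the merge on any finding.
    findings = scan(sys.stdin.read())
    for name in findings:
        print(f"blocked: {name} detected in diff")
    sys.exit(1 if findings else 0)
```

A gate like this scales because it runs on every change with zero marginal security-team effort — the team’s time goes into tuning the rules, not reviewing the diffs.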
11. Security success is invisible — until it is a failure
Security’s value is counterfactual by design. You’re selling the absence of bad outcomes, which is invisible until it isn’t. The “we didn’t get hacked — why do we even need a security team?” question is a predictable trap. When things are quiet, there’s nothing visible to point to; when something goes wrong, the case for security makes itself — but at too high a cost.
The measurement gap is real — work around it. SLA compliance, MTTR, vulnerability age, findings caught pre-production are useful signals, but proxies for value, not proof of it. Tell the risk reduction story proactively: here’s what we found before it became a breach, here’s how the attack surface changed over the past year, here’s what we closed before a researcher or an attacker got there first.
Security that only shows up in the numbers after an incident has already lost the framing war — and ironically, that’s often when companies make their first Product Security hire.
Done well, product security is invisible: engineers ship without friction, teams collaborate without tension, and executives make informed decisions without needing a crisis to focus them. Getting there is a journey — but not an impossible one.