Tacit Knowledge and the Future of Product Security
AI coding agents are changing how we do product security. They solve real problems with how we work today, and the teams figuring out how to collaborate with them are already more effective than product security teams have ever been. But this requires us to be honest about what changes and deliberate about where we focus next.
I manage two product security teams at GitHub. One builds security tooling; the other partners with engineering teams to help them ship secure features. For most of my career, this has meant working directly with people: teaching secure coding patterns, reviewing implementations, discussing tradeoffs. Some of that work is exactly what AI agents do well. The question I keep coming back to is: what should we focus on instead?
What AI Agents Actually Fix
Watch a developer implement authentication or input validation or secret management. They’re supposed to remember the right patterns, use the security scanner’s output correctly, and navigate existing paved path infrastructure. But in practice, they’ve forgotten half the edge cases, the scanner’s warnings are too abstract to be useful, and the internal documentation for the approved tools is either outdated or assumes knowledge they don’t have.
The current approach expects people to maintain perfect attention and perfect knowledge across large codebases. It doesn’t work. AI agents can hold more context than any human, process security patterns without relying on memory, and generate concrete implementations instead of abstract warnings. They’ll read poorly documented source code and figure out how to apply it. They don’t get tired or frustrated.
That’s a real improvement. I can’t pretend it isn’t.
What Gets Lost
When I help a team ship a feature now, it’s collaborative. We discuss threat models, sketch out approaches. Someone raises an edge case I hadn’t considered. We disagree about whether something is a vulnerability or just a code smell. The tradeoffs between security and usability require judgment calls that draw on different perspectives.
This matters for the outcome, but it also matters for everything else. It’s how I learn what product teams actually care about. It’s how they learn to think about security as more than a checklist. When something urgent comes up later, we already have relationships and shared context.
An AI agent implementing the secure version of that feature asynchronously has better technical context than I could provide in a meeting. But it doesn’t build relationships or create shared understanding. It solves the immediate problem and nothing else.
The AI approach is more efficient and more consistent. But the informal knowledge transfer, the relationship building, the shared understanding of why things matter—we can’t afford to lose those. Understanding why requires looking at what each side actually brings to the work.
Why the Human Part Still Matters
AI agents need explicit, structured context to do their work: what the code does, what the requirements are, what the constraints are. Give them that, and they’re remarkably effective.
But security work often requires another kind of knowledge. Michael Polanyi calls this tacit knowledge. Why does this system exist in the first place? What’s been tried before and why did it fail? How does this fit into organizational goals? What are the political constraints? Who needs to be convinced? It’s the knowledge you can’t document because you don’t know you have it until someone asks the right question.
AI agents handle structured context better than humans ever could. We’re still better at the tacit kind. That tacit knowledge is where product security teams need to double down. Not because it’s all we have left, but because it’s where the highest-leverage work has always been. We just couldn’t get to it because we were buried in implementation review.
What to Actually Do About This
Replace tactical touchpoints with strategic ones. When AI agents handle code review and implementation guidance, you lose the informal collaboration that built relationships and transferred knowledge. That wasn’t wasted time, so you need to replace it deliberately. But don’t just recreate code review in a different form. Shift earlier. Run threat modeling sessions before features are designed, hold office hours when teams are making architectural decisions, embed with product teams during planning cycles. The goal is influence when it matters most, not oversight when it’s too late.
Own the security properties of AI-generated code. Product teams are already using agents to write code. If your security team isn’t defining what “secure” means in that context, someone else is making those calls by default. Build the context documents that shape how agents think about security. Create verification workflows that catch what agents miss. Define the failure modes. You should be doing this now, not waiting until it becomes a crisis. The teams that build this expertise first will define the standards everyone else follows.
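As a concrete sketch of what a verification workflow might look like, here is a minimal post-generation check that flags risky patterns in the added lines of a diff. The patterns, function names, and messages are all hypothetical illustrations, not a real ruleset; a production version would pull rules from a maintained catalog and run alongside, not instead of, your existing scanners.

```python
import re

# Hypothetical ruleset: a few patterns agents commonly get wrong.
# A real deployment would load these from a maintained, versioned catalog.
CHECKS = [
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]+['\"]"),
     "possible hardcoded credential"),
    (re.compile(r"verify\s*=\s*False"),
     "TLS certificate verification disabled"),
    (re.compile(r"\bpickle\.loads?\("),
     "unsafe deserialization of untrusted data"),
]

def review_diff(diff_text: str) -> list[str]:
    """Flag added lines in a unified diff that match known-risky patterns."""
    findings = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):  # only inspect lines the change adds
            continue
        for pattern, message in CHECKS:
            if pattern.search(line):
                findings.append(f"{message}: {line.lstrip('+').strip()}")
    return findings
```

The point isn’t the regexes; it’s the shape of the workflow: an automated gate that encodes what your team means by “secure,” applied to every agent-generated change before a human ever looks at it.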
Make your paved paths actually good. AI agents will read your internal security libraries, your approved patterns, your documentation. They’ll use them correctly and consistently, which means they’ll amplify whatever’s wrong with them. If your authentication library has a footgun, agents will step on it at scale. If your secret management docs are unclear, agents will interpret them literally. Fix your infrastructure or watch agents multiply your technical debt.
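What “no footguns” looks like in practice: a paved-path helper where the safe behavior is the only behavior, so an agent reading the module can’t misuse it. This is a hypothetical sketch of an internal library, not any real GitHub tooling; the function names are illustrative.

```python
import hmac
import secrets

def new_session_token() -> str:
    """Generate an unguessable session token.

    No length or RNG parameters to get wrong: the one way to call it
    is the secure way.
    """
    return secrets.token_urlsafe(32)

def tokens_match(expected: str, provided: str) -> bool:
    """Compare secrets in constant time.

    Callers never reach for `==` on tokens, so an agent copying this
    pattern can't introduce a timing side channel.
    """
    return hmac.compare_digest(expected.encode(), provided.encode())
```

The design choice is that there is nothing to configure: an agent that reads this module and uses it “correctly and consistently” amplifies a safe default instead of a footgun.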
Where This Is Already Heading
The shift isn’t three to five years out. It’s happening now, and the evidence is in what’s already breaking.
At GitHub, product teams use Copilot to generate entire features, including the security-sensitive parts. The agents are good enough that the code often passes automated checks. The reviews that add the most value now are the ones that happen before any code gets written.
This matches what I’m hearing from security leaders at other companies. The volume of code under review is increasing faster than teams can scale. The tactical review work is already overwhelming, which means teams are either becoming bottlenecks or they’re letting things through. Neither option is sustainable. The teams that are adapting are moving upstream. They’re investing in threat modeling capacity, building security properties into design systems, and creating automated verification that works at the volume AI agents produce.
The goal is to make product security teams the experts on secure software development in an agentic world, not just the people who review what agents produce. This means building the tooling that shapes how agents write code, creating the verification systems that work at scale, and cultivating the organizational knowledge that helps teams make the right security tradeoffs before implementation. This is what differentiates the effective security teams from the ones getting routed around right now.
If your team is still primarily doing code review and implementation guidance, you have maybe 12 months before that work is mostly automated. The question is whether you’re building the capabilities that matter next, or waiting for someone else to define what product security becomes.
The Trade We’re Making
I’m going to miss some of what we’re leaving behind. The back-and-forth of working through a problem with another person. The moment when someone understands why a security control matters because we figured it out together. That kind of collaboration takes time and isn’t efficient, and I can see a future where it gets optimized away entirely. But it’s not gone yet, and while it’s still here, it matters.
The teams that create intentional touchpoints with product teams now will be more effective than the ones that optimize purely for efficiency. Not because efficiency is bad, but because security teams without relationships and shared context become isolated gatekeepers who get routed around when they’re inconvenient.
AI agents are helping us build more secure software, and product security teams need to embrace that while being deliberate about what we preserve.
The ones that wait for the playbook to emerge or optimize away all human interaction will wake up disconnected from the people they’re supposed to help. The shift is happening now.