The Safety Leak | Article 4: The Feedback Risk
I used to believe that “Radical Candor” was the silver bullet for engineering performance. I told my team to “be direct,” “don’t sugarcoat it,” and “challenge directly.” I thought I was building a culture of high standards.
I was actually building a culture of High Social Risk.
I’ve realized that when I ask for “brutal honesty” without first establishing a massive buffer of safety, it usually just ends up being brutal. In an engineering context, where many of us are already prone to being overly analytical or blunt, “direct feedback” can easily become a weapon used to win arguments rather than a tool to improve code.
The Weaponization of Critique
In a low-safety environment, feedback isn’t seen as data; it’s seen as a threat to status.
If I tear apart a junior dev’s PR in a public Slack channel under the guise of “maintaining quality,” I am not teaching them. I am humiliating them.
- The Intention: I want to fix the bug.
- The Result: The dev learns that showing their work is dangerous. Next time, they’ll spend three extra days “polishing” (hiding) their code before asking for a review, or they’ll stop taking risks altogether.
Feedback is like high-voltage current. If the “wiring” (the relationship) isn’t insulated with safety, the system just shorts out.
The “Correctness” Trap
I have a tendency—one I see in many senior leads—to prioritize being right over being helpful.
When I critique a design, am I doing it to make the product better, or am I doing it to show off my own architectural depth? If I am “correct” but I destroy the engineer’s confidence along the way, I’ve lost. I’ve fixed a local bug but introduced a global system leak.
I’ve noticed that when feedback feels like an attack, the recipient’s brain moves into defensive mode. They stop listening to the technical logic and start planning their counter-attack. The learning stops immediately.
Why “Nice” Isn’t the Answer
I want to be clear: the solution isn’t to be “nice” or to stop giving feedback. That’s just another kind of leak. “Ruinous Empathy” is just as bad as “Obnoxious Aggression.”
If I see a flaw and stay silent because I don’t want to hurt feelings, the flaw ships anyway. The goal is High Challenge + High Safety. I want my team to be able to say, “This code is a mess,” without the author hearing, “You are a bad engineer.”
How I Patch the Feedback Leak
I have to change the environment so that critique is seen as a gift to the system, not a judgment of the person.
- I critique the “Artifact,” not the “Author”: I never say “You did this wrong.” I say “This implementation has a bottleneck at X.” It’s a subtle shift from “Who” to “What,” but it changes whether the feedback lands as information or as an attack.
- I keep the “Praise-to-Critique” Ratio high: If the only time an engineer hears from me is when they’ve messed up, they will eventually see my name on a notification as a threat. I have to build the “safety buffer” during the quiet times.
- I ask for permission: Before dropping a heavy architectural critique, I ask: “I have some thoughts on the scalability here; are you in a place to walk through them now?” It gives the engineer a sense of agency and lowers the “threat” response.
The Diagnostic
I look at the last few PR reviews or 1:1s I conducted:
- Was the recipient defensive? (If yes, I likely failed to ground the feedback in safety).
- Did the feedback result in a better system, or just a discouraged human?
- Do people come to me early with their half-baked ideas?
If people only show me “finished” work, it’s because the risk of getting feedback during the process is too high. I have a leak.