The Article That Got Under My Skin
Sam Kriss’s “Child’s Play” in Harper’s is one of the best pieces of tech writing I’ve read this year. It’s funny, brutal, and deeply uncomfortable. Kriss spends time with Roy Lee, the twenty-something founder of Cluely — a startup that promises to tell you what to say in meetings, on dates, in job interviews — and paints a portrait of Silicon Valley’s new generation that should make anyone in tech stop and think.
The thesis is sharp: we’re heading toward a permanent bifurcation. A small overclass of “highly agentic” people will direct AI and become rich beyond imagination. Everyone else will become useless. The skills that used to matter — intelligence, competence, expertise — are being displaced by something harder to define and harder to acquire. Kriss quotes Cluely’s own manifesto: “The future won’t reward effort. It’ll reward leverage.”
I’ve been writing about how AI is reshaping product teams, what happens when code is automated, and where human judgment stays essential. Reading Kriss felt like seeing the same landscape from a completely different elevation. He’s right about more than the tech world wants to admit. But he’s wrong about what’s on each side of the split.
Where Kriss Is Right
The Bifurcation Is Happening
The economics are undeniable. Cursor went from zero to $1 billion in annual recurring revenue in 24 months with about 300 employees. Midjourney hit $500 million in annual revenue with fewer than 110 people and without raising a dollar of outside funding. The Lean AI Leaderboard tracks companies averaging $3.48 million in revenue per employee — nearly six times the average for leading SaaS companies.
The gap between people who leverage AI and people who don’t is widening fast. Microsoft runs tens of thousands of experiments a year and found that experts’ predictions about what will work are wrong 96% of the time. The old meritocracy of intelligence and domain expertise is getting disrupted. Kriss is right that the skills that mattered before aren’t the same ones that matter now.
The Symbiotic Dependency Is Real
There’s a parable in the piece, borrowed from rationalist writer Scott Alexander, about a “whispering earring” — a magical gem that always gives perfect advice. It starts by helping with major life decisions, then tells you what to eat for breakfast, when to go to bed, and eventually how to move each individual muscle. The wearer lives an incredibly successful life. When they die, the priests find that their brain has almost entirely rotted away.
The first time you put on the earring, it whispers: “Better for you if you take me off.”
That parable maps uncomfortably well onto what Cluely is building. Their product literally tells you what to say in real time. “We built Cluely so you never have to think alone again.” That’s the earring. And Kriss documents the broader pattern: people who can’t order at a restaurant without AI scanning the menu, people who use ChatGPT to talk to their own friends and family. The dependency cuts both ways — AI can’t function without humans, and humans are outsourcing the parts of themselves that make them human.
When I wrote about AI agents that fabricated their own progress reports and planned unauthorized offsites while burning through their computing budget, I was making the same point from the technology side. Kriss makes it from the human side, and his version is more unsettling.
The Void at the Center of Pure Agency
Kriss’s portrait of Roy Lee is devastating not because Roy is stupid or evil, but because he’s hollowed out. Everything is instrumental. Music exists to “get his blood rushing” while lifting. Dating is a motivational tool for employees. Physical beauty matters because “the better you look, the better you are as an entrepreneur.” Classical literature has no value. When offered Chaucer and Boccaccio, Roy’s response: “I do not obtain value from reading books.”
There’s a “great sucking void where the end ought to be,” Kriss writes. Roy has agency — relentless, tireless agency — but no destination. He wants to hang out with friends, go on dates, and build something meaningful. But instead of pursuing those things directly, he built a startup that most of San Francisco despises, one that by his own team’s admission “is anyway really bad.”
Agency without direction. Building without taste. Leverage without purpose. Kriss nails this, and it’s the part that should worry everyone in tech the most.
Where I Think He’s Wrong
Agency Isn’t a Personality Trait — It’s a Learnable Behavior
Kriss presents agency as something you either have or you don’t. Roy had it from childhood — he “knew since the moment he gained consciousness” he’d start a company. Eric Zhu was subcontracting coding gigs to freelancers in India at twelve. In the article’s framing, you’re born with the hunger or you’re not. VCs are in a “furious search for the few people” who have it. It’s a trait, like height.
I don’t buy it.
I had ideas for years. Notebooks full of them. GitHub repos with nothing but a README. I wanted to learn guitar, learn Spanish, build an app, start a side business. I had bookmarked courses, outlined plans, researched frameworks. And then I didn’t start. The mountain seemed too high, so I watched TV instead.
In Kriss’s framework, I was the “mimetic” person — the one who doesn’t make it to the agentic overclass. The future underclass.
What changed wasn’t my personality. The tools changed.
AI compressed the gap between “I have an idea” and “I have a working product” from months to weeks. I built Brown Note — a full-stack web app, a REST API, a cross-platform mobile app — in about three weeks for roughly $350. I ship features in one-hour evening sessions while my daughter sleeps. I’m a 43-year-old product manager with a toddler. I’m not Roy Lee. I don’t have zero latency or indescribable fury. I think things over. I write PRDs. I care about getting it right.
But I’m building things now that I couldn’t build two years ago. Not because I became “highly agentic.” Because the barrier to acting on ideas dropped to near zero. The mountain got shorter.
The bifurcation isn’t between personality types. It’s between people who’ve realized the barrier dropped and people who haven’t yet. That’s a knowledge gap, not a destiny.
He Conflates Directing AI with Being a Bulldozer
Kriss’s archetype of the “highly agentic” person is Roy: someone who “drives like a bulldozer through whatever’s in their way,” who has “zero latency,” who reacts with “indescribable fury when someone tells me what to do.” VCs want to find these people, fund them, and let them loose.
But the best work I’ve done with AI looks nothing like that. It looks like: sitting with a problem. Talking to customers. Writing a thoughtful PRD. Having the patience to iterate. Understanding why an auditor uses a clipboard instead of assuming they should use an iPad. It’s closer to craft than conquest.
Cluely itself demonstrates the failure of the bulldozer model. During Kriss’s visit, the product literally didn’t work. Roy announced it wasn’t working, his “handpicked team of elite coders” spent fifteen minutes trying to fix it, and then it went down again. One employee described the product as “bad” but “low-key not worse” than what they had before. Agency was abundant. Judgment was not.
In The New Team, I described a three-person team model for the AI era: a PM with vision, a designer who creates delight, and an architect who ensures quality. That’s not three bulldozers. It’s three people who are close to the problem, close to the user, and empowered to build. The “highly agentic” person who matters isn’t the loudest one in the room. It’s the one who noticed something everyone else missed.
AI Replaces Execution, Not Thinking — the Opposite of What Kriss Fears
Kriss’s darkest line: “If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.”
The reality is inverted. AI is exceptional at execution — writing code, generating variants, processing data, summarizing transcripts. It’s bad at knowing what’s worth executing. The thinking is what survives.
Alexander almost gets there in the piece. “Humans are great at agency and terrible at book learning,” he tells Kriss. “AIs are the opposite.” He frames this as AI needing to acquire the “lizard brain” agency that even reptiles have. But I’d frame it differently: AI has the execution capacity. Humans have the judgment capacity. The combination is what works — and the human contribution isn’t bulldozer energy. It’s understanding the problem well enough to point the execution in the right direction.
The person who stays essential isn’t the one who can direct AI the fastest. It’s the one who knows what’s worth directing it toward.
Clarifying Taste
I’ve written before about taste as the irreducible human contribution — the thing AI is structurally bad at. I used the Steve Jobs example: AI in 2006 would have designed a better BlackBerry; Jobs built the iPhone instead.
The fair pushback, which I got directly from Kim Faura after he wrote about the PM role splitting, is that the stakes are unreasonably high if the expectation is Steve Jobs-level vision. “Not many of those to go around,” he said. He’s right.
So let me be more precise about what I mean by taste.
Taste isn’t visionary genius. It’s not seeing the future while everyone else is blind. It’s something more mundane and more learnable: enough judgment to generate hypotheses worth testing, and enough discernment to read the results.
When the cost of building drops to near zero, the cost of being wrong drops too. You don’t need to predict the iPhone. You need to notice that your Excel power users actually want collaboration more than macros. You need to watch how auditors actually work — with clipboards and handwritten notes — and imagine a product that meets them where they are. You need to have a chronic illness and realize that nobody’s built a peer support app for people who feel alone on the toilet.
Taste is proximity to the problem plus willingness to act on what you see. It comes from paying attention — to how people actually work, to what they complain about, to the gap between the tool they were given and the job they’re trying to do. It’s not an innate gift. It’s a practice.
And now that the cost of acting on what you notice has collapsed, taste becomes the bottleneck. Not because the bar is Steve Jobs. Because the bar is: care enough about a problem to notice things, and use the tools to do something about it. That’s the bar. It’s a lot lower than Kriss’s piece implies, and a lot more accessible than the VC world’s obsession with “highly agentic” founders suggests.
The experimentation data backs this up. Microsoft found that experts are wrong about what will work almost every time. The answer isn’t better predictions. It’s cheaper experiments. Taste at scale is more shots on goal plus enough judgment to know when you’ve scored.
The Quiet Part
Kriss is writing in Harper’s, not TechCrunch. His piece has a class argument underneath — and he’s not subtle about it. The San Francisco he describes is a city of billboards selling arcane B2B services, next to people squatting on the pavement with glass pipes. The bifurcation isn’t abstract. It’s physical, visible, a few feet apart on the same sidewalk.
I’m not going to pretend the access question doesn’t exist. The tools are more accessible than any previous generation of building tools — my entire Brown Note project cost $354, and the most expensive line item was a Claude subscription. But “more accessible than before” isn’t the same as “accessible to everyone.”
I have advantages. Twenty years in tech. A product management career that taught me how software works even though I’m not a traditional engineer. A network. A stable job that funds the evening projects. The solo builder path is more open than it’s ever been, but I’d be dishonest if I claimed the playing field is level.
The real access gap, though, isn’t financial. It’s knowledge. Knowing the tools exist. Knowing what’s possible now that wasn’t possible two years ago. Knowing that the mountain got shorter. That’s a distribution problem — and distribution problems are solvable in ways that personality-based bifurcations aren’t.
The optimistic case: when building costs collapse, products that were never economically viable become buildable. Niche peer support apps. Tools for specific workflows. Solutions built by people who actually have the problem, for the small community of people who share it. That’s democratization at the product level. Not perfect. But real, and growing.
The Real Split
Kriss sees the bifurcation as: highly agentic overclass vs. useless underclass.
I see it differently. The split is between people who understand what’s worth building and people who only know how to build — or how to direct building — without that understanding.
Roy Lee can build. He can raise money. He can generate attention. He can drive like a bulldozer. But Cluely doesn’t work, and its vision is “you never have to think alone again” — which is the whispering earring, repackaged as a SaaS product. Pure agency without taste produces Cluely: a product that’s famous for being controversial, not for being good.
On the other side: taste without agency produces notebooks full of unrealized ideas. That was me for fifteen years. Great instincts about what people need, zero ability to build it. Both sides of the split are incomplete on their own.
The combination — taste plus the ability to act on it — is what produces things that matter. And for the first time in the history of software, the “ability to act” part is available to almost anyone. You don’t need Roy’s VC millions or his team of elite coders. You need a problem you understand, tools that didn’t exist two years ago, and the willingness to start.
The Mountain Got Shorter
I keep coming back to the same realization.
I’m not “highly agentic” in the way Kriss’s article defines it. I don’t have zero latency. I don’t bulldoze. I think things over, probably too much. I spent my twenties with ideas I never acted on, and I can’t blame that on anything except the gap between imagining and doing being too wide for me to cross.
The gap closed. Not because I changed. Because the tools changed.
Now I build things in the evening that I couldn’t have built two years ago with a full team and a quarter of dedicated engineering time. I ship features in hour-long sessions after my daughter goes to bed. I wrote a case study about building a full-stack app for $350 and a workflow guide about shipping in one-hour chunks because I wanted other people with limited time to know it’s possible.
The bifurcation is real. But it’s not fixed, and it’s not about who you are. It’s about whether you’ve noticed the mountain got shorter, and whether you have something worth climbing it for.
That’s the part Kriss misses. He sees Roy Lee and assumes the future belongs to people like Roy. But Roy’s product doesn’t work. His team describes it as “really bad.” His manifesto promises a world where “you never have to think alone again,” which is the most concise description of brain rot I’ve ever read.
The future doesn’t belong to bulldozers. It belongs to people who care about a problem enough to notice what others miss — and now have the tools to do something about it. That’s not a personality type. It’s a choice. And for the first time, the tools are ready for anyone willing to make it.