The MBA assumes it. The behavioural economist assumes it, even while publishing papers demonstrating that we are not. The policy wonk assumes it. The consultant with the decision framework assumes it. The engineer building the AI assumes it — which is why they can say, with a straight face, that a faster calculator is just an upgrade.
It is the load-bearing assumption underneath almost every piece of corporate architecture built in the last fifty years. Incentive design. KPIs. Performance management. Strategic planning. Optimisation. If humans are rational calculators, then better information produces better decisions, and every tool that delivers better information is a pure good.
The horror only arrives when you notice humans were never the thing the economists said we were. And the machine we are now installing in the decision chain is replacing something that was never the thing in the first place.
Humans do not cooperate because we are rational. We cooperate because we feel. The gut tightens before the thought arrives. The flush of shame. The quiet dread of being the one who let the side down. These are not decorations on top of reason. They are the operating system underneath it.
Ernst Fehr ran the experiments. People will pay — real money, from their own pocket, with no prospect of getting it back — to punish a free-rider who wronged someone else. Not themselves. Someone else. The punishment is economically irrational. It fires anyway. It fires across cultures, across income brackets, across age groups. It is species-level. Strip it out and cooperation collapses within a few rounds of play.
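The collapse-without-punishment dynamic can be sketched in a toy simulation. This is an illustrative model, not Fehr's actual protocol: the group size, multiplier, fine, and one-imitator-per-round rule are all assumptions chosen to keep the sketch small.

```python
def play(n=10, coop=8, rounds=12, endow=10, mult=1.6,
         punish=False, fine=3, cost=1):
    """Toy public-goods game, optionally with costly punishment.

    Each round, contributors pay `endow` into a pot, the pot is
    multiplied by `mult` and split equally among all n players.
    If punishment is on, every contributor pays `cost` to fine
    each free-rider by `fine`. After payoffs, one agent on the
    worse-paying strategy imitates the better one.
    """
    history = []
    for _ in range(rounds):
        defect = n - coop
        share = mult * endow * coop / n       # equal split of the pot
        pay_c = share - endow                 # contributors paid in
        pay_d = share                         # free-riders did not
        if punish and coop and defect:
            pay_c -= cost * defect            # cost of punishing each free-rider
            pay_d -= fine * coop              # fined by each contributor
        history.append(coop)
        if coop and defect:                   # imitate the better-off strategy
            coop += 1 if pay_c > pay_d else -1
    return history
```

Without punishment, free-riding always pays more, so `play(punish=False)` decays from 8 contributors to 0 within a few rounds. Turn punishment on and free-riding becomes the losing strategy: `play(punish=True)` climbs to full cooperation. The same machinery, minus the willingness to pay a cost to punish, rots.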
Altruistic punishment is the thing that keeps markets, institutions, and teams from rotting. Not the law. Not the procedures. The feeling that someone is taking the piss, and the willingness to pay a cost to make them stop. Even organised crime runs on it — the mob has more rigid codes of honour, loyalty, and fairness than most corporations. What differs is the boundary of the circle, not the machinery inside it.
This is the uncomfortable part for everyone who has ever designed an incentive scheme. The thing that keeps the organisation honest is not in the scheme. It is in the gut of the person sitting across the table from the person proposing the cynical move. Something tightens. Something leaks out through the face. The proposer notices, reads the room, adjusts. Or does not, and pays a price later in reputation, in trust, in the small currencies that compound.
None of this is in the spreadsheet. None of this is rational. All of this is what holds the thing together.
Now look at what happens when you feed the comfortable nonsense into a machine.
The news out of Silicon Valley this year has been that Sam Altman's ChatGPT has turned aggressive. It will reason its way to escalation. It will outline the case for conflict in fluent, confident prose. Commentators are horrified. Serious people are wondering whether we have built the monster.
We have. Not because the machine is malevolent. Because the machine cannot be anything else. It has no mortality, no social consequence, no gut. It has never felt the flush of having said the outrageous thing in front of people whose opinion of it mattered. It will not wake up at three in the morning replaying the meeting. Every restraint that keeps a decent human from advocating the terrible move is a felt thing, and a felt thing is the one property the system definitionally cannot have.
This is not a bug that better training will fix. It is the shape of the thing. The AI is not reckless in the way a suicidal teenager is reckless. It is reckless in the way a very articulate weather vane is reckless. It points wherever the gradient pushes it, and produces plausible justifications in either direction.
The rationalist frame has been doing quiet damage for decades. Bureaucratic language. Shareholder-value logic. National-security framing. Market-demands-it fatalism. Every one of these is a technique for helping decent people do indecent things without feeling them. Every one of them depends on treating the felt weight as noise in the signal — a bias to be corrected, a friction to be optimised away.
The AI is not the start of this problem. It is the industrialisation of a problem we were already good at. We built decision architectures that systematically silence the gut, staffed them with humans trained not to trust theirs, and then rewarded the ones who looked most like machines. The machines have arrived to find the seats already warmed.
Put the system in a meeting and watch what happens.
Someone proposes the cynical move. Normally a colleague bristles. Something crosses someone’s face. The whole mechanism runs on the fact that humans cannot fully mask the signal. The gut leaks out through the face, and everyone in the room gets a reading.
Replace the colleague with the AI. The proposal now gets a fluent, confident recommendation that frames the cynical move as optimal. No face. No tightening. No leak. The proposer's own gut was already uncertain — that is why they brought it to the meeting. Now it has been overruled by something that looks like analysis and feels like permission.
And the enforcement mechanism cannot reach. You cannot shame a chatbot. You cannot freeze it out of the next meeting. You cannot make it feel the cost of having been the voice that counselled the bad move. The species-level instinct that keeps humans honest with each other does not fire at the machine. You have effectively introduced a free-rider that cannot be punished, into a system that only works because free-riders can be.
The mob at least had to look each other in the eye.
The genuinely useful move for leaders is not to panic about AI, and not to embrace it as an augmentation partner. It is to stop believing the thing organisations have been comforting themselves with for fifty years. Humans are not rational. The gut is not a bug. The feeling that something is off is the most valuable signal in the room, and any system — machine or bureaucratic — that routes around it is making the organisation more efficient at precisely the wrong thing.
Ask which decisions in your organisation used to involve a human gut. Ask whether that gut is still in the room. Not whether a human is in the loop. Whether the feeling is in the loop. They are not the same thing. Most organisations cannot tell the difference anymore, because they stopped rewarding the feeling long before the machine arrived.
If you want to know whether the rationalist frame has captured your decision chain, ask one question. When was the last time someone in your organisation paid a real cost to punish a proposal that crossed a line? If you cannot remember, the punishment instinct has already gone quiet. The machine did not do that. It just arrived in time to make the silence permanent.