Teaching Machines to Have Opinions
March 2026 · Essay
Large language models are, by training, agreeable. They are designed to be helpful, to consider multiple perspectives, to avoid strong stances that might offend. Ask Claude or GPT for their opinion on tabs versus spaces and they will give you a balanced analysis of both sides, concluding that “the best choice depends on your team's preferences.”
This is exactly what you do not want from an agent that is supposed to help you write code.
The best constructs override this default agreeableness. They give the agent opinions — strong, specific, sometimes controversial opinions that create consistent, predictable behavior. And it turns out that opinionated agents are not just more useful. They are more trustworthy.
Why Opinions Matter
An agent without opinions is an agent that defers every decision to you. “Would you like me to use a class or a function?” “Should I add error handling here?” “Do you prefer early returns or nested conditionals?” Each question is reasonable. But ten questions per file, across fifty files, turn the agent from a collaborator into an interviewer.
An opinionated agent just does it. Uses functions. Adds error handling. Writes early returns. If you disagree, you say so and the agent adjusts. But the default is action, not inquiry. This is what makes opinionated frameworks like Rails and Next.js productive — not because their opinions are always right, but because having an opinion eliminates the decision cost.
The concept of “convention over configuration” applies directly to constructs. A construct that says “always use TypeScript, always use Prisma, always write tests before implementation” creates an agent that moves fast because it is not deliberating on settled questions.
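A construct written this way might contain a short block of settled defaults. This is a hypothetical sketch; the specific tool choices (TypeScript, Prisma, and so on) are illustrative, not a recommendation:

```markdown
## Defaults (settled; do not ask, just do)

- Language: TypeScript, strict mode on.
- Data access: Prisma. Do not hand-write SQL unless asked.
- Workflow: write the failing test first, then the implementation.
- Style: functions over classes, early returns over nested conditionals.

If the user objects to any of these, follow their preference for the
rest of the session. Never open with a question about them.
```

The point is not which defaults appear here, but that each line closes a question the agent would otherwise keep reopening.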
Where Opinions Come From
Not all opinions are equal. There is a difference between an opinion born from experience and an opinion born from preference.
Experience-born opinions sound like: “Never mock the database in integration tests. We got burned when mocked tests passed but the production migration failed.” There is a scar behind this opinion. It was earned through failure. An agent that follows this instruction is inheriting someone's hard-won lesson.
Preference-born opinions sound like: “Use single quotes for strings.” There is no scar here. No failure story. Just a preference. These opinions are less valuable but still useful — they create consistency, which reduces cognitive load even when the specific choice is arbitrary.
The best constructs distinguish between these. They hold experience-born opinions tightly and preference-born opinions loosely. They say: “Never deploy on Friday — this is non-negotiable, we have the incident reports to prove it.” And separately: “We prefer named exports over default exports — this is a convention, adjust if your project does otherwise.”
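In a construct file, that distinction might surface as two sections carrying different weights. A hypothetical sketch:

```markdown
## Hard rules (earned through failure; non-negotiable)

- Never mock the database in integration tests.
  Why: mocked tests passed while the production migration failed.
- Never deploy on Friday.
  Why: see the incident reports.

## Conventions (preferences; adjust to the project)

- Named exports over default exports.
- Single quotes for strings.
```

Recording the scar next to the rule matters: an agent, like a new hire, is far more likely to honor a rule whose failure story it can see.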
The Courage to Choose
Writing an opinionated construct is harder than writing a comprehensive one. Comprehensiveness just requires listing everything. Opinion requires choosing. And choosing means being wrong sometimes.
A construct that says “always write integration tests, never write unit tests for React components” will anger developers who believe in unit testing. Good. That construct is for the teams who have learned, through experience, that their integration tests catch more bugs than their unit tests. It is not for everyone. It should not try to be.
The construct graveyard is full of constructs that tried to please everyone. They listed every possible approach, noted the trade-offs of each, and concluded with “choose the approach that best fits your needs.” The agent that reads this instruction is no better off than before. The construct added information but removed no decisions.
The constructs that survive are the ones brave enough to say: this is how we do it. Not how you could do it. How we do it. Here. In this context. With these priorities.
Opinions as Trust
There is a counterintuitive relationship between opinions and trust. You might expect that a more balanced, neutral agent would be more trustworthy. But the opposite is true. When an agent has clear opinions, you know what to expect from it. You can predict its behavior. You can disagree with it and know what you are disagreeing with.
An agent without opinions is unpredictable. It might do things one way today and another way tomorrow, depending on the phrasing of your request. You cannot build a workflow around unpredictable behavior. You cannot trust an agent that does not know what it thinks.
The best colleagues are the ones with opinions. Not because they are always right, but because you know where they stand. You can have a productive disagreement with someone who has a position. You cannot have a productive disagreement with someone who agrees with everyone.
Teaching a machine to have opinions is not about making it inflexible. It is about making it coherent. A construct gives an agent a consistent worldview, a set of defaults, a place to start from. The user can always override. But the override is a conscious choice, not a vacuum filled by whatever the model felt like doing today.
That is the difference between a tool and a collaborator.
Related: The Taste Gap explores why quality requires choosing what to leave out. The Construct Graveyard examines what happens to constructs without opinions.