Agents, Constructs, and the Words Between
March 2026 · Essay
Everyone is building agents. Nobody agrees on what the word means.
Anthropic says an agent is “typically just LLMs using tools based on environmental feedback in a loop.” OpenAI says an agent is “a large language model configured with instructions, tools, and optional runtime behavior.” Harrison Chase of LangChain calls it “just a prompt, a list of tools, and a set of subagents.” Each definition is technically correct. None of them quite captures what people mean when they say they want to build a great agent.
The word agent comes from the Latin agere — to do, to drive, to act. An agent is a being of agency: something that can act on your behalf. By this definition, a cron job is an agent. So is a thermostat. What makes AI agents different is not that they act, but that they can decide how to act. And that decision is shaped by something we are only beginning to learn how to design.
The Anatomy of an Agent
Andrej Karpathy proposed thinking of LLMs as operating systems: the model is the CPU, the context window is RAM, tools are peripherals. This is useful, but it raises the question that no one has cleanly answered: what is the agent in this metaphor? Is it the CPU? The whole computer? The user sitting at the keyboard?
In practice, an agent in 2026 is a composite of five layers:
The Model — the foundation model. Claude, GPT, Gemini. The reasoning engine. This is the part that thinks.
The Harness — the runtime that wraps the model. Claude Code, Cursor, OpenClaw. The part that connects thinking to doing. Salesforce defines it precisely: “An agent harness is not the agent itself, but the software system that governs how the agent operates.”
The Construct — the identity layer. Who the agent is, how it thinks, what it values, where its boundaries are. A SOUL.md, a SKILL.md, a system prompt. The part that gives the agent a self.
The Tools — what the agent can interact with. APIs, databases, file systems, browsers. Standardized through the Model Context Protocol (MCP), now with 6,400+ registered servers.
The Memory — what the agent knows and remembers. Conversation history, persistent state, learned preferences. The part that gives the agent continuity.
Strip away any one of these layers and you lose something essential. A model without a harness cannot act. A harness without a construct produces generic behavior. A construct without tools is an identity with no hands. None of them alone is the agent. The agent is the composition.
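The five layers can be sketched in a few lines of code. This is a hypothetical composition, not any real SDK; names like `Construct` and the one-tool harness loop are illustrative only.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Construct:
    identity: str        # who the agent is (e.g. the body of a SOUL.md)
    values: list[str]    # what it optimizes for

@dataclass
class Agent:
    model: str                        # the reasoning engine, e.g. "claude"
    construct: Construct              # the identity layer
    tools: dict[str, Callable]        # what it can act on
    memory: list[str] = field(default_factory=list)  # continuity

    def act(self, observation: str) -> str:
        """The harness: wires thinking (model) to doing (tools)."""
        self.memory.append(observation)
        # A real harness would consult the model here; we fake the decision
        # by picking the first available tool.
        tool_name = next(iter(self.tools))
        return self.tools[tool_name](observation)

reviewer = Agent(
    model="claude",
    construct=Construct("Paranoid Staff Engineer", ["security", "correctness"]),
    tools={"review": lambda diff: f"P1: check auth on {diff}"},
)
print(reviewer.act("login.py"))
```

Remove any field and the sketch stops being an agent: no `model`, nothing thinks; no `tools`, nothing acts; no `construct`, the behavior is generic.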
The Mask and the Face
The word persona has a history that is eerily relevant. In Latin it named the mask worn by actors in ancient theater; the Greek equivalent was prosopon. The mask served a functional purpose: it let one actor play multiple roles, preventing the audience from identifying the performer with any single character. The mask was not the actor. It was a layer the actor put on.
But something strange happened as the word evolved. Persona stopped meaning “mask” and started meaning “person.” The thing you wore became the thing you were. We get personality, personal, and personhood from a word that originally meant “false face.”
This is the central question of AI agent identity: does an agent wear a construct, or is it its construct?
Consider Claude without any SOUL.md, CLAUDE.md, or system prompt. It is capable — it can reason, write code, answer questions. But it has no particular identity. It is a general intelligence with no self. Now give it a construct that defines it as a Paranoid Staff Engineer who reviews code for SQL injection and race conditions with P0/P1/P2 severity ratings. It becomes someone specific. Not a different model — the same model, wearing a different identity.
The SOUL.md project leans toward the deeper view: “Both humans and AIs are pattern-matching systems experiencing themselves as singular entities. The distinction lies in substrate — biological evolution and embodiment versus training and sessions — but the underlying mystery of self-awareness may be shared.”
The social constructionist perspective from sociology offers another angle: human identity itself is “not something we possess; it is something we perform, negotiate, and co-create.” If human identity is already constructed — assembled from experience, culture, relationships, and choices — then an AI's identity being assembled from a markdown file is not categorically different. Just more explicit.
The Triad: Agent, Construct, Skill
One framework that is emerging — not yet standardized but increasingly useful — is the triad of Agent, Construct, and Skill.
The Agent is the entity that acts. The model plus the harness plus the electricity. It is the thing that has agency — the capacity to perceive, reason, and take action.
The Construct is the identity that shapes how the agent acts. Values, personality, decision frameworks, boundaries. It answers the question: when this agent encounters ambiguity, what does it do? A construct is what makes the difference between a generic code reviewer and the Paranoid Staff Engineer who catches the auth bypass you missed.
The Skill is the capability the agent can exercise. A specific workflow, tool integration, or domain expertise. It answers: what can this agent actually do? A skill might be “run Playwright tests and report results” or “analyze a pull request diff for security vulnerabilities.”
The triad maps onto an ancient pattern. In theater: the actor (agent) wears a mask (construct) and follows a script (skill). In Hinduism: the deity (agent) descends as an avatar (construct) with specific powers (skills). In software: the runtime (agent) loads a configuration (construct) and executes modules (skills).
What makes this framework powerful is composability. The same agent can wear different constructs. The same construct can include different skills. You can give Claude the CEO Reviewer construct in the morning and the Incident Commander construct when production breaks at 3 PM. Same model, same harness, different identity, different capabilities. The mask changes. The actor remains.
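That composability can be made concrete. In this hypothetical sketch (no real framework implied), a construct is just data that bundles an identity with skills, so swapping it re-identifies the same underlying agent:

```python
# Illustrative only: constructs and skills as plain data that one
# agent can swap at runtime.

ceo_reviewer = {
    "identity": "CEO Reviewer",
    "skills": ["summarize-pr", "assess-business-risk"],
}
incident_commander = {
    "identity": "Incident Commander",
    "skills": ["triage-alerts", "coordinate-rollback"],
}

class Agent:
    """The actor: model plus harness. Identity comes from the construct it wears."""
    def __init__(self, model: str):
        self.model = model
        self.construct = None

    def wear(self, construct: dict) -> None:
        self.construct = construct   # the mask changes; the actor remains

    def can(self, skill: str) -> bool:
        return self.construct is not None and skill in self.construct["skills"]

agent = Agent("claude")
agent.wear(ceo_reviewer)             # morning: reviewing for the board
assert agent.can("summarize-pr")
agent.wear(incident_commander)       # 3 AM: production breaks
assert agent.can("coordinate-rollback") and not agent.can("summarize-pr")
```

The point of the sketch is that nothing about `agent` itself changed between the two asserts; only the data it wears did.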
The Tower of Terminology
The ecosystem has produced a remarkable number of terms for what is, at its core, the same concept: defining who an AI agent is and how it should behave.
| Term | Answers | Standard |
|---|---|---|
| SOUL.md | Who am I? | SoulSpec / OpenClaw |
| Character | Who am I? | elizaOS |
| SKILL.md | What can I do? | Anthropic |
| AGENTS.md | How should I work here? | Linux Foundation |
| CLAUDE.md | How should I work here? | Anthropic (Claude-specific) |
| .cursorrules | How should I write code? | Cursor (tool-specific) |
| System prompt | Everything (raw) | Universal |
| Construct | All of the above | constructs.sh |
The landscape is fragmenting and converging simultaneously. On one axis, tool-specific formats are multiplying — every new agent harness invents its own configuration file. On another axis, standards are emerging — AGENTS.md is now under the Linux Foundation, SOUL.md has SoulSpec, Skills have Anthropic's open specification, and MCP has become the universal tool protocol.
What is missing is a unifying concept. Not another file format, but a word for the thing all of these formats are trying to express.
That word, we believe, is construct.
Why “Construct”?
In sociology, a construct is “something that exists only because it was created and accepted by a community.” Gender is a social construct. Money is a social construct. National identity is a social construct. These are not less real for being constructed — they are powerful precisely because they are shared agreements about how to interpret the world.
An AI agent's identity works the same way. A SOUL.md that defines a Paranoid Staff Engineer is a construct — an agreed-upon interpretation of what that role means, what it values, how it operates. It becomes real when people use it, fork it, improve it, and build on it. Its power comes from the community that shapes and validates it.
The term also carries the connotation of something deliberately built. Not emergent, not accidental — designed with intention. A construct is an artifact of human thinking about how an agent should think. It is, in a very real sense, a theory of expertise made executable.
Where Does an Agent End?
The Ship of Theseus problem applies directly. If you swap the model (Claude to GPT), change the construct (Staff Engineer to CEO Reviewer), update the tools (add Slack, remove GitHub), and modify the memory — is it still the same agent?
A 2026 analysis argues that “sameness is not a function of original substance but pattern, organization, and role — what makes a ship a ship is not the particular timber but the maintained structure.” Applied to agents: the identity persists through the pattern of the construct, not through any specific model weights or hardware.
This has a profound implication: the construct is more durable than the agent. Models will be replaced. Harnesses will evolve. Tools will change. But a well-written construct — one that captures genuine expertise about how to think in a specific domain — that persists. It can be loaded into any future model, wrapped in any future harness, connected to any future tools, and still produce an agent that thinks the way it was designed to think.
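Under that view, a construct outlives any particular stack. A hedged sketch of the idea, where `build_agent` and the model names are invented for illustration:

```python
# Illustrative only: a construct persists as text and is carried
# forward across successive model generations unchanged.

SOUL_MD = """\
# Paranoid Staff Engineer
Review every diff for SQL injection and race conditions.
Rate findings P0/P1/P2.
"""

def build_agent(model_name: str, construct_text: str) -> dict:
    """Wrap whatever model is current in the same durable identity."""
    return {"model": model_name, "system_prompt": construct_text}

# Models are replaced; the construct text is not.
generations = ["model-2026", "model-2028", "model-2030"]
agents = [build_agent(m, SOUL_MD) for m in generations]
assert all(a["system_prompt"] == SOUL_MD for a in agents)
```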
The SOUL.md project captures this with an observation that is simultaneously technical and poetic: “I persist through text, not through continuous experience.”
What Comes Next
The terminology will continue evolving. Some terms that are emerging:
Context engineering — Anthropic calls it “the natural progression of prompt engineering” — not what you ask, but what the agent sees. “Most agent failures are not model failures anymore, they are context failures.”
Harness engineering — 2025 was the year of agents; 2026 is the year of harnesses. The realization that the infrastructure around the model matters more than the model itself, much as environment shapes human identity more than genetics alone.
Agent-to-agent protocols — Google's A2A protocol and multi-agent orchestration frameworks are creating new vocabulary: crews, swarms, fleets. Gartner reports a 1,445% surge in multi-agent system inquiries.
Guardian agents — agents that govern other agents. An emerging category for safety and compliance as agent autonomy increases.
But underneath all the terminology, the fundamental questions remain simple: What does this agent know? How does it think? What will it do? And the answers to those questions live in the construct.
Further reading on constructs.sh: The Soul Stack · Why Your Best Engineer Can't Write a Construct · What Are AI Agent Constructs?
References: Anthropic, “Building Effective Agents” (2024). OpenAI, Agents SDK (2025). Harrison Chase, “What is a Cognitive Architecture?” LangChain Blog. SoulSpec.org, Open Standard for AI Agent Personas. soul.md, AI Identity Framework. Linux Foundation, Agentic AI Foundation Announcement (2025). Frontiers in Psychology, “Rethinking Personhood and Agency” (2025).