An AI agent needs more than permission to act. It needs an identity that other systems can recognize, constrain, log, and revoke. Without that identity, delegated work tends to hide inside someone else’s account, a shared automation token, or a vague label in a dashboard. The agent may appear useful for a while, but when a record changes, a message is sent, or a tool fails, the organization is left asking a basic question: who did this?
Agent identity is the answer the system gives before it decides whether an action is allowed. It is the name on the request, the service account behind the token, the actor in the audit log, and the boundary that separates a human’s authority from a delegate’s authority. AI Agent Permissions explains the ladder from reading to acting. Identity is the name engraved on each rung of that ladder.

This distinction matters because agents are often introduced through convenience. A team connects a tool, pastes an API key, grants a broad workspace role, and starts testing. That can be reasonable in a sandbox. It becomes fragile when the same pattern reaches production. A human account was designed around a person who can be contacted, managed, offboarded, and held responsible. An agent is a delegated system. It needs a narrower, more explicit shape.
Identity is not the same as access
Access answers what an agent can do. Identity answers who the system believes is asking. Those questions are related, but they should not collapse into one another. A database role, a ticketing account, a calendar grant, and a repository token may all allow action, yet none of them automatically explain whether the actor is a person, a scheduled job, a support agent, a coding agent, or a review assistant acting on behalf of a person.
When identity and access are mixed together, teams often end up with two weak patterns. In the first, the agent uses a human’s account. The audit log shows Alice edited a document, even though Alice delegated the edit and may not have seen the exact change until later. In the second, many agents share one automation account. The log says automation changed a record, but it does not say which workflow, model, prompt version, tool, or approval path led to the change. Both patterns make debugging and accountability harder than they need to be.
A better design gives the agent its own identity and then binds that identity to a scope. The account might be called a service account, app identity, workload identity, bot user, or integration identity depending on the system. The label matters less than the behavior. The identity should be distinguishable from humans, tied to a specific workflow or class of workflows, granted only the access it needs, and visible in logs wherever it acts.
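The shape described above can be made concrete with a small sketch. Everything here is hypothetical (the `AgentIdentity` structure, the scope strings, the name `agent-support-drafter`); the point is only that the delegate is a distinct principal, bound to a workflow and a narrow scope set, and that access checks run against that principal rather than a borrowed human account.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A delegate principal, distinct from any human account."""
    name: str                # e.g. "agent-support-drafter"
    kind: str                # "service_account", never "human"
    workflow: str            # the class of work this identity performs
    scopes: frozenset = field(default_factory=frozenset)

def can_call(identity: AgentIdentity, required_scope: str) -> bool:
    # Access is decided against the agent's own identity and scopes,
    # not against whatever a shared or human account happens to allow.
    return identity.kind == "service_account" and required_scope in identity.scopes

drafter = AgentIdentity(
    name="agent-support-drafter",
    kind="service_account",
    workflow="support-reply-drafting",
    scopes=frozenset({"tickets:read", "drafts:write"}),
)
```

Because the identity carries its workflow and scopes explicitly, logs and reviewers can see what kind of actor is asking, not just whether some credential happened to work.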
This does not remove human responsibility. It clarifies it. A support lead may authorize an agent to draft replies. An engineer may approve a coding agent’s pull request. A manager may allow a scheduling delegate to propose calendar changes. The agent identity records the delegated action while the workflow records the human authorization around it.
Shared human accounts create false evidence
Using a human account for an agent can feel harmless because the account already has the right permissions. It avoids a configuration step and lets the agent see the same interface the person sees. The cost appears later, when the record of work becomes misleading.
Imagine a sales operations agent updating account notes after a call. If it runs as the account owner, every change looks personal. A teammate reviewing the history cannot easily tell which notes were typed by the owner, which were pasted from a meeting summary, and which were produced by a delegated workflow. If the agent misclassifies a field, the trail points to the person, but the real fix may belong in the agent’s source grounding, tool contract, or approval path.
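One way to keep that history honest is an audit entry that records the true actor alongside the person whose authority was delegated. The field names below (`actor`, `on_behalf_of`) are illustrative, not any particular system's schema.

```python
# Hypothetical audit entries: "actor" records who actually made the change,
# while "on_behalf_of" preserves the human whose authority was delegated.
history = [
    {"field": "notes", "actor": "alice", "on_behalf_of": None},
    {"field": "notes", "actor": "agent-sales-notes", "on_behalf_of": "alice"},
]

def delegated_changes(entries):
    # A reviewer can now separate typed edits from delegated ones
    # instead of seeing every change attributed to the account owner.
    return [e for e in entries if e["on_behalf_of"] is not None]
```

With this shape, a misclassified field points the fix toward the agent's workflow rather than toward the person whose name happens to be on the account.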
The same problem appears in software work. If a coding agent pushes under an engineer’s credentials, the repository may show a familiar author, but the operational story is incomplete. Did the engineer write the patch, supervise it, or merely start a run? Which tests did the agent run? Was there a checkpoint before the commit? Was the branch created by the agent or by the person? AI Agent Observability is much easier when the actor in the system matches the actor in the trace.
Shared accounts also complicate offboarding. If a person leaves a team, their credentials should stop working. If an agent depends on that account, the organization faces an awkward choice between breaking the workflow and keeping a departed person’s access alive. A separate agent identity avoids that trap. The human can be offboarded without touching the delegate’s service credentials, and the delegate can be disabled without disrupting the person’s ordinary account.
Service accounts should be narrow enough to name
Creating an agent service account is only the first step. A single broad account called ai-agent or automation-admin is barely better than a shared token if it becomes the doorway for every workflow. The name should tell reviewers what kind of work the identity exists to perform.
A useful identity has a clear job. A research summarizer that reads approved policy documents does not need the same identity as an agent that updates customer records. A coding assistant that opens draft pull requests does not need the same identity as a release helper that can promote a build. A finance reconciliation agent, if one exists, should not share credentials with a marketing content agent simply because both use the same model provider. The identity should be sized to the blast radius of the task.
This is where identity design meets AI Agent Tool Contracts. A narrow tool and a narrow identity reinforce each other. The tool defines the shape of the action. The identity defines who is allowed to call it and how the action will appear afterward. If a tool can update a customer address, the agent identity using it should make that workflow visible in logs. If a tool can only prepare a proposed update, the identity should not quietly have permission to perform the final update through another path.
Naming is not cosmetic. A clear identity name helps humans notice drift. If an account named agent-policy-reader begins writing records, the mismatch is obvious. If an account named automation has broad access everywhere, drift can hide for months. Good names create friction in the right place because they make strange behavior look strange.
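A crude sketch of that idea: if names encode intent, a monitor can flag actions that contradict the name. The `-reader` suffix convention here is an assumption, not a standard; the broad name `automation` deliberately slips past the check, which is exactly the problem the paragraph describes.

```python
def flags_drift(identity_name: str, action: str) -> bool:
    """Flag actions that contradict what the identity's name promises.
    Minimal sketch under an assumed convention: names ending in
    '-reader' should never perform write actions."""
    return identity_name.endswith("-reader") and action.startswith("write")
```

A name like agent-policy-reader makes a write stand out; a name like automation gives the monitor nothing to check against.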
Credentials are operational assets
The credential behind an agent identity may be an API key, OAuth token, workload certificate, short-lived access token, SSH key, database password, or delegated app grant. Whatever the form, it should be treated as an operational asset rather than a convenience string pasted into a prompt or environment field.
The first rule is separation. Secrets that let the agent act should not be visible to the model as ordinary context. The agent may need to call a tool that uses a credential, but it usually does not need to read the credential itself. A tool can hold the secret behind a boundary, accept structured inputs, perform the action, and return an inspectable result. That keeps credentials out of transcripts, memory, logs, and copied context.
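The separation rule can be sketched as a tool object that holds the credential privately while exposing only structured inputs and an inspectable result. The class name, parameters, and the pretend CRM call are all hypothetical; the real API call is stubbed out.

```python
class AddressUpdateTool:
    """Holds the credential behind a boundary. The model sees only the
    structured inputs and the returned result, never the secret itself."""

    def __init__(self, api_key: str):
        self._api_key = api_key  # never returned, logged, or echoed

    def run(self, customer_id: str, new_address: str) -> dict:
        # A real implementation would call the CRM API using self._api_key.
        # This sketch only returns the inspectable result shape.
        return {"customer_id": customer_id, "updated": True,
                "address": new_address}

tool = AddressUpdateTool(api_key="credential-held-outside-context")
result = tool.run("cust-42", "1 Example St")
```

The agent can call `run` and reason about the result, but nothing it sees, stores, or transcribes contains the credential.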
The second rule is rotation. Long-lived credentials are easy to forget because they work quietly until they do not. A mature agent workflow should have a way to replace credentials without rewriting the whole system. Rotation also reveals hidden coupling. If changing one token breaks five unrelated delegates, the identity design is probably too broad.
The third rule is revocation. A credential should be easy to disable when a workflow is paused, retired, compromised, or under investigation. Revocation is not only a security response. It is an operating control. If an agent begins behaving unexpectedly, the team should be able to stop its access while preserving evidence about what happened. That connection to AI Agent Incident Response is direct: stopping the actor cleanly is easier when the actor has its own credential.
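Rotation and revocation together suggest a minimal credential store: one credential per agent identity, replaceable and removable independently. This is a sketch of the operating pattern, not any vendor's secrets manager; the identity names are made up.

```python
import secrets

class CredentialStore:
    """Minimal sketch: one credential per agent identity,
    rotatable and revocable without touching other delegates."""

    def __init__(self):
        self._tokens = {}

    def issue(self, identity: str) -> str:
        token = secrets.token_hex(16)
        self._tokens[identity] = token
        return token

    def rotate(self, identity: str) -> str:
        # Replacing one identity's token must not break any other workflow;
        # if it does, the identity design is probably too broad.
        return self.issue(identity)

    def revoke(self, identity: str) -> None:
        # Stop the actor cleanly while other delegates keep running.
        self._tokens.pop(identity, None)

    def is_valid(self, identity: str, token: str) -> bool:
        return self._tokens.get(identity) == token
```

Rotating `agent-a` invalidates only its old token; revoking `agent-b` stops that delegate without disturbing anything else, which is the operating control the incident-response connection depends on.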
Delegation needs attribution, not impersonation
Some workflows require acting on behalf of a person. A personal agent may draft calendar changes for its owner. An enterprise agent may prepare a document under a team member’s direction. A customer support agent may propose a response for a human reviewer. The tempting design is impersonation: let the agent become the person for the duration of the task.
Impersonation should be handled carefully because it can erase the difference between human action and delegated action. A better pattern is attribution. The system can record that the agent acted for Alice, with Alice’s approval, under a specific workflow, at a specific time, using a specific tool. The visible result may still carry Alice’s ownership where the business process requires it, but the operational trail should preserve the delegate’s role.
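The attribution pattern amounts to recording the full sentence the paragraph describes. The field names and the example workflow below are assumptions for illustration.

```python
from datetime import datetime, timezone

def attribution_record(agent: str, on_behalf_of: str, approved_by: str,
                       workflow: str, tool: str) -> dict:
    """Record delegation explicitly instead of impersonating the person:
    who acted, for whom, under whose approval, through which workflow and tool."""
    return {
        "actor": agent,                # who actually acted
        "on_behalf_of": on_behalf_of,  # whose authority was delegated
        "approved_by": approved_by,    # the human authorization
        "workflow": workflow,
        "tool": tool,
        "at": datetime.now(timezone.utc).isoformat(),
    }
```

The visible artifact can still carry the person's ownership where the business process requires it; the operational trail keeps the delegate's role intact.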
This is not only for audits. It helps everyday collaboration. If a teammate sees a draft created by an agent on behalf of a project lead, they can decide how much review it needs. If a message was sent automatically after an explicit approval, the team can distinguish it from a manually written note. If a record was changed by an agent under a runbook, the next person can inspect the evidence instead of asking around for tribal context.
Attribution also protects agents from being blamed vaguely. When every delegated action is described as the agent did it, failures become hard to analyze. The useful question is which agent identity acted under which human authorization, using which credential, through which tool, against which resource. That is a sentence a system can answer if identity is designed well.
Identity belongs in the review surface
Agent identity should not be hidden in backend configuration. Reviewers need to see enough of it to understand the consequence of approval. If a person is asked to approve a file change, send a message, update a record, or grant a new scope, the interface should show which identity will perform the action. The reviewer does not need every credential detail, but they do need to know whether the actor is read-only, write-capable, production-facing, or acting through a sensitive integration.
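A review surface can carry that information with a few fields and no credential detail. The identity dictionary and its keys here are hypothetical; the point is that risk-relevant facts about the actor travel with the approval request.

```python
def approval_prompt(identity: dict, action: str) -> str:
    """Show the reviewer enough of the actor's identity to weigh the
    consequence, without exposing any credential material."""
    capability = "write-capable" if identity["can_write"] else "read-only"
    environment = "production" if identity["production"] else "staging"
    return (f"Approve '{action}'? "
            f"Actor: {identity['name']} ({capability}, {environment})")

publisher = {"name": "agent-doc-publisher", "can_write": True, "production": True}
prompt = approval_prompt(publisher, "publish documentation update")
```

A reviewer who sees "write-capable, production" can pause where they would have waved through a read-only staging actor.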
This visibility changes behavior. A reviewer may be comfortable letting a documentation agent update a draft branch but not a production publishing identity. A manager may approve a scheduling proposal but pause if the agent is about to use an account that can send external invitations. An engineer may accept a tool run in staging and reject the same run in production. Identity makes those distinctions legible before the action happens.
The same visibility helps after the action. When a trace shows the agent identity, credential boundary, tool call, approval, and resulting artifact, review becomes concrete. The person does not have to infer authority from a polished summary. They can see the chain that connected delegation to action.
The quiet test is revocation
A useful way to judge an agent identity design is to ask what happens when the workflow must stop. Can the agent be disabled without locking out a human? Can one workflow be paused without breaking unrelated workflows? Can a credential be rotated without losing the historical record of past actions? Can a reviewer tell which artifacts were created before and after the change? Can a team investigate a suspicious run without leaving the same actor active in production?
If the answer is no, the identity is probably doing too much. It may be shared across workflows, hidden inside a human account, granted broad scopes, or stored in places that are hard to inspect. None of those choices means the system is doomed. They mean the delegation boundary is still informal.
The mature design is ordinary and somewhat boring. Agents have distinct identities. Credentials are scoped, stored behind tools, rotated, and revocable. Human authorization is recorded without pretending the agent was the human. Logs show the actor plainly. Review surfaces expose the identity when consequence matters. Incidents can stop one delegate without tearing down the whole environment.
That is the practical value of agent identity. It turns a capable model with tools into a participant that systems can recognize. Not a person, not a magic worker, and not an anonymous script, but a bounded delegate whose work can be permitted, reviewed, explained, and stopped.


