Why the word matters
In 2024, “agentic” went from an academic adjective to the most overused word in enterprise software. Every product that once called itself “AI-powered” now claims to be agentic. The result is predictable: buyers have no idea what the word means, and vendors have no incentive to clarify it.
This article fixes that. We will define the term precisely, place it on a spectrum of automation capabilities, and give you a five-question test you can bring into your next vendor evaluation.
The four properties of an agentic system
Computer science researchers – most notably Andrew Ng and the Stanford HAI group – converge on four properties that distinguish an agentic system from every other kind of software automation. A system must exhibit all four to earn the label.
| Property | What it means | Example in HR |
|---|---|---|
| Perceive | The system monitors its environment continuously, not just when a user opens a screen. | Detecting that a critical role has been open for 45 days and internal candidates exist. |
| Reason | The system evaluates options against a goal, weighing trade-offs rather than following a fixed script. | Ranking three internal candidates by readiness, flight risk, and development cost. |
| Act | The system takes action across one or more systems of record – not just surfacing a recommendation. | Drafting a personalized outreach message, scheduling a conversation, and updating the ATS. |
| Learn | The system observes the outcome of its actions and adjusts its future behavior. | Noting that candidates who received a manager endorsement were 3x more likely to accept, then prioritizing endorsement in future outreach. |
Remove any one of these properties and you have something useful – but not agentic. A dashboard perceives but does not act. A workflow engine acts but does not reason. A chatbot reasons within a session but does not learn across sessions.
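The all-four requirement can be expressed as a minimal interface sketch. This is illustrative only; the class and method names below are assumptions for the example, not any vendor's actual API:

```python
from abc import ABC, abstractmethod

class AgenticSystem(ABC):
    """Illustrative: a system earns the 'agentic' label only if it
    implements all four properties, not a subset."""

    @abstractmethod
    def perceive(self) -> list:
        """Continuously monitor the environment for signals
        (e.g. a critical role open 45 days with internal candidates)."""

    @abstractmethod
    def reason(self, signals: list) -> list:
        """Weigh options against a goal and choose actions,
        rather than follow a fixed script."""

    @abstractmethod
    def act(self, actions: list) -> list:
        """Execute across systems of record and return outcomes."""

    @abstractmethod
    def learn(self, outcomes: list) -> None:
        """Feed observed outcomes back into future decisions."""
```

A dashboard, in this framing, overrides `perceive` but nothing else, so it can never be instantiated as an `AgenticSystem` – the type system enforces the same point the paragraph above makes in prose.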
The autonomy spectrum
It helps to place agentic on a spectrum rather than treating it as a binary label. Think of four levels, each building on the one before it.
| Level | Label | User role | System role | Memory |
|---|---|---|---|---|
| 1 | Search | Types a query | Returns a ranked list | None |
| 2 | Chatbot | Asks a question | Generates a natural-language answer | Within session only |
| 3 | Copilot | Prompts with context | Drafts content, suggests next steps | Within session, sometimes within user |
| 4 | Agent | Sets a goal, reviews outcomes | Perceives, reasons, acts, learns | Persistent across sessions and users |
Most products on the market today sit at Level 2 or Level 3. There is nothing wrong with that – a well-built copilot delivers real value. The problem arises when a Level 2 product is marketed as Level 4, because buyers make architectural decisions based on capabilities that do not exist.
How to place a product on the spectrum
Placing a product on the spectrum is not always straightforward, because marketing language deliberately blurs the boundaries. Here is a practical guide. Start by asking what happens when no user is logged in. If the answer is “nothing” – no monitoring, no action, no progress on any workflow – the product is at Level 3 or below, regardless of how sophisticated its in-session behavior is.
Next, look at the system’s relationship with time. A Level 1 or Level 2 product has no concept of time at all – each interaction is independent. A Level 3 product may remember previous sessions but does not act between them. A Level 4 product treats time as a first-class dimension: it tracks deadlines, detects trends, escalates urgency, and sequences actions across days or weeks without human prompting.
Finally, examine the data flow. In Levels 1 through 3, data flows in one direction: from the system to the user in the form of answers or suggestions. At Level 4, data flows in a loop: the system acts, observes what happens, and feeds that observation back into its decision-making. This closed loop is the clearest architectural marker of an agentic system.
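The three diagnostics above – behavior when no user is logged in, the system's relationship with time, and the direction of data flow – can be combined into a rough placement heuristic. A minimal sketch, where the boolean flags are hypothetical inputs you would fill in after a vendor demo:

```python
def placement_level(acts_unattended: bool,
                    tracks_time: bool,
                    closed_feedback_loop: bool,
                    remembers_sessions: bool) -> int:
    """Rough heuristic mapping observed capabilities onto the
    four-level autonomy spectrum. Illustrative, not a formal test."""
    if acts_unattended and tracks_time and closed_feedback_loop:
        return 4  # Agent: perceives, reasons, acts, learns
    if remembers_sessions:
        return 3  # Copilot: in-session drafting, some memory
    # These flags alone cannot separate search from chatbot;
    # that distinction is about the interaction style, not autonomy.
    return 2
```

Note how strict the Level 4 branch is: a product that remembers sessions but does nothing while no one is logged in still lands at Level 3, which is exactly the trap the "no user logged in" question is designed to expose.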
Why the distinction is not academic
Consider a concrete scenario. Priya, a CHRO, wants to reduce time-to-fill for critical engineering roles. Here is what each level of the spectrum actually delivers:
- Search: Priya types “internal candidates for senior backend engineer” and gets a list of names sorted by keyword match.
- Chatbot: Priya asks “Who are our strongest internal candidates for this role?” and gets a paragraph with three names and brief explanations.
- Copilot: Priya opens a requisition, and the copilot suggests five internal candidates with fit scores, drafts outreach messages, and flags skill gaps – but only while Priya is looking at the screen.
- Agent: The system notices the role has been open for 15 days, identifies seven internal candidates across two business units, checks their managers’ capacity, drafts differentiated outreach for each, sends the messages after manager approval, tracks response rates, and adjusts its approach for the next open role – all before Priya logs in on Monday morning.
The gap between Level 3 and Level 4 is not incremental. It is structural. The copilot waits for Priya. The agent works on Priya’s behalf.
The five-question vendor litmus test
When a vendor tells you their product is agentic, ask these five questions. The first four probe the four properties (question one covers both perceiving and acting); the fifth probes the trust architecture around them.
| # | Question | What you are testing | Red-flag answer |
|---|---|---|---|
| 1 | Does your system take action when no user is logged in? | Perceive + Act | “It surfaces recommendations in a dashboard.” |
| 2 | Can you show me a case where the system chose between two valid options and explain why? | Reason | “It follows the workflow you configure.” |
| 3 | How does the system behave differently today than it did six months ago, based on outcomes? | Learn | “We release model updates quarterly.” |
| 4 | How many systems of record does the agent write back to in a single workflow? | Act (cross-system) | “It integrates via API but the user confirms each step.” |
| 5 | Who approves actions before they execute, and how granular is that control? | Trust architecture | “Everything is fully autonomous” or “The user approves every action.” |
A strong answer to question five is nuanced: the system should be capable of acting autonomously, but the organization should be able to set approval boundaries that match its risk tolerance. Fully autonomous with no guardrails is as much a red flag as fully manual with an AI label.
Use these questions as a scorecard. A vendor that answers all five convincingly – with concrete examples, not abstractions – is likely operating at Level 4. A vendor that stumbles on questions one and three is almost certainly at Level 3 or below, regardless of their slide deck. Do not accept “on the roadmap” as a current capability. Architectural constraints do not disappear with a quarterly release.
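The scorecard idea can be made concrete in a few lines. A sketch, where the structural-question rule and the verdict strings are assumptions layered on the article's rubric, not a formal scoring standard:

```python
def score_vendor(convincing: dict[int, bool]) -> str:
    """Score answers to the five litmus-test questions.

    `convincing` maps question number (1-5) to whether the vendor
    answered with concrete examples rather than abstractions.
    Questions 1 and 3 are structural: failing either caps the
    vendor at Level 3 or below, whatever the slide deck says.
    """
    if not (convincing.get(1) and convincing.get(3)):
        return "Level 3 or below (fails a structural question)"
    if all(convincing.get(q) for q in range(1, 6)):
        return "Likely Level 4"
    return "Inconclusive: probe the weak answers with concrete cases"
```

The asymmetry is deliberate: questions two, four, and five can be argued either way in a demo, but questions one and three test architecture, and architectural constraints do not disappear with a quarterly release.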
Where Gloat sits on the spectrum
Gloat’s Agentic Talent Intelligence Platform is designed around all four properties. Agents within the platform continuously monitor workforce signals (perceive), evaluate talent decisions against organizational goals (reason), execute multi-step workflows across integrated systems (act), and refine their models based on realized outcomes (learn).
But the more important point is not where any single vendor sits – it is that you now have a framework to evaluate every vendor consistently. The four properties and the five questions work regardless of whose logo is on the slide.
Common misconceptions
Before we close, three misconceptions worth naming:
- “Agentic means no human in the loop.” False. Agentic means the system can act, not that it always should. The best agentic systems have configurable human-in-the-loop controls. The human role shifts from initiator to governor – setting boundaries, reviewing outcomes, and adjusting goals rather than triggering every action.
- “Agentic means LLM-powered.” Not necessarily. Large language models are one enabling technology, but an agentic system might also use graph algorithms, optimization solvers, or rule engines. The defining feature is the perceive-reason-act-learn loop, not the underlying model architecture.
- “If it is not agentic, it is not valuable.” Also false. A well-built copilot that saves recruiters 10 hours a week is enormously valuable. The point is not to dismiss non-agentic tools but to label them honestly so you can architect your stack correctly. Many organizations will run copilots and agents side by side for years – the key is knowing which tool fits which problem.
The goal is not to chase a buzzword. The goal is to know exactly what you are buying – and what it can and cannot do on its own.
An agentic system perceives context, reasons about goals, acts across systems, and learns from outcomes – all without waiting for a human to press a button.
Key terms
Agentic: a system that can perceive changing context, reason toward a goal, act across system boundaries, and learn from its own outcomes. If a product cannot do all four, it is not agentic – no matter what the pitch deck says.