The session problem, the initiation problem, the learning problem

Three structural limitations that define the boundary between copilots and agents

Why this framework matters

If you read one article in this academy, make it this one. The three problems described here – the session problem, the initiation problem, and the learning problem – form the structural boundary between AI that assists and AI that transforms. Every limitation you will encounter in evaluating HR technology can be traced back to one of these three problems. Every architectural decision about your AI stack is, at bottom, a decision about whether and how to solve them.

These are not product complaints. They are not gaps that a vendor’s next quarterly release will close. They are inherent properties of the copilot paradigm – a paradigm that assumes a human initiates, a machine responds within a session, and the interaction leaves no lasting trace. That paradigm works brilliantly for certain tasks. But it cannot deliver the outcomes that talent leaders increasingly need: proactive workforce optimization, continuous skill development, and organizational learning that compounds over time.

Problem 1: The session problem

The session problem is the most intuitive of the three. It states: a copilot’s context, memory, and continuity exist only while a user is actively interacting with it. When the session ends, everything evaporates.

To understand why this matters, consider the experience of Kwame, a VP of Engineering at a mid-size technology company. Kwame has 14 open roles across three teams. On Monday morning, he opens his HR copilot and asks for a prioritized view of his open positions. The copilot delivers a thoughtful analysis: two roles are critical-path for the Q3 release, three are near-duplicates that could be consolidated, and four have strong internal candidates. Kwame takes notes and moves on.

By Wednesday, two things have changed. A strong external candidate who was in the final stage for one role has accepted another offer, and an internal employee on Kwame’s team has posted a transfer request that makes them available for a different role. The copilot knows none of this – because no one asked. When Kwame returns on Friday to check progress, he starts from zero. He re-asks the same question, and the copilot re-performs the same analysis, unaware that the landscape has shifted.

What the session problem costs

| Dimension | With session-bound copilot | With persistent agent |
| --- | --- | --- |
| Time awareness | Snapshot of the moment the user asks | Continuous monitoring between interactions |
| Context continuity | Resets every session; user must re-explain | Maintains rolling context across days and weeks |
| Multi-day workflows | Impossible – no session spans days | Native – the agent tracks progress toward goals over time |
| Cross-user coordination | User A’s session is invisible to User B’s | Shared organizational context enables coordination |
| Urgency detection | Cannot detect a situation becoming urgent if no one is looking | Detects escalating signals and raises alerts proactively |

The session problem is especially damaging in HR because talent processes are inherently longitudinal. A hiring decision unfolds over weeks. A retention risk develops over months. A workforce transformation plays out over quarters. A tool that only exists in momentary snapshots cannot serve processes that live in continuous time.

Why engineering cannot fix it within the copilot paradigm

You might think: just add persistent memory to the copilot. Store the conversation history and reload it next session. Some products do this – and it helps with the narrow problem of context continuity. But it does not solve the deeper issue. A copilot with a memory log still does not run between sessions. It does not notice that Wednesday’s changes invalidated Monday’s analysis. It does not track progress on multi-day workflows. Persistent memory without persistent execution is like giving someone a diary but never letting them leave the house.
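To make the distinction concrete, here is a minimal Python sketch. Every name in it is hypothetical, invented for illustration rather than taken from any product. The copilot class keeps memory across sessions yet executes only when asked; the agent class exposes a `tick` method meant to run on a schedule, so Wednesday’s change gets noticed without a prompt:

```python
from dataclasses import dataclass

@dataclass
class OpenRole:
    title: str
    finalist_accepted_elsewhere: bool = False

class CopilotWithMemory:
    """Session-bound: remembers past sessions, but only runs when invoked."""
    def __init__(self):
        self.history = []                          # persists across sessions

    def answer(self, question, roles):
        self.history.append(question)              # memory survives...
        # ...but nothing executes between sessions. If a finalist accepts
        # another offer on Wednesday, this object never notices.
        return f"Analysis of {len(roles)} roles (question #{len(self.history)})"

class PersistentAgent:
    """Runs between interactions: re-checks the world on a schedule."""
    def __init__(self, roles):
        self.roles = roles
        self.last_analysis_valid = True

    def tick(self):
        # Called continuously (e.g. hourly), regardless of user activity.
        if any(r.finalist_accepted_elsewhere for r in self.roles):
            self.last_analysis_valid = False       # Monday's analysis is stale
            print("Alert: prior prioritization invalidated; re-running analysis.")

roles = [OpenRole("Backend Lead"), OpenRole("Platform SRE")]
copilot = CopilotWithMemory()
copilot.answer("Prioritize my open roles", roles)  # Monday's session
agent = PersistentAgent(roles)
roles[0].finalist_accepted_elsewhere = True        # Wednesday's change
agent.tick()                                       # noticed without a prompt
```

The difference is not where the data lives but when the code runs: the agent’s loop executes between interactions, which is precisely what a session-bound copilot cannot do.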

Problem 2: The initiation problem

The initiation problem states: a copilot can only respond to prompts and can never independently determine that a situation requires attention. The human must always go first.

This is the most underestimated of the three problems, because it is invisible by definition. You cannot see the value of an insight that was never surfaced. You cannot measure the cost of an action that was never initiated. The initiation problem is a problem of absence.

Consider Lucia, an HR Business Partner supporting a 400-person sales organization. On any given day, the most valuable AI intervention might be:

  • Noticing that quota attainment in the EMEA region has dropped for three consecutive months and correlating that with a spike in manager turnover
  • Identifying that a recently promoted team lead has four direct reports with overlapping skill profiles, creating a single-point-of-failure risk
  • Recognizing that a high-potential employee who was passed over for a project assignment last quarter is now showing early attrition signals
  • Detecting that the company just announced a strategic pivot that will make 30% of the sales enablement team’s current skills less relevant within 18 months

Lucia would benefit enormously from any of these insights. But she will never type any of them into a chatbox, because she does not know they exist. That is the initiation problem: the most valuable insights are precisely the ones the user does not know to ask for.

The initiation gap in numbers

Research from Deloitte and McKinsey suggests that HR leaders spend roughly 60% of their time on reactive work – responding to issues that have already surfaced. The remaining 40% is split between planned strategic work and genuinely proactive intervention. The initiation problem confines AI copilots largely to the reactive portion – the 60% – because they require a human to identify the problem before they can help solve it.

| Work type | % of HR leader time | Copilot can help? | Agent can help? |
| --- | --- | --- | --- |
| Reactive – responding to known issues | ~60% | Yes | Yes |
| Planned strategic – executing known initiatives | ~25% | Partially | Yes |
| Proactive – discovering unknown risks and opportunities | ~15% | No | Yes |

The 15% proactive category is where the highest-leverage interventions live. Catching attrition before it cascades. Identifying redeployment opportunities before a restructure. Surfacing skill gaps before they become performance gaps. A copilot is architecturally excluded from this category.

Why engineering cannot fix it within the copilot paradigm

Some vendors add notification layers on top of copilots – “smart alerts” or “proactive insights” that appear in a user’s feed. This is a step in the right direction, but it is not the same as solving the initiation problem. True initiation requires three capabilities: continuous environmental monitoring across multiple data sources, goal-awareness to determine which signals matter, and the authority to take or recommend action without waiting for a prompt. A notification is a broadcast. An initiation is a targeted, context-aware decision to act. The difference is the difference between a smoke detector and a firefighter.
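The gap between a notification layer and true initiation can be sketched in a few lines of Python. The signal names, thresholds, and `Goal` structure below are illustrative assumptions, not a reference design; the point is that all three capabilities appear, and the loop at the bottom fires without any user prompt:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    value: float

@dataclass
class Goal:
    name: str
    watches: str                 # which signal this goal cares about
    threshold: float

def monitor(sources):
    """Capability 1: continuous environmental monitoring across sources."""
    return [Signal(name, fetch()) for name, fetch in sources.items()]

def matching_goals(signal, goals):
    """Capability 2: goal-awareness, deciding which signals matter."""
    return [g for g in goals
            if g.watches == signal.name and signal.value > g.threshold]

def initiate(signal, goal):
    """Capability 3: authority to act or recommend without a prompt."""
    print(f"Initiating: {signal.name}={signal.value:g} breaches "
          f"goal '{goal.name}'; opening an intervention workflow.")

sources = {
    "emea_quota_decline_months": lambda: 3,     # three straight months
    "manager_turnover_rate": lambda: 0.12,
}
goals = [Goal("protect EMEA revenue", "emea_quota_decline_months", 2)]

for signal in monitor(sources):
    for goal in matching_goals(signal, goals):
        initiate(signal, goal)   # no one typed a question to trigger this
```

A smoke detector hard-codes its trigger. The sketch evaluates each signal against standing goals, which is what turns a broadcast into a targeted, context-aware decision to act.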

Problem 3: The learning problem

The learning problem states: a copilot cannot observe the outcomes of its own recommendations and use that data to improve future performance. Every interaction is independent. The system never gets smarter.

Of the three problems, this one has the greatest long-term cost, because it means the organization’s investment in AI never compounds. Day 1 and Day 1,000 produce the same quality of output.

Consider this scenario. Dmitri, a recruiting manager, uses his copilot to generate candidate shortlists for software engineering roles. Over six months, Dmitri evaluates 200 shortlists. He consistently passes over candidates whose profiles emphasize certifications in favor of candidates with open-source contributions and cross-functional project experience. He also tends to prefer candidates who have made at least one lateral career move, viewing it as a sign of adaptability.

After six months and 200 interactions, the copilot has learned absolutely nothing from Dmitri’s choices. It still ranks certification-heavy profiles at the top. It still fails to weight lateral moves. Every shortlist is generated as if it were the first. Dmitri’s expertise – the very thing that makes him a good recruiter – is invisible to the system.

Three layers of learning that copilots miss

| Layer | What it captures | Example | Copilot capability |
| --- | --- | --- | --- |
| Individual preference learning | A specific user’s patterns and priorities | Dmitri prefers open-source contributions over certifications | None |
| Organizational pattern learning | Aggregate patterns across all users in the organization | Hiring managers across the company close roles 40% faster when internal candidates receive a warm introduction from their current manager | None |
| Outcome-based model refinement | Which recommendations led to good outcomes and which did not | Candidates rated “strong match” who were hired had a 60% higher 12-month retention rate than those rated “moderate match” – validating the scoring model | None |

Without any of these learning layers, the AI remains a static tool. It is useful in the same way a calculator is useful – it performs a function reliably, but it never becomes a better calculator because you used it yesterday.

Why engineering cannot fix it within the copilot paradigm

Learning requires a closed loop: action, outcome observation, model update. Each step of that loop requires capabilities the copilot paradigm does not provide. Observing outcomes requires persistence – the system must still be running weeks or months after the recommendation was made. Attributing outcomes requires cross-system data – did the hire work out? Did the internal move succeed? That data lives in performance systems, retention records, and project outcomes, not in the chat window. Updating the model requires a learning infrastructure that operates outside any individual session.
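Here is the closed loop in miniature, as a deliberately naive Python sketch. The feature names, weights, and reward values are illustrative assumptions, not a real scoring model; what matters is the structure. `observe_outcome` can only run if something is still alive months after `recommend`, with access to hiring and retention data:

```python
recommendations = {}           # rec_id -> features the recommendation used
weights = {"certifications": 1.0, "open_source": 1.0, "lateral_move": 1.0}

def recommend(rec_id, candidate_features):
    """Step 1: act, and remember what informed the action."""
    recommendations[rec_id] = candidate_features
    return sum(weights[f] for f in candidate_features)

def observe_outcome(rec_id, hired, retained_12_months):
    """Step 2: outcome observation. Requires persistence (still running
    months later) and cross-system data (hiring and retention records)."""
    if rec_id in recommendations:
        reward = 1.0 if (hired and retained_12_months) else -0.5
        update_model(recommendations[rec_id], reward)

def update_model(features, reward, learning_rate=0.1):
    """Step 3: model update. Nudge weights toward what actually worked."""
    for f in features:
        weights[f] += learning_rate * reward

recommend("rec-1", ["open_source", "lateral_move"])
observe_outcome("rec-1", hired=True, retained_12_months=True)  # months later
print(weights)   # open_source and lateral_move now carry more weight
```

A copilot can perform step 1. Steps 2 and 3 are what the paradigm forecloses.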

Some vendors simulate learning by periodically retraining models on aggregate data. This is valuable but fundamentally different from closed-loop learning. Periodic retraining is a batch process controlled by the vendor’s engineering team. Closed-loop learning is continuous, organization-specific, and tied to individual outcomes. One is software maintenance. The other is genuine intelligence.

The three problems compound

The most important insight is that these three problems are not independent – they compound. Because the system is session-bound (Problem 1), it cannot observe outcomes over time, which means it cannot learn (Problem 3). Because it is reactive (Problem 2), it cannot detect situations that no one has asked about, which means it misses the highest-leverage opportunities. Because it does not learn (Problem 3), even the reactive help it provides never improves.

The compounding effect creates a hard plateau. Organizations that invest heavily in copilots see strong initial returns – productivity gains, time savings, user satisfaction. But those returns flatten. The copilot cannot get better, cannot act on its own, and cannot sustain context. The plateau is not a failure of execution. It is a property of the architecture.

What solves the three problems

Each problem has a corresponding architectural requirement; a combined sketch in code follows the list:

  • The session problem requires persistent orchestration – a system that runs continuously, maintains context, and tracks workflows across time.
  • The initiation problem requires environmental monitoring and goal-awareness – a system that observes signals across data sources and determines when action is warranted.
  • The learning problem requires outcome feedback loops – a mechanism that connects the result of every recommendation and action back to the decision model.
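Taken together, the three requirements describe the skeleton of an agent loop. The sketch below is schematic; every class and name is hypothetical, a shape rather than any particular product’s design. It shows how persistence, monitoring, and feedback live in one continuous process rather than as three bolt-ons:

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Goal:
    name: str
    trigger: Callable            # context dict -> bool (goal-awareness)

@dataclass
class Model:
    scores: dict = field(default_factory=dict)

    def decide(self, goal, context):
        return f"intervention:{goal.name}"

    def learn(self, action, outcome):
        self.scores[action] = self.scores.get(action, 0.0) + outcome

class AgenticSystem:
    """Persistent orchestration, environmental monitoring, and an
    outcome feedback loop in one continuous process."""
    def __init__(self, sources, goals, model):
        self.sources, self.goals, self.model = sources, goals, model
        self.context = {}                        # survives across sessions

    def run(self, cycles, interval_seconds=0):
        for _ in range(cycles):
            # Requirement 2: monitor signals across data sources.
            self.context.update({n: fetch() for n, fetch in self.sources.items()})
            for goal in self.goals:
                if goal.trigger(self.context):   # goal-awareness
                    action = self.model.decide(goal, self.context)
                    outcome = 1.0                # in reality, observed later
                    self.model.learn(action, outcome)   # Requirement 3
            time.sleep(interval_seconds)         # Requirement 1: keep running

system = AgenticSystem(
    sources={"attrition_risk": lambda: 0.8},
    goals=[Goal("retain key talent", lambda ctx: ctx["attrition_risk"] > 0.7)],
    model=Model(),
)
system.run(cycles=1)
print(system.model.scores)   # the agent is marginally smarter than at start
```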

These are the defining architectural properties of agentic systems. Gloat’s Agentic Talent Intelligence Platform was built around precisely these requirements: persistent agents that maintain organizational awareness across time, proactive initiation based on continuously monitored workforce signals, and closed-loop learning that makes every agent smarter with every outcome it observes.

But the framework is useful regardless of vendor. When you evaluate any AI investment, ask: does this solve the session problem, the initiation problem, and the learning problem? If it solves none of them, it is a copilot – useful, but bounded. If it solves one or two, it is in transition. If it solves all three, it is genuinely agentic – and capable of compounding returns over time.

The organizations that will lead in the next era of talent management are not the ones that adopted AI first. They are the ones that understood the structural limits of their current tools and made the architectural shift before the plateau became permanent.

Key insight

These three problems are not bugs in current products. They are boundaries of a paradigm. Understanding them changes how you evaluate every AI investment.

Key terms

Session problem
The structural limitation in which a system’s context, memory, and continuity exist only during an active user interaction.
Initiation problem
The structural limitation in which a system can only respond to user prompts and never independently identify situations that require action.
Learning problem
The structural limitation in which a system cannot observe the outcomes of its recommendations and use that data to improve future performance.
Persistent orchestration
An architectural pattern in which an AI system maintains ongoing awareness and the ability to act across sessions, users, and time.
Environmental monitoring
The continuous observation of signals across systems and data sources to detect conditions that warrant action.
Outcome feedback loop
A mechanism that connects the result of a system’s action back to its decision model, enabling iterative improvement.

The bottom line

The session problem means copilots forget. The initiation problem means copilots wait. The learning problem means copilots never improve. Together, these three structural limitations explain why copilots plateau – and why organizations that want AI to transform talent outcomes, not just accelerate tasks, need a different architecture entirely.