
Flight risk is not a score – it is a pattern

Traditional flight risk models produce a number. Agents detect behavioral signal patterns weeks before resignation – and explain what the signals actually mean.

8 min read · Agent Use Cases in Practice

The scenario

Carlos Reyes is an HRBP supporting a 200-person product organization. On a Thursday morning, he opens Teams and sees a notification from the HR agent:

“Retention alert: Wei Zhang, Senior Product Designer, Product Team B. I am detecting a pattern that matches pre-departure behavior in similar employees. Here is what I am seeing.”

Carlos has received flight risk alerts before. In his previous organization, they came from a dashboard that assigned every employee a score from 0 to 1. He never found them useful. A score of 0.73 tells you nothing about what is happening or what to do about it. This notification is different.

How this works today

Flight risk prediction has been one of the most discussed – and least useful – applications of analytics in HR. The typical approach works like this:

A model ingests data points: tenure, time since last promotion, compensation ratio, manager change history, and sometimes engagement survey scores. It produces a number. Employees above a threshold get flagged. The HRBP receives a list of names with scores.

The problems with this approach are well documented:

Scores without context. A flight risk score of 0.7 does not tell Carlos why Wei might leave or what he should do about it. Is it compensation? Is it career stagnation? Is it a manager relationship issue? The score collapses all of these into a single number that obscures the actionable information.

Stale inputs. Most models run on data that is weeks or months old. Engagement surveys happen quarterly at best. Compensation benchmarks update annually. By the time the model flags someone, the window for intervention may have already closed.

False positives that erode trust. When a model flags 30 people and only 3 of them were actually at risk, managers stop paying attention. The signal-to-noise ratio of traditional flight risk models is low enough that many HRBPs treat them as background noise.

No pattern recognition. A score does not distinguish between “this person is underpaid but loves their work” and “this person is underpaid, disengaged, and has stopped learning.” These are fundamentally different situations that require fundamentally different responses. A single number cannot capture this.

What the agent does differently

The notification Carlos received does not contain a score. It contains a pattern. Here is what the agent shows him:

“Wei Zhang – 4 behavioral signals detected over the past 6 weeks:

1. Compensation position: Wei’s total comp is now 12% below the 50th percentile for Senior Product Designers in the Bay Area market. This gap has widened from 4% over the past two quarters.

2. Learning activity: Wei completed 3 learning modules per month for the past year. In the last 6 weeks, this dropped to zero.

3. Collaboration pattern: Wei’s cross-team collaboration (measured by project participation and communication patterns) decreased 35% over the past month.

4. Career development: Wei had a career conversation with her manager in October where she expressed interest in moving to a design lead role. No follow-up actions were taken. The gig project she was recommended for was filled by someone else.

Pattern match: This combination of signals – widening comp gap + learning disengagement + collaboration withdrawal + stalled career development – matches the pre-departure pattern for senior individual contributors with 3+ years tenure. In similar cases, 68% of employees resigned within 90 days.

Recommended actions: I have prepared three intervention options based on the specific signals detected. Want me to walk through them?”

This is not a score. It is a story that Carlos can evaluate, contextualize with his own knowledge of Wei and the team, and act on with specificity.

What is different

| Dimension | Traditional flight risk | Agent pattern detection |
| --- | --- | --- |
| Output | A score (0.73) | A pattern with named signals and historical match rate |
| Explainability | Low – score derived from weighted model features | High – each contributing signal is visible and named |
| Data freshness | Quarterly survey data, annual comp benchmarks | Continuous behavioral signals across multiple systems |
| Actionability | “This person is at risk” (no guidance on why or what to do) | Specific signals with matched intervention options |
| False positive handling | Lists of flagged names; managers learn to ignore | Pattern matching with confidence levels and signal transparency |
| Timing | Model runs monthly or quarterly | Continuous monitoring with alert when pattern threshold is met |

Behind the chat: what makes this work

Multi-system signal correlation. The power of pattern detection comes from connecting signals across systems that traditionally do not talk to each other. Compensation data lives in the HRIS. Learning activity lives in the LMS. Collaboration patterns live in project and communication tools. Career development data lives in the talent management system. No single system has enough context to detect the pattern. The Workforce Context Engine unifies these signals into a single view that the agent can reason across.
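As a minimal sketch of what "unifying signals into a single view" can mean in practice – the system names, field names, and values below are illustrative assumptions, not the Workforce Context Engine's actual schema:

```python
# Merge per-system records into one employee view so a pattern
# detector can reason across them. Each feed is keyed by employee id.
def unify_signals(employee_id, *system_feeds):
    view = {"employee_id": employee_id}
    for feed in system_feeds:
        view.update(feed.get(employee_id, {}))  # missing system: skip
    return view

# Hypothetical feeds from HRIS, LMS, and collaboration tooling.
hris = {"e42": {"comp_gap_pct": -12}}
lms = {"e42": {"modules_last_6w": 0, "modules_baseline": 3}}
collab = {"e42": {"cross_team_change_pct": -35}}

print(unify_signals("e42", hris, lms, collab))
```

No single feed here would justify an alert on its own; the combined view is what makes the pattern visible.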

Behavioral baselines, not static thresholds. The agent does not flag Wei because her learning activity is “low” in absolute terms. It flags her because her learning activity dropped significantly relative to her own baseline. An employee who never used the LMS would not trigger this signal. An employee who used it consistently and then stopped does. This baseline approach dramatically reduces false positives because it is detecting change, not comparing against an arbitrary threshold.
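The baseline idea can be sketched in a few lines – the window sizes and the 60% drop threshold are illustrative assumptions, not the product's calibration:

```python
from statistics import mean

def baseline_drop(history, recent, drop_threshold=0.6):
    """Flag a signal only when recent activity falls well below the
    employee's OWN historical baseline, not an absolute cutoff."""
    baseline = mean(history)
    if baseline == 0:   # never active: no baseline, nothing to detect
        return False
    drop = 1 - (mean(recent) / baseline)
    return drop >= drop_threshold

# Consistent LMS user who stopped: flagged.
print(baseline_drop(history=[3, 3, 2, 3], recent=[0, 0]))   # True
# Employee who never used the LMS: no change, not flagged.
print(baseline_drop(history=[0, 0, 0, 0], recent=[0, 0]))   # False
```

Detecting change against a personal baseline is what lets the zero-activity employee and the newly silent employee be treated differently, even though their recent numbers are identical.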

Historical pattern matching. The pre-departure fingerprint – the specific combination and sequence of signals that precede voluntary resignation – is derived from analyzing actual departures across the organization. The agent knows that “comp gap + learning disengagement + collaboration withdrawal” is a high-confidence pattern because it has been validated against historical data. Different patterns emerge for different populations: early-career employees show different pre-departure signals than senior leaders, and individual contributors show different patterns than managers.
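A rough sketch of how a historical match rate like the 68% above could be derived – the signal names and the history records are fabricated for illustration:

```python
def match_rate(pattern, history):
    """Share of past employees who showed this full signal combination
    and resigned within the window. `history` is a list of
    (signal_set, resigned_within_90d) pairs."""
    matched = [resigned for signals, resigned in history
               if pattern <= signals]       # pattern is a subset
    return sum(matched) / len(matched) if matched else None

pattern = {"comp_gap", "learning_disengagement", "collab_withdrawal"}
history = [
    ({"comp_gap", "learning_disengagement", "collab_withdrawal"}, True),
    ({"comp_gap", "learning_disengagement", "collab_withdrawal"}, True),
    ({"comp_gap", "learning_disengagement", "collab_withdrawal"}, False),
    ({"comp_gap"}, False),                  # partial match: excluded
]
print(match_rate(pattern, history))  # 2 of 3 full matches resigned
```

Because patterns are fit per population (early-career vs. senior, IC vs. manager), `history` in a real system would be segmented before this computation runs.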

Signal sequencing. The order in which signals appear matters. Compensation gap followed by learning disengagement is a different pattern than learning disengagement followed by compensation gap. The first suggests external pull (someone is being recruited and the comp gap makes it attractive). The second suggests internal push (someone has checked out and then noticed they could earn more elsewhere). The agent tracks sequence because sequence informs the right intervention.
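The sequencing rule from the paragraph above can be sketched directly – the event format and the two labels are illustrative, following the article's own external-pull vs. internal-push example:

```python
def classify_sequence(signal_events):
    """Order signals by when they first appeared; comp gap before
    disengagement suggests external pull, the reverse suggests
    internal push."""
    ordered = sorted(signal_events, key=lambda e: e["week"])
    names = [e["signal"] for e in ordered]
    if names.index("comp_gap") < names.index("learning_disengagement"):
        return "external pull"
    return "internal push"

events = [{"signal": "learning_disengagement", "week": 2},
          {"signal": "comp_gap", "week": 5}]
print(classify_sequence(events))  # internal push
```

Same two signals, opposite order, opposite diagnosis – which is exactly why a single score that ignores sequence cannot select the right intervention.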

Governed alerting. Not every signal combination triggers an alert. The agent applies confidence thresholds calibrated to avoid alert fatigue. Carlos does not get notified about every employee who skipped a learning module. He gets notified when the combination of signals, their magnitude, their sequence, and their match against historical patterns crosses a threshold that warrants attention. The goal is signal, not noise.
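A minimal sketch of the gating logic – the thresholds (three concurrent signals, 50% historical match rate) are illustrative assumptions, not the product's calibration:

```python
def should_alert(signals, historical_match_rate,
                 min_signals=3, min_match_rate=0.5):
    """Gate alerts on both breadth (number of concurrent signals)
    and confidence (match rate against historical departures)."""
    return (len(signals) >= min_signals
            and historical_match_rate >= min_match_rate)

# One skipped learning module alone never reaches the HRBP...
print(should_alert({"learning_disengagement"}, 0.2))   # False
# ...but a multi-signal, well-supported pattern does.
print(should_alert({"comp_gap", "learning_disengagement",
                    "collab_withdrawal", "stalled_career"}, 0.68))  # True
```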

The fundamental shift here is from prediction to explanation. Traditional models try to predict who will leave. Agents try to explain what is happening and why – which gives the human on the other end something they can actually work with.

Key insight

A flight risk score of 0.73 is not actionable. A pattern that says “compensation fell below market, engagement dropped, and development activity stopped – this matches the pre-departure pattern we see in 68% of voluntary exits” is actionable. The difference is not precision. It is explainability.

Key terms

Behavioral Signal Pattern
A combination of observable changes across multiple systems – engagement, learning activity, collaboration, compensation position – that together indicate a shift in employee trajectory.
Pre-departure Fingerprint
The specific combination and sequence of behavioral signals that historically precede voluntary resignation. Varies by role type, tenure, and seniority level.
Explainable Risk
A risk assessment that includes not just a probability but the specific signals that contributed to it, enabling the recipient to evaluate the assessment and choose an appropriate response.
The bottom line

Flight risk detection becomes useful when it shifts from a score to a pattern. Agents that correlate behavioral signals across systems, match them against historical departure patterns, and explain the reasoning in plain language give managers and HRBPs something they can actually act on – not a number, but a story.