The translation problem
Every vendor in the HR tech space is racing to claim the “agentic” label. This is understandable. The market wants agents. Analysts are writing about agents. RFPs are asking for agents. So vendors describe what they have using the language of what the market wants.
The result is a gap between what marketing says and what the product does. This gap is not usually intentional deception. It is the natural outcome of a market moving faster than product cycles. But the gap is real, and if you do not account for it, you will make purchasing decisions based on slide-ware.
This decoder table is your field guide. Left column: what you will hear in demos and read in RFP responses. Middle column: what it usually means in practice. Right column: the question that tests the claim. Use it to generate follow-up questions, not to dismiss vendors outright.
The decoder table
| What the Vendor Says | What It Usually Means | The Question to Ask |
|---|---|---|
| “Our platform is AI-native” | AI features were added to an existing platform in the last 1-3 years. The core data model and architecture predate the AI layer. | “When was your first production AI feature shipped, and what percentage of your codebase was rewritten for AI?” |
| “We have 50+ AI agents” | There are 50+ prompts with different system instructions, each handling a specific task. They do not coordinate, share memory, or operate autonomously. | “Can two of these agents collaborate on a single task? Do they share persistent memory? Can they act without a user prompt?” |
| “Powered by GenAI” | An LLM generates text in one or more features. Summaries, job description drafts, or conversational interfaces. The AI generates content, not decisions. | “Beyond text generation, what decisions does the AI make autonomously? What actions can it execute?” |
| “Intelligent automation” | Rules-based workflow automation with an AI-generated summary or recommendation at one step. The “intelligence” is a thin layer on top of traditional automation. | “Walk me through a scenario where the system deviates from the predefined workflow based on contextual reasoning.” |
| “Skills-based organization enablement” | The product has a skills taxonomy (a list, not an ontology) and can tag employees with skills from that list. Matching is keyword-based, not semantic. | “How does your system handle skills it has never seen before? Show me how it determines that ‘ML engineering’ and ‘machine learning development’ are the same skill.” |
| “Agentic workflows” | Existing workflows that now include an LLM-generated recommendation at one or more decision points. The workflow structure is still predefined and deterministic. | “If conditions change mid-workflow, does the system re-evaluate and adjust the remaining steps? Or does it follow the predefined path regardless?” |
| “Proactive insights” | A scheduled report or dashboard notification. The “proactivity” is a cron job that runs on a timer, not an agent that detects a condition and acts. | “What event triggers this insight? Is it a schedule or a detected condition? What happens after the insight is surfaced? Does the system take action, or does it stop at notification?” |
| “End-to-end talent intelligence” | Analytics dashboards covering multiple talent domains (recruiting, performance, retention). The data is displayed, not acted upon. “Intelligence” means “reports.” | “Show me a scenario where a talent insight triggers an automated action without human initiation. What is the full chain from insight to outcome?” |
| “Seamless integration” | The product has a REST API and some pre-built connectors. Integration requires configuration, mapping, and ongoing maintenance. “Seamless” means “possible.” | “How many production customers run this integration today? What is the average implementation time? What breaks when the source system schema changes?” |
| “Enterprise-ready AI governance” | There is a settings page where administrators can toggle features on/off and set basic role-based access. Governance means “admin controls.” | “Show me the audit trail for an AI-generated recommendation. Can I see the data sources, reasoning chain, and confidence score that produced this output?” |
| “On our roadmap for Q3” | Engineers are aware of the concept. There may or may not be a design document. Q3 is optimistic. Q1 next year is realistic. Cancellation is possible. | “Is there a beta customer running this today? Can I speak with them?” |
| “Our skills graph” | A relational database table of skills with parent-child relationships. Not a graph database. Not a knowledge graph. A hierarchy stored in SQL. | “What database technology stores your skills relationships? How many relationship types exist beyond parent-child? How frequently are new relationships inferred from data?” |
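The "skills-based" and "skills graph" rows hinge on one testable distinction: keyword matching versus semantic matching. The sketch below is illustrative only; a real product would use embeddings or a curated ontology, and the `ALIASES` table is a hypothetical stand-in for that ontology. It shows why the vendor question ("are 'ML engineering' and 'machine learning development' the same skill?") defeats exact string comparison:

```python
# Illustrative sketch: why keyword matching fails the "same skill?" test.
# A real semantic matcher would use embeddings or a curated ontology;
# the ALIASES table below is a hypothetical stand-in for one.

ALIASES = {
    "ml": "machine learning",
    "engineering": "development",  # treated as interchangeable for this demo
}

def normalize(skill: str) -> frozenset:
    """Lowercase, expand aliases, and return the bag of tokens."""
    tokens = []
    for token in skill.lower().split():
        tokens.extend(ALIASES.get(token, token).split())
    return frozenset(tokens)

def keyword_match(a: str, b: str) -> bool:
    """What many 'skills taxonomy' products actually do."""
    return a.lower() == b.lower()

def semantic_match(a: str, b: str) -> bool:
    """A crude proxy for semantic matching: compare normalized token sets."""
    return normalize(a) == normalize(b)

a, b = "ML engineering", "machine learning development"
print(keyword_match(a, b))   # False: the raw strings differ
print(semantic_match(a, b))  # True: the normalized token sets agree
```

A list-plus-string-matching system cannot pass this test without someone hand-entering every alias, which is exactly what the follow-up questions in the table are designed to surface.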
How to use this in practice
Do not bring this table to a demo and play “gotcha.” That is adversarial and unproductive. Instead, use it as a preparation tool:
- Before the demo. Review the vendor slides. Identify which phrases from the left column appear. Prepare the corresponding questions from the right column.
- During the demo. When you hear a phrase, note it. Ask the corresponding question at the appropriate moment. Listen for specificity in the answer. Vague answers (“We handle that”) are a signal. Specific answers (“Here is how the agent evaluated three options and selected the retention intervention”) are evidence.
- After the demo. Score the vendor on the gap between claim and demonstrated capability. A small gap is normal. A large gap is a risk factor. Document the gaps and revisit them during reference calls.
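The post-demo scoring step can be as lightweight as a checklist. A minimal sketch follows; the claims listed and the scoring rule are invented for illustration, not a standard rubric:

```python
# Hypothetical gap-scoring sketch: the claims and the scoring rule are
# invented for illustration, not a standard evaluation rubric.

# Each entry: (marketing claim heard, was the capability demonstrated live?)
observations = [
    ("agentic workflows", False),
    ("proactive insights", False),
    ("seamless integration", True),
    ("AI governance audit trail", True),
]

def gap_score(obs):
    """Fraction of claims NOT backed by a live demonstration (0.0 = no gap)."""
    undemonstrated = sum(1 for _, shown in obs if not shown)
    return undemonstrated / len(obs)

score = gap_score(observations)
print(f"claim/demo gap: {score:.0%}")  # prints "claim/demo gap: 50%"
```

Whatever form your rubric takes, the point is the same: record the gap per claim while the demo is fresh, so reference calls probe the specific undemonstrated items rather than general impressions.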
The meta-pattern
Across all twelve phrases, a single pattern emerges: vendors describe the outcome they want to deliver as if it were the current capability. “Proactive insights” is the aspiration. “Scheduled dashboard refresh” is the reality. The aspiration is genuine. The timeline is the question.
Your job as a buyer is to separate current capability from future aspiration, then decide whether you are buying what exists or investing in what might exist. Both are valid strategies. But they require very different contract structures, timelines, and risk assessments.
The bottom line
This decoder is meant to be printed (or bookmarked) and brought to vendor demos. When you hear a phrase from the left column, mentally substitute the middle column, then ask the vendor to prove otherwise.
Marketing language is not lying; it is optimistic framing. Your job is to translate the optimism into architecture questions. Every phrase in the left column has a corresponding technical proof point you can request. Do so.