Gloat’s commitment
to ethical AI

Ethical, explainable, and transparent Artificial Intelligence (AI) is generating a burst of global interest that is echoing through our industry and community. But while we’re seeing encouraging investment in auditing and de-biasing, the ethical issues surrounding AI are far from resolved. We know that AI and data are prone to bias, which is why, throughout Gloat’s seven years of existence and across billions of data points (2.5 billion job descriptions, 500 million profiles, and 300 million new data points per day), we’ve maintained an ongoing commitment to de-biasing practices, and remain dedicated to preventing and mitigating the potential negative impacts of AI.

Gloat’s commitment to de-biasing is articulated around five key ethical pillars:

  • Enhancing—but not replacing—human awareness and decision making
  • Fairness as a north star
  • Proactive monitoring and auditing
  • Ongoing transparency
  • Accountability

Consumer-first AI: algorithms built to enhance the human decision process, not replace it

AI should help people make informed decisions, rather than obscuring, limiting, or skewing their perspectives. Gloat’s AI is built to empower the people who come into contact with it, from junior employees to high-level decision-makers. We’re committed to remaining ‘recommendation-based’ and to empowering our users to make more informed decisions.

Our multi-strategy matching ensures varied talent recommendations for opportunities, giving transparent insight into why each particular profile is showing up.

Fairness as a north star

AI is central to Gloat’s systems and to the organizations that implement them. Consequently, Gloat’s AI should treat individuals and groups fairly, without preconceptions, prejudice, or discrimination. To ensure this, our AI does not ingest any demographic data or proxies for it. This means that our system is blind to attributes such as gender, race, ethnicity, and ability, as well as to university name, native language, and other data that could contribute to bias. In addition, the system and its algorithms are engineered, and monitored, to ensure that implicit bias, which relies on subtle patterns present in the data, is not picked up as part of the learning process.
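To make the idea concrete, here is a minimal sketch of stripping demographic attributes and likely proxies from a profile before it reaches any matching model. The field names are illustrative assumptions, not Gloat’s actual schema or pipeline:

```python
# Hypothetical sketch; field names are assumed, not Gloat's real schema.
# Demographic fields and likely proxies are removed from a profile
# before any matching model ever sees it.

PROTECTED_FIELDS = {"gender", "race", "ethnicity", "ability"}
PROXY_FIELDS = {"university_name", "native_language", "birth_year"}

def sanitize_profile(profile: dict) -> dict:
    """Return a copy of the profile with demographic data and proxies removed."""
    blocked = PROTECTED_FIELDS | PROXY_FIELDS
    return {k: v for k, v in profile.items() if k not in blocked}

profile = {
    "skills": ["python", "sql"],
    "experience_years": 4,
    "gender": "F",
    "university_name": "Example University",
}
clean = sanitize_profile(profile)
# clean keeps only the non-blocked fields: skills and experience_years
```

Keeping the blocklist explicit and centralized makes it auditable: a reviewer can see in one place exactly which attributes the models are blind to.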

Proactive monitoring and auditing

Gloat is committed to regular algorithmic auditing. Our AI is continuously audited and reviewed, and everyone involved in its development takes proactive measures to prevent bias creep, including enforced policies for data use and model building, rigorous review processes for models going to production, and periodic monitoring.

Multiple controls are deployed in parallel to reduce the likelihood of bias creep, while simultaneously improving the ability to detect bias creep:

  • Continuous self-monitoring of models
  • Third-party audits
  • Customer-run tests of model behavior in their own environments
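As a rough illustration of what continuous self-monitoring can check (a generic sketch, not Gloat’s internal tooling), an auditor holding group labels the model never sees can compare recommendation rates across groups and flag any gap above a tolerance:

```python
# Illustrative bias-monitoring check (not Gloat's internal tooling).
# Group labels exist only in the auditor's dataset, never in model inputs.

from collections import defaultdict

def recommendation_rates(records):
    """records: iterable of (group, recommended: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in records:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    return {g: rec / total for g, (rec, total) in counts.items()}

def parity_gap(records):
    """Largest difference in recommendation rate between any two groups."""
    rates = recommendation_rates(records)
    return max(rates.values()) - min(rates.values())

audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
gap = parity_gap(audit)  # group A: 2/3, group B: 1/3 -> gap of 1/3
```

A check like this can run on every model release and on live traffic samples, raising an alert whenever the gap exceeds an agreed threshold.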


Ongoing transparency

We are committed to transparent and explainable AI. In practice, this means that users can learn why particular candidates and opportunities are suggested. Clarifying what made a suggestion surface enables fairer and more equitable decision-making, which aligns with our values of equality and inclusion.

Gloat strives to make AI considerations and decisions as clear and transparent as possible to the users it affects, removing the ‘black box’ that often surrounds AI-based suggestions. As a recommendation engine, the system promotes transparency in the user interface, and provides the required context for understanding the information presented.


Accountability

Gloat’s managers and employees are accountable for the AI they create and for the outcomes of its use. All employees involved must ensure that the design and implementation of artificial intelligence components adhere to the five principles detailed here, and must maintain the objectivity of the systems they are responsible for. Our AI follows a ‘human-in-the-loop’ model, an approach that combines machine intelligence with human judgment in building and applying machine learning models.
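A ‘human-in-the-loop’, recommendation-based flow can be sketched in a few lines. This is an illustration under assumed names, not Gloat’s implementation: the model only proposes ranked suggestions, and a human reviewer makes every final call.

```python
# Illustrative human-in-the-loop sketch (assumed names, not Gloat's code):
# the model proposes; a person decides.

def propose(candidates, score, top_k=3):
    """Rank candidates by a model score; return suggestions only, act on nothing."""
    return sorted(candidates, key=score, reverse=True)[:top_k]

def decide(suggestions, human_review):
    """A human accepts or rejects each suggestion before any action is taken."""
    return [s for s in suggestions if human_review(s)]

candidates = ["ana", "ben", "carla", "dev"]
suggested = propose(candidates, score=len)  # len() stands in for a model score
accepted = decide(suggested, human_review=lambda s: s != "carla")
```

The key design point is the separation: `propose` has no side effects, so nothing the model outputs takes effect until a human approves it in `decide`.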

Finally, Gloat incorporates strong data-privacy protections, and we perform frequent algorithmic audits in which data scientists comb through source code and outputs to ensure fairness.