Understanding Ethical AI: What HR Leaders Need To Know

By Nicole Schreiber-Shearer, Future of Work Specialist at Gloat

AI innovations have reached a critical turning point. After years of speculating about how automated technologies will transform the way we work, we’re now starting to see the impact of these innovations for ourselves.

From drafting emails to writing lines of code, AI has the potential to enhance and augment a wide array of our daily tasks. Yet, as use cases for automation become more mainstream, new concerns about how to use these systems responsibly are rising to the surface.

Some leaders fear that AI will exacerbate biases, while others are concerned about compromising employees’ privacy. Although both of these issues should be top of mind for executives, the good news is that AI systems built to ethical standards are designed specifically to protect confidentiality and fairness. The more leaders learn about what responsible AI usage looks like, the better equipped they will be to harness systems that adhere to the highest standards of ethics.

What is ethical AI?

Ethical AI refers to the development and deployment of artificial intelligence systems that emphasize fairness, transparency, accountability, and respect for human values.

Organizations that are committed to ethical AI usage create policies that document their company’s values and guidelines, and establish well-defined review processes to shape how these standards of responsibility are put into practice.

While legal guidelines set a baseline, effectively mitigating bias and privacy infringement requires further action. For example, an AI algorithm that indirectly leads people to engage in self-destructive behavior might be legal to use, but it is not ethical.

What are the benefits of using ethical AI?

Whether you’re working with partners that prioritize ethical AI use or building your own algorithms, there are several benefits associated with using ethical AI, particularly when it comes to talent management. Some top advantages include:

Mitigate bias

AI tools that are blind to data such as gender, race, ethnicity, and ability can mitigate much of the bias that often shapes traditional talent management decisions. Systems should be engineered and monitored to ensure that implicit bias is not picked up as part of the machine-learning process. In doing so, AI-powered tools can remove many of the career growth barriers that typically hold back employees from underrepresented groups, paving the way for more equitable pathways for growth.
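To make this concrete, here is a minimal sketch, assuming a Python pipeline and hypothetical column names, of the simplest form of this idea, sometimes called “fairness through unawareness”: protected attributes are dropped before features ever reach a matching model.

```python
# A minimal sketch of excluding protected attributes from model features.
# Column names are hypothetical.
import pandas as pd

PROTECTED_ATTRIBUTES = ["gender", "race", "ethnicity", "disability_status"]

def build_feature_set(candidates: pd.DataFrame) -> pd.DataFrame:
    """Return only job-relevant features, dropping protected attributes."""
    return candidates.drop(columns=PROTECTED_ATTRIBUTES, errors="ignore")

candidates = pd.DataFrame({
    "skills_match_score": [0.91, 0.78],
    "years_experience": [6, 4],
    "gender": ["F", "M"],       # retained for auditing, never fed to the model
    "ethnicity": ["A", "B"],    # retained for auditing, never fed to the model
})

features = build_feature_set(candidates)
print(list(features.columns))  # ['skills_match_score', 'years_experience']
```

Dropping columns alone is not a complete fix, since correlated features can still act as proxies for demographics; that is why the monitoring described in the principles below matters.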

Enhance human decision-making processes

Humans are inherently biased and limited in their decision-making. By augmenting decision-making with bias-mitigated recommendations, organizations can help promote equitable practices across the company. From junior employees to the most tenured executives, ethical AI can surface opportunities to employees based on their qualifications and deliver well-rounded, diverse candidate pools.

Show employees what they are capable of

Traditionally, barriers and self-limiting beliefs have shrunk talent pools and discouraged qualified internal candidates from stepping into new projects and roles. Rather than letting these obstacles hinder career mobility, AI shows employees what they’re truly capable of achieving by generating suggestions for projects, roles, and gigs based on the skills they bring to the table. As Seagate’s Global Head of Talent Marketplace and TA Transformation, Divkiran Kathuria, explains: “A lot of women don’t tend to apply to an opportunity that isn’t a 100% fit. We hold ourselves back. But that’s the beauty of [AI systems like] the talent marketplace. It’s not asking you to sell yourself. The platform is telling you that this is an opportunity that’s fit for you.”

The 5 key principles of ethical AI

While every company should create its own standards for ethical AI usage that align with employee, organizational, and industry needs, here are a few guiding principles based on our own approach to responsible AI design:

#1. Algorithms are built to enhance the human decision process, not replace it

AI should help people make informed decisions, rather than obscuring, limiting, or skewing their perspectives. Ideally, AI should be built to empower the people who come into contact with it, regardless of their background or level of seniority. AI vendors should commit to remaining ‘recommendation-based,’ empowering users to make more informed decisions based on the suggestions their systems generate.

#2. Fairness as a North Star

AI should treat individuals and groups fairly, without preconceptions, prejudice, or discrimination. Systems must handle demographic data carefully and ensure it does not influence the recommendations that are generated. They should also be engineered and monitored to ensure that implicit biases, which rely on subtle patterns present in the data, are not picked up as part of the learning process. Data quality is equally essential: companies need to consider what data the AI is analyzing and put practices in place to avoid perpetuating existing biases.
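One way such monitoring might work, sketched below with synthetic data, is to test how well the model’s features predict a protected attribute that was deliberately held out of them; predictability well above the base rate suggests some feature is acting as a proxy. This is one illustrative check, not a complete fairness audit.

```python
# A rough proxy-leakage check on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))         # the model's features (e.g., skill scores)
protected = rng.integers(0, 2, 500)   # protected attribute, held out of X

# If X predicts the protected attribute well above the ~0.50 base rate,
# some feature likely encodes it indirectly and warrants review.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, protected, cv=5)
print(f"protected-attribute predictability: {scores.mean():.2f}")  # ~0.50 here
```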

#3. Proactive monitoring and auditing

AI vendors should commit to regular algorithmic auditing. AI should be continuously audited and reviewed, and vendors should take a proactive approach to preventing bias creep. In practice, companies can enforce policies on when to use data and build models, implement rigorous review processes for models going to production, and periodically monitor and test models to detect and mitigate bias creep.
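As one concrete example of periodic testing, the sketch below applies the “four-fifths rule” from US EEOC guidance to a hypothetical recommendation log: if any group’s selection rate falls below 80% of the highest group’s rate, the model is flagged for investigation.

```python
# An illustrative four-fifths-rule audit over a hypothetical log.
from collections import defaultdict

def selection_rates(records):
    """records: (group, was_recommended) pairs -> selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in records:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    return {g: rec / tot for g, (rec, tot) in counts.items()}

def four_fifths_check(records, threshold=0.8):
    """Flag groups whose selection rate is under 80% of the top group's."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: (rate / top >= threshold) for g, rate in rates.items()}

audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(audit_log))  # {'A': True, 'B': False} -> review group B
```

Running a check like this on a schedule, and alerting when any group fails it, is one way to catch bias creep between full audits.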

#4. Transparency

AI should be transparent and explainable, which in practice means users can learn why certain candidates and opportunities are suggested. Clarifying what made a particular suggestion surface enables fairer, more informed decision-making. Vendors should strive to make AI considerations as clear as possible to the users they affect, removing the ‘black box’ that often surrounds AI-based suggestions. Systems should also promote transparency in the user interface and provide the context required to understand the information presented.
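A minimal sketch of what such an explanation might contain, assuming a skills-based matching system with hypothetical data structures: alongside each suggestion, surface the overlapping skills that drove it and the gaps that remain.

```python
# An illustrative explanation payload for a skills-based match.
def explain_match(candidate_skills: set[str], role_skills: set[str]) -> dict:
    """Summarize why a candidate-role pairing was suggested."""
    overlap = candidate_skills & role_skills
    coverage = len(overlap) / len(role_skills) if role_skills else 0.0
    return {
        "matched_skills": sorted(overlap),  # shown alongside the suggestion
        "missing_skills": sorted(role_skills - candidate_skills),
        "skill_coverage": round(coverage, 2),
    }

print(explain_match({"python", "sql", "mentoring"},
                    {"python", "sql", "forecasting"}))
# {'matched_skills': ['python', 'sql'],
#  'missing_skills': ['forecasting'], 'skill_coverage': 0.67}
```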

#5. Accountability

People who play a part in creating AI must be accountable for the systems they create and the outcomes of their use. AI systems should follow a ‘human-in-the-loop’ model, an approach that pairs machine-generated recommendations with human judgment and oversight. This includes building strong data privacy protections and performing frequent algorithmic audits during which data scientists comb through source code and outputs to ensure fairness.
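Below is a minimal human-in-the-loop sketch, illustrative rather than any vendor’s actual workflow: the system only proposes, and a named reviewer must record a decision before anything is acted on, leaving an audit trail.

```python
# An illustrative human-in-the-loop gate for AI recommendations.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate: str
    role: str
    approved: bool = False
    reviewer: str | None = None
    decided_at: datetime | None = None

    def review(self, reviewer: str, approve: bool) -> None:
        # Every recommendation records a human decision, keeping
        # outcomes attributable and auditable.
        self.reviewer = reviewer
        self.approved = approve
        self.decided_at = datetime.now(timezone.utc)

rec = Recommendation(candidate="emp_1042", role="Data Analyst gig")
rec.review(reviewer="hr_manager_7", approve=True)
print(rec.approved, rec.reviewer)  # True hr_manager_7
```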

What are some of the challenges associated with ethical AI?

While ethical AI can help companies minimize discrimination and democratize career development, there are challenges associated with ethical AI, including:

AI may not always be the answer

As more new use cases for AI emerge, leaders are struggling to determine which are a good use of their time and investment—and which they should avoid. Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, observes that executives are quickly becoming more strategic about their AI use, noting, “I think people are becoming smart about what these new tools are good for and what they’re not good for. I think we’re seeing the emergence of a set of norms and standards for how we interact with these tools responsibly.”

Data collection concerns must be taken seriously

The ability to use AI ethically is reliant on feeding AI models data that is sourced responsibly. According to Goldman, data collection needs to be a top consideration for leaders who are thinking about implementing AI-powered systems. “We often have customers coming to us very excited about AI and we’re obviously very excited as well. But the first question needs to be ‘How is your data?’” she explains. Leaders must ensure employees consent to their data being collected and that they understand how it will be used.

Leaders can’t view AI as a cure-all

While some executives may be tempted to experiment with as many AI use cases as they can, experts like Goldman encourage leaders to take a more selective approach. She compares today’s AI to a “really good assistant,” noting that there are “…a lot of limitations, there are sometimes problems around accuracy and toxicity. But there’s no doubt that if you’ve read any number of headlines, there are also some serious workplace transformation issues we’re going to need to grapple with as a society.”

It’s important for leaders to consider the direct and indirect consequences of each AI system they’re evaluating and recognize that while AI-powered tools can be incredibly useful, they won’t solve every challenge their organizations are facing.

Best practices for HR leaders looking to implement ethical AI for talent management

The emergence of ethical AI comes at a crucial time for talent management teams. HR leaders are grappling with complex challenges, including global skills shortages, rising burnout levels, and ongoing economic uncertainty. By processing enormous amounts of data at speed and scale, new AI-powered talent management tools can equip leaders with unparalleled insights into their workforce’s skills, ambitions, and potential.

Although HR teams may be tempted to experiment with many systems at once, leaders must take the time to understand how each AI-powered tool will be used and reflect on its potential consequences. AI hiring software is one example of an innovation that can help level the playing field, as long as executives ensure their systems generate data-driven suggestions and weigh the value of tapping internal candidates as well. The tools should tell hiring managers why candidates are a good fit and compare internal and external applicants side-by-side to pave the way for fair and effective hiring decisions.

In addition to carefully selecting which AI-powered software to use, Google’s Senior Product Inclusion and Equity Program Manager Sydney Coleman emphasizes the importance of reevaluating the language used in job descriptions to ensure bias or coded terms aren’t leading to candidate discrimination. “I think there has been a big expansion into using AI technology specifically around inclusive language. Having a human element to QAing is crucial and so is leveraging technology to flag things like elitist language around where or whether people went to college.”
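As a toy illustration of that kind of flagging, not Google’s actual tooling, the sketch below scans a job description for a small, hypothetical list of coded phrases and surfaces them for a person to reconsider, keeping the human QA element Coleman describes.

```python
# An illustrative job-description screen; the phrase list is hypothetical.
import re

FLAGGED_PHRASES = {
    r"\btop[- ]tier (university|school)\b": "elitist: credentials over skills",
    r"\brockstar\b|\bninja\b": "coded, exclusionary jargon",
    r"\byoung and energetic\b": "age-coded language",
}

def review_job_description(text: str) -> list[tuple[str, str]]:
    """Return (phrase, reason) pairs for a human reviewer to reconsider."""
    findings = []
    for pattern, reason in FLAGGED_PHRASES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append((match.group(0), reason))
    return findings

jd = "We want a rockstar engineer from a top-tier university."
for phrase, reason in review_job_description(jd):
    print(f"'{phrase}' -> {reason}")
```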

Talent marketplaces are another AI-powered system that’s quickly becoming popular with HR leaders—and it’s easy to understand why. While the platforms are known to amplify internal mobility efforts and democratize career development, workflow and mindset changes must be prioritized in tandem with a talent marketplace launch to drive maximum impact.

HR leaders must encourage hiring managers to stop viewing talent as “theirs” and to shift toward an abundance mindset that treats all team members across the organization as shared resources they can tap to complete projects and gigs. By equipping managers with the right technology and creating a culture that empowers all employees to participate in cross-functional opportunities, organizations can redeploy talent seamlessly and efficiently.

To learn more about ethical AI’s game-changing use cases for HR strategy, check out our guide, Transforming talent management with ethical AI.
