5 ethical AI mistakes many companies make—and what to do instead

By Nicole Schreiber-Shearer, Future of Work Specialist at Gloat

How to sidestep the most pressing challenges associated with AI innovations


If it seems like calls to regulate AI keep getting louder, it’s not your imagination. While AI-powered innovations have the potential to revolutionize the way we work, there are some genuine concerns that must be addressed to ensure these systems are being used responsibly.

In fact, executives from some of the leading AI companies, including OpenAI, have gone so far as to say that AI may pose existential threats to our society if it’s not regulated properly. While legislators are working on getting laws on the books, leaders have an AI-related task of their own to prioritize: ensuring the systems they’re using are ethical and don’t create bias or compromise employees’ privacy.

Since many companies have only recently begun using AI, there’s still a lot of uncertainty about the guidance and guardrails leaders should put in place. Fortunately, we had the chance to connect with three leading ethical AI experts to learn more about responsible AI usage—and the top mistakes to watch out for when implementing these systems.

The most common AI mistakes leaders make—and insider secrets to solve these problems

Now that so many executives are eager to harness AI to streamline workflows and supercharge efficiency, we reached out to three ethical AI experts for their best practices: Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce; Odia Kagan, Partner and Chair of GDPR Compliance and International Privacy at Fox Rothschild; and Richard Benjamins, Chief AI and Data Strategist at Telefonica.

In addition to learning about the initiatives that leaders must prioritize, we also uncovered a few mistakes executives should avoid at all costs, including:

#1. Not taking data collection concerns seriously

Data collection needs to be a top consideration for any leader thinking about implementing AI-powered systems. “We often have customers coming to us very excited about AI, and we’re obviously very excited as well. But the first question needs to be, ‘How is your data?’” explains Goldman, emphasizing the need to ensure employees consent to their data being collected and understand how it will be used.

Kagan agrees and encourages leaders to ask themselves the following questions to ensure their organization’s data usage practices are responsible: “Can I do it better? Can I accomplish this in a better way? Is there a better and less invasive way to get this information? Can I get some of this through anonymous surveys instead?”

#2. Failing to take a people-first perspective

Since employees are the ones who will be using new AI-powered systems directly, leaders must prioritize their needs. Every decision should be made with their people in mind, and that means considering how these technologies will be received and the potential indirect consequences associated with launching them.

“Some consequences are indirect, but they have a big impact,” explains Benjamins. “So if you’re an organization and you see those things, you have to act. You could say, ‘My product complies with all the rules; 10% of people have damage, but 90% are enjoying it.’ But if you take a human-centered approach, not a utilitarian approach, then that 10% is still a lot of people. If you cause any harm, then you have to take that into account.”

Kagan echoes this sentiment, noting, “Leaders need to understand and drill down into the potential risks and see what they can do about them. We’re gonna run into similar issues to what we’re running into with other software. But now…there’s going to be off-the-shelf AI products that you can buy. Do you know what the risks are? Are you able to do anything about it?”

#3. Forgetting to ask about models and data sheets

Rather than shying away from asking about models and data sheets, Goldman encourages executives to engage in an open dialogue with any potential vendors. “You need to ask, ‘Is there a model or is there a data sheet? What’s going into these models? What are the standards associated with these models?’” she explains.

While executives must ask these questions now, Goldman is optimistic that this diligence may become less necessary once standardized guidelines are in place. “What’s exciting is that there’s momentum and convergence toward thinking, not quite about standardization, but convergence towards sets of metrics that we can compare across different products,” she concludes.

#4. Viewing AI as a cure-all

While some leaders may be tempted to experiment with as many AI use cases as they can, experts encourage executives to take a more selective approach. Goldman compares today’s generative AI to a “really good assistant,” noting that there are “…a lot of limitations, there are problems around accuracy and toxicity. But there’s no doubt that if you read any number of headlines, there are also some serious workplace transformation issues we’re going to need to grapple with as a society. And you see that playing out already, almost faster than anyone might have anticipated.”

#5. Waiting on engineering to solve all AI risks

Leaders can’t sit back and hope that more advanced AI solutions will mitigate all of the risks associated with these systems. While engineering can address some of them, overcoming the challenges that arise indirectly from these innovations will likely require an all-hands-on-deck approach, as Benjamins explains: “The indirect consequences of AI, like technology addiction, addiction to social media, and anorexia, because of these strong algorithmic recommendations and fake news, those are the consequences we need to work on, because those are not under control, and all of the regulations in the world that are currently happening won’t fully address them.”

Rather than putting the onus entirely on legislation to keep AI’s potential risks in check, Kagan believes that regulation needs to come from multiple sources, including leaders and parents. “The people are always gonna look for magical solutions. I see how my kids are doing schoolwork now, and I see that I put in a lot more effort. So we will have to find replacements for that. I think we need both kinds of regulation, as well as self-regulation, as well as better parenting to deal with this.”

To learn more about the best practices leaders can harness to use these systems responsibly, check out our guide, Transforming talent management with ethical AI.
