5 bold predictions for the future of ethical AI, from 3 experts
Experts at Salesforce, Telefonica, and Fox Rothschild forecast what our future with AI will look like
AI is making headlines today—and for good reason. The rise of generative AI opens up a new world of potential use cases that range from composing emails to songwriting to creating breathtaking images. While these innovations present an exciting opportunity to enhance and augment work, they’re also surfacing several concerns, including fears that the systems are biased and will make many jobs obsolete.
As AI continues to evolve rapidly, leaders are struggling to separate doom-and-gloom predictions from indisputable facts. Although no one can say with absolute certainty how AI will impact the working world, a handful of thought leaders who’ve studied these systems can weigh in on their benefits and on the steps leaders must take to reduce potential risks.
We recently sat down with three such experts: Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce; Odia Kagan, Partner and Chair of GDPR Compliance and International Privacy at Fox Rothschild; and Richard Benjamins, Chief AI and Data Strategist at Telefonica. Together, they discussed their best practices for ensuring AI use is ethical, as well as their top predictions for how these innovations will profoundly change the way we work.
5 predictions about how ethical AI will transform the way we work
While some people may fear that AI is going to take over their jobs, the experts we talked to painted a cautiously optimistic picture of our future with AI. Some of the most important predictions for leaders to keep on their radar include:
#1. Legislation is going to catch up to the speed of innovation
Right now, AI is accelerating at an alarming rate—and rules and standards for how to use it ethically aren’t keeping up. Fortunately, our experts don’t think this will always be the case. As Goldman explains, “I think this uptick in urgency from governments around the world to wrap their heads around the situation with AI right now means that we will have regulatory frameworks to depend on sooner rather than later.”
As the resident lawyer of the group, Kagan was quick to share her insights, which support Goldman’s theory. She compares what AI superfans are saying versus naysayers’ words of warning, noting, “A lot of people are saying ‘Oh, this is the end of the world’ and there’s a lot of people and bots that are saying ‘No, no, you’re fine, keep using us.’…As a person who deals with regulations for a living, I think it’s obvious that this needs to be reined in.”
#2. Businesses will get smarter about what to use AI for and how to do it ethically
As more new use cases for AI emerge, leaders are struggling to determine which are a good use of their time and investment—and which they should avoid. Goldman explains that executives are quickly becoming more strategic about their AI use, noting, “I think people are becoming smart about what these new tools are good for and what they’re not good for. I think we’re seeing the emergence of a set of norms around how we interact with these tools responsibly.”
To summarize the status of some recent innovations, Goldman says, “Right now, generative AI is like a really good assistant. It has a lot of limitations, there are problems around accuracy, there’s sometimes problems around toxicity.” Over time, she thinks we’ll devise a system so that everyone can ensure AI is being used ethically. “I’m very encouraged by this question of measurability and standards…. It will be a moving target. But my prediction is that there will be a set of standard questions that people ask about systems to compare them.”
#3. Many AI challenges will soon be solved through engineering
Fear of bias is one of the most common concerns people raise about the rise of AI. While the algorithms some systems use may inadvertently lead to bias today, Benjamins doesn’t think this will be a long-term problem. “I think the direct consequences of the use of AI, like bias, opacity, explainability, and autonomy, those things will be solved, as an engineering part,” he notes. “We will understand how it works. It will take some time, but we can do many things.”
#4. The social and human consequences of AI innovation will require an all-hands-on-deck effort
While engineering can solve some of AI’s consequences, overcoming the challenges that arise indirectly from these innovations will be much less straightforward and will likely require collaboration among leaders, parents, and legislators.
“The indirect consequences of AI, like technology addiction or addiction to social media, anorexia because of these strong algorithmic recommendations, fake news, polarization of societies, those are the consequences that we need to work on, because those are not under control, and all of the regulations in the world that are currently happening won’t fully address them,” Benjamins cautions.
Rather than putting the onus entirely on legislation to keep AI’s potential risks in check, Kagan believes that regulation needs to come from multiple sources, including leaders and parents. “The people are always gonna look for magical solutions. I see how my kids are doing schoolwork now, and I see that I put in a lot more effort. So we will have to find replacements for that. I think we need both kinds of regulation, as well as self-regulation, as well as better parenting to deal with this.”
#5. Using AI ethically will be a prerequisite for success
Today, there are very few tools or systems in place that can assess how ethical a company’s use of AI is. However, soon businesses won’t be able to get away with leveraging these innovations irresponsibly, according to Goldman’s predictions. “I think this is a discipline that’s been evolving for several decades, but I think we’re gonna see in every company an emphasis around it because there’s gonna be no way to successfully operate in the digital sphere without deep operationalization of AI ethics across the board.”
Kagan shares a similar belief that ethical AI usage will become the new standard, particularly because organizations won’t want to risk losing the data they rely on. “It’s true that you could be too big to fail and too big to fine,” she notes. “But other pieces, like injunctions or like the requirement to delete data that was collected and processed illegally, which we have seen on both sides of the ocean, those are a big deal because you need the data, right? And then now what?”
To uncover the ethical AI tools that are changing the way we work for the better, check out our guide, Transforming talent management with ethical AI.