There are a few steps leaders can take to ensure their organizations are using AI responsibly. Best practices include:
#1. Ask about models and data sheets
Leaders shouldn’t shy away from asking about the models that are powering various AI systems they’re considering. In fact, Goldman encourages executives to engage in a dialogue with potential vendors. “You need to ask, is there a model or is there a data sheet? What’s going into these models? What are the standards that are associated with these models?” she suggests.
Goldman is optimistic that these conversations will lead to the creation of a set of guidelines leaders can use to evaluate future AI systems. “What’s exciting is that there’s momentum and convergence toward thinking not quite about standardization but convergence towards sets of metrics that we can compare across different products,” she explains.
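For illustration, the "data sheet" Goldman refers to is structured documentation of a model: what data went into it, its intended use, the metrics it was evaluated on, and the standards it follows. The sketch below shows one hypothetical way a team might encode her vendor questions as a checklist. The field names and questions here are assumptions for illustration, not a formal standard.

```python
# A minimal sketch of a vendor questionnaire built from Goldman's questions.
# The data-sheet fields below are hypothetical, not an industry standard.

from dataclasses import dataclass, field

@dataclass
class ModelDataSheet:
    model_name: str
    training_data_sources: list = field(default_factory=list)  # what's going into the model
    intended_use: str = ""
    evaluation_metrics: dict = field(default_factory=dict)     # metrics comparable across products
    standards_followed: list = field(default_factory=list)     # guidelines the vendor claims to meet

def missing_disclosures(sheet: ModelDataSheet) -> list:
    """Return the questions a vendor has not yet answered."""
    gaps = []
    if not sheet.training_data_sources:
        gaps.append("What data is going into this model?")
    if not sheet.standards_followed:
        gaps.append("What standards are associated with this model?")
    if not sheet.evaluation_metrics:
        gaps.append("Which metrics allow comparison across products?")
    return gaps

# Usage: an incomplete disclosure surfaces the remaining questions to ask.
sheet = ModelDataSheet(model_name="vendor-model-x",
                       training_data_sources=["licensed web corpus"])
for question in missing_disclosures(sheet):
    print("Ask the vendor:", question)
```

Treating the disclosure as structured data rather than a PDF makes Goldman's hoped-for convergence practical: the same fields can be compared side by side across competing products.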
#2. Test the data that’s going into these systems
Executives also need to have an idea of what goes into the implementation process for any AI system they’re considering, according to Goldman. “You need to consider the safeguards that are up in terms of testing. AI is only as good as the data that goes into it. So make sure that when it is implemented with one’s own data, first test the data that’s going into it, continue tests around the fairness of the models themselves, and then obviously the control about what data goes in and goes out,” she says.
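To ground that advice, the sketch below shows what a first pass at those two tests might look like in Python: validating the data going in, then checking the model's outputs for fairness. It is illustrative only. The record format, the field names ("group", "approved"), and the choice of a demographic parity gap as the fairness metric are all assumptions; a real program would test richer data against multiple fairness metrics.

```python
# A minimal sketch of the two checks Goldman describes: test the incoming
# data first, then test the fairness of the model's outputs.
# All names here (records, "group", "approved") are hypothetical placeholders.

from collections import Counter

def find_missing_fields(records, required):
    """Step 1: test the incoming data -- flag records missing required fields."""
    return [(i, [f for f in required if r.get(f) is None])
            for i, r in enumerate(records)
            if any(r.get(f) is None for f in required)]

def demographic_parity_gap(records, group_field, outcome_field):
    """Step 2: test the model's outputs -- the difference in positive-outcome
    rates between groups (0.0 means parity; a large gap warrants review)."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_field]] += 1
        positives[r[group_field]] += r[outcome_field]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: model approval decisions recorded per applicant group.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
assert not find_missing_fields(records, ["group", "approved"])
gap, rates = demographic_parity_gap(records, "group", "approved")
print(f"Approval rates by group: {rates}; parity gap: {gap:.2f}")
```

Run on the toy data, this reports a 0.33 gap in approval rates between groups, the kind of signal that, per Goldman, should trigger continued testing and tighter control over what data goes in and out.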
#3. Keep the people who will be using these systems top of mind
Because employees are the people who will be using the technology directly, leaders must prioritize their needs and potential use cases. Every decision should be made with them in mind, which means considering how these technologies will be received and the potential indirect consequences of launching them.
“Some consequences are indirect, but they have a big impact,” explains Richard Benjamins, chief AI and data strategist at Telefonica. “So if you’re an organization and you see those things, you have to act. You could say ‘My product complies with all the rules, 10% of people have damage, but 90% are enjoying it.’ But if you take a human-centered approach, not a utilitarian approach, then that 10% is still a lot of people. If you cause any harm, then you have to take that into account.”
Kagan echoes this sentiment, noting, “Leaders need to understand and drill down into the potential risks and see what they can do about them. We’re gonna run into similar issues to what we’re running into with other software. But now you’re running into an issue of there’s going to be off-the-shelf AI products that you can buy. Do you know what the risks are? Are you able to do anything about it?”
#4. Take risk mitigation one step further
Beyond safeguarding their organization from potential risks, leaders at responsible organizations will take their ethical AI efforts one step further by identifying new opportunities to harness these systems to drive positive changes. As Benjamins explains, “More and more things like ESG are becoming important. So as an organization with this powerful technology, you have to think about what are the potential negative ethical impacts. But you also have to think about how you can use this technology and this data for good, to solve big societal and planetary problems.”