Stakeholders from underrepresented groups should be involved in the integration of AI into business processes or functions.
The nonprofit organization EqualAI has published a report on best practices for implementing 11 responsible and ethical principles for artificial intelligence. EqualAI defines responsible AI as “safe, inclusive and effective for all possible end users.” The principles include privacy, transparency and robustness.
Executives from Google DeepMind, Microsoft, Salesforce, Amazon Web Services, Verizon, the SAS Institute, PepsiCo, customer engagement company LivePerson and aerospace and defense company Northrop Grumman co-authored the report.
The report does not focus specifically on generative AI; instead, its scope covers all “complex AI systems currently being built, acquired and integrated.”
The EqualAI Responsible AI Governance Framework includes 11 principles:
It also includes six central pillars:
The framework builds on EqualAI’s Responsible AI Badge Program, a certification track designed to help corporate leaders reduce bias in AI.
“At EqualAI, we have found that aligning on AI principles allows organizations to operationalize their values by setting rules and standards to guide decision making related to AI development and use,” said Miriam Vogel, president and CEO of EqualAI, in a press release.
According to the report, key steps to adopting an effective, responsible AI strategy include:
An organization’s responsible AI framework should be tailored to its existing company values and to how it already uses AI. The goal isn’t to reach “zero risk,” which isn’t truly achievable; instead, companies should focus on building a culture of responsible AI governance. Performance recognition, pay and promotion incentives could be tied to AI risk mitigation efforts.
Investing in responsible AI practices is good for business as well as for humanity, EqualAI said. The organization cited a January 2023 Cisco study, which found that 60% of consumers are concerned about how organizations apply and use AI (the study did not single out generative AI) and that 65% have lost trust in organizations because of their AI practices.
“After surveying their particular AI landscape and horizon, it is time to develop AI principles that align with the organization’s values and establish an infrastructure and process… to support these values and ensure they are not impeded by AI use,” the report stated.
A PDF of the complete report can be found here.
TechRepublic has reached out to EqualAI for additional comments about these guidelines; we did not hear back prior to article publication.