October 9, 2023

EqualAI Releases 11 Principles for Responsible AI Governance

Stakeholders from underrepresented groups should be involved in the integration of AI into business processes or functions.

The nonprofit organization EqualAI has published a report on best practices for implementing 11 principles of responsible, ethical artificial intelligence. EqualAI defines responsible AI as “safe, inclusive and effective for all possible end users.” The principles include privacy, transparency and robustness.

Executives from Google DeepMind, Microsoft, Salesforce, Amazon Web Services, Verizon, the SAS Institute, PepsiCo, customer engagement company LivePerson and aerospace and defense company Northrop Grumman co-authored the report.

The report does not focus specifically on generative AI; instead, its scope covers all “complex AI systems currently being built, acquired and integrated.”

What is the EqualAI Responsible AI Governance Framework?

The EqualAI Responsible AI Governance Framework includes 11 principles:

  • Preservation of privacy.
  • Transparency.
  • Human-oriented focus.
  • Respect for individual rights and societal good.
  • Open innovation.
  • Rewarding robustness.
  • Continuous innovation and review.
  • Enlisting employees’ involvement.
  • Prioritizing fairness through accountability.
  • Human-in-the-loop (ongoing human oversight during AI decision-making).
  • Professional development.

It also includes six central pillars:

  1. Responsible AI Values and Principles.
  2. Accountability and Clear Lines of Responsibility.
  3. Documentation.
  4. Defined Processes.
  5. Multistakeholder Reviews (including underrepresented and marginalized communities).
  6. Metrics, Monitoring and Reevaluation.

The framework is based on EqualAI’s Responsible AI Badge Program, which is a certification track to help corporate leaders reduce bias in AI.

“At EqualAI, we have found that aligning on AI principles allows organizations to operationalize their values by setting rules and standards to guide decision making related to AI development and use,” said Miriam Vogel, president and CEO of EqualAI, in a press release.

What steps can organizations take to create an effective responsible AI strategy?

According to the report, key steps toward adopting an effective responsible AI strategy include:

  • Securing C-suite or board support.
  • Taking into account feedback from diverse and underrepresented groups.
  • Empowering employees to raise potential concerns.

An organization’s responsible AI framework can be customized to fit its existing company values and the ways it already uses AI. The goal isn’t to reach “zero risk,” which isn’t realistically achievable; instead, companies should focus on creating a culture of responsible AI governance. Performance recognition, pay and promotion incentives could be tied to AI risk mitigation efforts.

Why does responsible AI governance matter?

Investing in responsible AI practices is good for business as well as for humanity, EqualAI said. The organization cited a January 2023 Cisco study, which found that 60% of consumers are concerned about how organizations apply and use AI (the study did not single out generative AI) and that 65% of consumers have lost trust in organizations because of their AI practices.

SEE: How to get started using Google Bard (TechRepublic) 

“After surveying their particular AI landscape and horizon, it is time to develop AI principles that align with the organization’s values and establish an infrastructure and process… to support these values and ensure they are not impeded by AI use,” the report stated.

A PDF of the complete report is available from EqualAI.

TechRepublic has reached out to EqualAI for additional comments about these guidelines; we did not hear back prior to article publication.


