A new report from Capgemini has revealed that 90% of organizations are aware of at least one instance where an AI system has resulted in ethical issues for their business.
The report, titled “AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust”, has found that while digital and AI-enabled interactions with customers are on the rise as customers seek contactless interfaces amid the COVID-19 pandemic, systems are still being designed without due concern for ethical issues.
While two-thirds (68%) of consumers expect AI models to be fair and free of bias, Capgemini’s findings show that only 53% of organizations have a leader responsible for the ethics of AI systems, such as a Chief Ethics Officer, and just 46% have the ethical implications of their AI systems independently audited.
What’s more, 60% of organizations have attracted legal scrutiny, and 22% have faced customer backlash, because of decisions reached by AI systems.
This lack of ethical AI implementation comes amid increased regulatory scrutiny. The European Commission has issued guidelines on the key ethical principles that should be used when designing AI applications, and the US Federal Trade Commission (FTC) called for “transparent AI” in early 2020. The FTC stated that when an AI-enabled system makes an adverse decision, such as declining an application for a credit card, the organization should show the affected consumer the key data points used in arriving at the decision and give them the right to correct any inaccurate information.
However, while globally 73% of organizations informed users