A KPMG report says that understanding and explaining how algorithms work is also key to success.
At this point in the artificial intelligence transformation, it is easier to spot the failures than the successes.
When Apple and Goldman Sachs rolled out the Apple Card, a high-profile tech founder and applicant described how the team clearly failed the “explainability” requirement for AI efforts.
Basecamp co-founder and CTO David Heinemeier Hansson complained about the card application process after he and his wife both applied for the card. Her credit limit was much lower than his, although her credit score was better. When Heinemeier Hansson tried to find out why, the first customer service representative literally had no answer:
“The first person said, ‘I don’t know why, but I swear we don’t discriminate, it’s just the algorithm.’”
The second customer service representative underscored the accountability failure:
“The second representative went on about how she could not get access to the real reasoning either (again, IT’S JUST THE ALGORITHM).”
How can Apple and Goldman Sachs prove that the credit rating process is fair if nobody has any idea how it works?
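Neither company has explained how the limit-setting model actually works, but at the customer-service level, explainability usually comes down to being able to surface per-applicant reason codes. The following is a minimal sketch of that idea, assuming an invented toy model, invented features, and synthetic data; it is not Apple’s or Goldman Sachs’ system.

```python
# Illustrative only: a toy credit model with per-applicant "reason codes".
# The features, data, and model are invented; this is not Apple's or Goldman Sachs' system.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["credit_score", "income", "utilization", "account_age_years"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))                  # standardized toy features
y = (X @ np.array([1.2, 0.8, -1.0, 0.5])                   # synthetic approval labels
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Return the features that held this applicant's score back the most."""
    contributions = model.coef_[0] * applicant             # per-feature contribution to the logit
    lowest = np.argsort(contributions)[:top_n]             # smallest (most score-reducing) first
    return [f"{FEATURES[i]} contributed {contributions[i]:+.2f} to the score" for i in lowest]

applicant = X[0]
print("approval probability:", round(model.predict_proba([applicant])[0, 1], 2))
for reason in reason_codes(applicant):
    print("reason:", reason)
```

Reason codes like these are exactly what the representatives in Heinemeier Hansson’s account said they could not provide.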
Every company needs an AI ethicist to define, document, and explain algorithms, according to “Ethical AI: Five Guiding Pillars” by Traci Gusher, Director, Innovation and Enterprise Solutions, Artificial Intelligence, and Todd Lohr, Director, Advisory, and a KPMG Digital Lighthouse Network Leader.
For AI to succeed, someone must own these algorithms and be able to explain exactly how the analysis works to team members and customers, the KPMG authors explained in the new report. The authors emphasized that AI must be governed and controlled in a meaningful way to gain acceptance from customers and employees.
“AI-driven companies know where and when to use AI,” Gusher said. “They have an AI compass that helps them point in the right direction for governance, accountability and value.”
Appointing a specific owner of AI efforts at the company level can also make transparency easier.
This owner should take the lead in explaining to customers how their data is used and how it affects the customer experience.
The report’s authors recommend that companies let customers choose whether or not to share data, while illustrating the benefits of opting in.
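The report does not prescribe a mechanism for opt-in, but in practice it usually reduces to storing an explicit consent flag and filtering on it before any customer data reaches a model. A minimal sketch, assuming invented record and field names:

```python
# Illustrative only: honoring a per-customer opt-in flag before data is used for AI.
# The record layout and field names are invented, not from the KPMG report.
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    customer_id: str
    shared_data_opt_in: bool = False          # off unless the customer explicitly opts in
    purchase_history: list = field(default_factory=list)

def training_pool(records: list) -> list:
    """Only customers who explicitly opted in are eligible for model training."""
    return [r for r in records if r.shared_data_opt_in]

records = [
    CustomerRecord("c-001", shared_data_opt_in=True, purchase_history=["laptop", "dock"]),
    CustomerRecord("c-002", purchase_history=["phone"]),   # never enters the training pool
]
print([r.customer_id for r in training_pool(records)])      # ['c-001']
```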
KPMG recommends following these guiding pillars to ensure that AI efforts are ethical:
- Transform the workplace
- Establish oversight and governance
- Align cyber security and ethical AI
- Reduce bias
- Increase transparency
To monitor and remove bias in AI, companies must ensure that algorithms match corporate values and meet ethics, compliance, security, and quality standards. Where bias can have a negative social impact, companies should commission independent assessments of those models.
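The report does not name a specific bias test, but a common starting point for this kind of independent assessment is comparing outcome rates across groups. The sketch below uses invented decisions and the widely cited four-fifths rule as a threshold; neither is taken from the KPMG report.

```python
# Illustrative only: a simple disparate-impact check across two groups.
# The decisions, group names, and 0.8 threshold (the "four-fifths rule") are assumptions.
from collections import defaultdict

decisions = [  # (group, approved) pairs from a hypothetical model's output
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])               # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print("approval rates:", rates)
print("disparate-impact ratio:", round(ratio, 2))
if ratio < 0.8:                                    # four-fifths rule of thumb
    print("flag for independent review: approval rates differ substantially across groups")
```

A ratio well below 0.8 does not prove discrimination, but it is the kind of signal that should trigger the independent assessment the authors describe.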
Heightened security concerns
Managers are beginning to understand the security risks surrounding AI, namely “adversaries poisoning algorithms by tampering with training data,” which can harm privacy and introduce bias.
Seventy-two percent of CEOs said that strong cyber security is vital for building trust in AI systems, compared to just 15% last year. Healthcare and finance leaders are most concerned about ethics in AI, according to research by KPMG conducted with 750 industry insiders in October 2019.
SEE: AI in healthcare: a guide for insiders (free PDF)
The KPMG authors recommend taking these steps to build security into AI; a sketch of how these steps might be made auditable follows the list:
- Determine who trained the algorithms
- Track the origin of the data and any changes to it
- Continuously assess and confirm an algorithm's effectiveness and accuracy
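One way to make these steps auditable in practice is to record who trained a model, a fingerprint of the training data, and a running accuracy check. The sketch below is an assumption about what such an audit trail might look like; the file layout and field names are invented, not part of the KPMG report.

```python
# Illustrative only: recording training provenance and re-checking accuracy over time.
# The file layout and field names are assumptions, not part of the KPMG report.
import datetime
import hashlib
import json

LOG_PATH = "model_audit_log.jsonl"

def data_fingerprint(rows: list) -> str:
    """Hash the training data so any later change to it is detectable."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def log_training_run(trained_by: str, rows: list, accuracy: float) -> dict:
    """Append one provenance record per training run: who, when, which data, how accurate."""
    record = {
        "trained_by": trained_by,
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "data_sha256": data_fingerprint(rows),
        "validation_accuracy": accuracy,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def accuracy_drifted(tolerance: float = 0.05) -> bool:
    """Flag the model for review if accuracy drops more than `tolerance` from the first run."""
    with open(LOG_PATH) as f:
        runs = [json.loads(line) for line in f]
    return runs[-1]["validation_accuracy"] < runs[0]["validation_accuracy"] - tolerance

# Invented example data and results
training_rows = [{"credit_score": 700, "approved": 1}, {"credit_score": 550, "approved": 0}]
log_training_run("data-science-team@example.com", training_rows, accuracy=0.91)
log_training_run("data-science-team@example.com", training_rows, accuracy=0.84)
print("re-assessment needed:", accuracy_drifted())
```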
A new report from KPMG recommends following these guidelines to increase transparency and reduce bias in artificial intelligence.
Image: KPMG
