According to a DataRobot report, nearly half of AI professionals are concerned about AI bias, yet many organizations still use unreliable AI systems.
Salesforce's new Einstein Analytics products for financial services, plus new tools to combat AI bias
Allison Witherspoon of Salesforce spoke to TechRepublic about the new industry-specific Einstein Analytics products and the new Trailhead modules designed to help developers tackle AI bias.
As artificial intelligence (AI) makes its way into companies, many IT professionals are beginning to express concerns about possible AI bias in the systems they use.
A new DataRobot report finds that nearly half (42%) of US and UK AI professionals are “very” to “extremely” concerned about AI bias.
The report, based on a survey conducted last June of more than 350 US and UK-based CIOs, CTOs, VPs, and IT managers involved in AI and machine learning (ML) purchasing decisions, also found that “damaged brand reputation” and “loss of customer trust” are the most worrying consequences of AI bias. Those concerns led 93% of respondents to say they intend to invest more in AI bias prevention in the next 12 months.
SEE: The ethical challenges of AI: a guide manual (free PDF) (TechRepublic)
Although many organizations view AI as a game changer, many still use unreliable AI systems, said Ted Kwartler, vice president of trusted AI at DataRobot.
He said the survey finding that 42% of executives are very concerned about AI bias is no surprise, “given the high-profile missteps organizations have had with AI. Organizations must ensure that AI methods are consistent with their organizational values,” said Kwartler. “Understanding the many steps required in an AI implementation, so that your training data does not have a hidden bias, helps organizations remain responsive later in the workflow.”
The DataRobot study found that while most organizations (71%) currently rely on AI to perform up to 19 business functions, 19% use AI to manage as many as 20 to 49 functions, and 10% use the technology to handle more than 50 functions.
Although managing AI-driven functions within a company can be valuable, it can also present challenges, according to the DataRobot report. “Not all AI is created equal, and without the right knowledge or resources, companies could select or deploy AI in ways that could be more harmful than beneficial.”
The survey found that more than a third (38%) of AI professionals still use black-box AI systems, meaning they have little or no insight into how data inputs are used in their AI solutions. This lack of visibility can contribute to respondents’ concerns about AI bias occurring within their organizations, DataRobot said.
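For teams that want more visibility than a black-box system offers, model-agnostic inspection techniques are one common starting point. Below is a minimal sketch, assuming a scikit-learn-style classifier on tabular data (the model and dataset are synthetic stand-ins, not from the report), of using permutation importance to estimate how heavily a model leans on each input:

```python
# Minimal sketch: estimating how heavily a black-box model leans on each
# input via permutation importance (model-agnostic). The model and data
# below are synthetic stand-ins for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a production training set.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model depends heavily on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Features whose shuffling barely moves accuracy contribute little to predictions; a surprisingly influential sensitive attribute is an early warning sign worth investigating.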
AI bias occurs because “we make decisions based on incomplete data in trusted retrieval systems,” said Sue Feldman, president of the cognitive computing and content analytics consultancy Synthexis. “Algorithms all make assumptions about the world and the priorities of the user. That means that unless you understand these assumptions, you will still be flying blind.”
That is why it is important to use systems that keep people in the loop, rather than making decisions in a vacuum, added Feldman, who is also co-founder and director of the Cognitive Computing Consortium. Such systems are “an improvement on fully automatic systems,” she said.
SEE: Management of AI and ML in the company 2019: technical leaders expect more difficulties than previous IT projects (TechRepublic Premium)
How to reduce AI bias
According to Gartner, bias based on race, gender, age or location, and bias based on a specific data structure have long been a risk when training AI models.
In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that are difficult to interpret, the firm said.
By 2023, 75% of large organizations will hire AI behavior forensics, privacy, and customer trust specialists to reduce brand and reputation risk, Gartner predicts.
“New tools and skills are needed to help organizations identify these and other potential sources of bias, build confidence in the use of AI models, and reduce corporate brand and reputation risk,” said Jim Hare, a research vice president at Gartner, in a statement.
“More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators,” Hare said.
Gartner said organizations such as Facebook, Google, Bank of America, MassMutual, and NASA are already hiring, or have already appointed, AI behavior forensic specialists to focus on uncovering unwanted bias in AI models.
According to McKinsey, if AI is to reach its potential and increase human trust in these systems, steps must be taken to minimize bias. These include:
- Be aware of the contexts in which AI can help correct bias, and those in which there is a high risk that AI will aggravate it
- Establish processes and practices to test for and mitigate bias in AI systems (a minimal example of such a test is sketched after this list)
- Conduct fact-based conversations about bias in human decisions
- Explore how humans and machines can best work together
- Invest more in bias research, and make more data available for research while respecting privacy
- Invest more in diversifying the AI field
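As a concrete illustration of the second item above, one simple automated test is to compare a model's positive-prediction (selection) rate across demographic groups. The sketch below is a minimal example; the predictions, group labels, and threshold are hypothetical, and real audits use richer fairness metrics:

```python
# Minimal sketch of one bias test: compare positive-prediction
# (selection) rates across groups. The predictions, group labels, and
# the 0.1 threshold are hypothetical illustrations, not a standard.
import numpy as np

def selection_rate_gap(y_pred, groups):
    """Return per-group selection rates and the largest pairwise gap."""
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical binary model outputs for two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates, gap = selection_rate_gap(y_pred, groups)
print(f"selection rates: {rates}, gap: {gap:.2f}")
if gap > 0.1:  # threshold chosen for illustration only
    print("Warning: selection rates differ across groups; review the model.")
```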
The DataRobot survey showed that 83% of all AI professionals say that, to combat instances of AI bias, they have established AI guidelines to ensure AI systems are properly maintained and deliver accurate, trusted results. In addition:
- 60% have set up alerts to flag when data and results deviate from the training data (a minimal sketch of such a drift alert appears below)
- 59% measure AI decision factors
- 56% use algorithms to detect and mitigate hidden bias in the training data
That last statistic surprised Kwartler. “I’m worried that only about half of executives have algorithms to detect hidden bias in training data.”
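The drift alerts in the first bullet above can be approximated with standard statistical tests. The following minimal sketch, assuming SciPy is available, uses a two-sample Kolmogorov-Smirnov test to flag when a feature's live distribution diverges from its training distribution; the data and the 0.05 significance level are illustrative:

```python
# Minimal sketch of a data-drift alert: compare a feature's live
# distribution against its training distribution with a two-sample
# Kolmogorov-Smirnov test. The data and the 0.05 alpha are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training data
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)   # shifted production data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:  # illustrative significance level; tune per feature
    print(f"Drift alert: KS statistic={stat:.3f}, p-value={p_value:.2e}")
else:
    print("No significant drift detected.")
```

In practice, a check like this would run per feature on a schedule, with thresholds tuned to each feature's natural variability to avoid alert fatigue.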
Cultural differences were also discovered between American and British respondents to the DataRobot study.
Although US respondents are most concerned about emergent bias (bias that arises from a mismatch between the user and the system), UK respondents are more concerned about technical bias (bias resulting from technical constraints), the research found.
To improve AI bias prevention efforts, 59% of respondents say they plan to invest in more sophisticated white-box systems, 54% say they will hire internal staff to manage AI trust, and 48% say they intend to engage external vendors to monitor AI trust, according to the study.
The 48% figure should be higher, said Kwartler. “Organizations need to own and internalize their AI strategy, as this helps them ensure that the AI models are consistent with their values. For each business context and industry, models need to be evaluated before and after implementation to mitigate risk,” he said.
Apart from those AI bias prevention measures, 85% of all global respondents believe AI regulation would be useful to define what constitutes AI bias and how to prevent it, according to the report.
Image: iStockphoto / PhonlamaiPhoto