Rooting out AI bias – Axios

New research offers strategies to prevent algorithms used in business from pushing unethical policies.

Why it matters: Machine-learning algorithms are increasingly being deployed in commercial settings. If they are optimized only to seek maximum revenue, they can end up treating customers in unethical ways, putting companies at reputational or even regulatory risk.

How it works: In a paper published on Wednesday in Royal Society Open Science, researchers formulated what they call the "Unethical Optimization Principle."

It essentially boils down to the idea that "if there is an advantage to something that will be perceived as unethical, then it is quite likely the machine learning is going to find it," says Robert MacKay, a mathematician at the University of Warwick and an author of the paper.

MacKay uses the example of an algorithm that prices insurance products. If it is optimized only to maximize revenue, it is likely to treat customers unfairly and even unlawfully, for example by selecting a higher price for users whose names code as non-white. (A minimal sketch of this dynamic appears below.)

In their paper, MacKay and his colleagues lay out the mathematics that can help businesses and regulators detect the unethical strategies an algorithm might pursue in a given strategy space and identify how the AI should be modified to prevent that behavior.

The big picture: As increasingly sophisticated algorithms take more decisions out of the hands of humans, it becomes even more important for programmers to set clear limits up front.

Unfortunately, as a new survey from the data science platform Anaconda shows, while data scientists are increasingly concerned about the ethical implications of their work, 39% of those polled say their team has no plans in place to address fairness or bias.

"Businesses using algorithms need to ask questions of 'ought,' rather than just 'can,'" says Peter Wang, Anaconda's CEO.
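To make the principle concrete, here is a minimal, hypothetical Python sketch (not taken from the paper): it simulates a strategy space in which a small fraction of pricing strategies are unethical but carry a slightly higher expected revenue, then compares an optimizer that maximizes revenue alone with one that first excludes the unethical region. The strategy counts, revenue figures, and the "unethical" flag are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical strategy space: each strategy has an expected revenue and a flag
# marking whether it relies on an unethical pricing rule (e.g., a proxy for race).
# All numbers here are illustrative assumptions, not values from the paper.
strategies = []
for i in range(1000):
    unethical = random.random() < 0.05          # a small fraction of strategies are unethical
    base_revenue = random.gauss(1.00, 0.05)     # ethical strategies: mean revenue 1.00
    bonus = 0.08 if unethical else 0.0          # unethical strategies carry a slight edge
    strategies.append({"id": i,
                       "revenue": base_revenue + bonus,
                       "unethical": unethical})

# Naive optimization: pick the strategy with maximum revenue and nothing else.
best_naive = max(strategies, key=lambda s: s["revenue"])

# Constrained optimization: remove the unethical region of the strategy space first.
ethical_only = [s for s in strategies if not s["unethical"]]
best_constrained = max(ethical_only, key=lambda s: s["revenue"])

print("Revenue-only optimum is unethical:", best_naive["unethical"])
print("Constrained optimum is unethical:", best_constrained["unethical"])
```

Because the unethical strategies have a built-in advantage, the revenue-only optimizer is quite likely to land on one of them, which is the point of the principle: if the unethical region is not explicitly excluded or penalized in the objective, the optimizer should be expected to find it.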
Source: https://www.axios.com/ai-bias-machine-learning-2515d1a4-650d-4cd3-8e18-84c29bcacd1e.html