Building AI ethically is more pertinent now than ever, as AI is being applied across more sectors. Companies are no longer using AI only to recommend the next product to us; they are also deploying it in risk-sensitive areas. The extent to which machine learning is used in safety-critical applications today has only raised the stakes for ethical AI.
Back in 1967, we did not have machines and software making decisions for us, nor were there bots deciding whether someone should be given a loan. Nearly six decades on, the famous trolley dilemma, posed by the philosopher Philippa Foot, remains unanswered. If we have not been able to resolve this dilemma from a human perspective, how can we expect machines to?
Praveen Prakash, Co-Founder and CTO of Simbo.AI, feels that the more AI matures, the more it impacts our everyday lives.
“Growing complexity of systems and processes in business and governance, as well as the growing volume of our personal online and offline interactions – they all need AI solutions for better management. The ethics in AI, therefore, are hugely important,” he said.
While the answer to ‘what is ethical’ varies for every industry, at its core it comes down to privacy, morality, transparency, security, and solidarity. Ethics for AI covers both the purpose of AI’s deployment (healthcare or warfare) and the fairness of the AI’s decision-making.
Responsibility for how AI is deployed rests with the organization or government putting the technology to use. AI is a tool like any other, neither inherently good nor bad; what matters is the actors and their intents. AI is helping improve people’s lives in healthcare and governance. It is also being used online for cheating, forgery, and sowing discord, as well as for advanced offensive weaponry.
But for people to trust AI and ML models, we need to make them more ethical. While AI and ML are being used to bridge gaps in many sectors, we do not yet trust these models enough to give them the power to make life-and-death decisions.
“Covid has become a raging issue for the last few years. I am not sure if ML was used to make decisions. There have been peripheral issues where ML was used but not in the cases where it involved a risk of life. Imagine when the world was going through such a crisis, ML which is widely spoken of was not used. Meaning we do not yet trust AI and ML when it comes to making decisions for life,” said Vineeth N Balasubramanian, Head, Department of Artificial Intelligence, Indian Institute of Technology, Hyderabad.
As AI decisions influence and impact people’s lives at scale, it is crucial that organizations take a proactive approach to designing AI responsibly. Architecting and deploying AI models that are trustworthy, fair and explainable is key.
The conversation about ethics and responsible AI is still evolving, and nobody has a definitive answer on how to move forward. However, there are some established best practices to keep in mind while designing and working with AI and ML.
“Organizations need to adopt proven qualitative and quantitative techniques to assess potential risks and mitigate bias in AI models. Deploying the right set of tools and establishing practices to thoroughly and continuously investigate sources of bias and understand the trade-offs and impacts of fairness decisions is critical,” said Mahesh Zurale, Senior Managing Director, Lead – Advanced Technology Centers in India, Accenture.
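The article does not specify which quantitative techniques Zurale has in mind. As one illustrative sketch (not drawn from the article), a common quantitative fairness check is the demographic parity difference: comparing the rate of favorable model outcomes across demographic groups, where a large gap can signal bias worth investigating.

```python
def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs (1 = favorable outcome)
    groups: list of group labels, same length as predictions
    """
    # Tally (total, positives) per group.
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    # Positive-outcome rate for each group.
    positive_rates = [p / t for t, p in rates.values()]
    # 0.0 means all groups receive favorable outcomes at the same rate.
    return max(positive_rates) - min(positive_rates)

# Toy example: group "a" is approved 75% of the time, group "b" only 25%.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

In practice, teams would compute several such metrics continuously on production data, since a single number cannot capture all the fairness trade-offs the quote alludes to.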