Explainable artificial intelligence startup Diveplane Corp. is hoping to make an impact by putting the “humanity” back into AI after closing a new $25 million funding round today.
The Series A round was led by Shield Capital and included participation from Calibrate Ventures, L3Harris Technologies and Sigma Defense.
Diveplane is an emerging player in the field of explainable AI, which refers to systems that make it possible for humans to understand and trust the results produced by machine learning algorithms. Explainable AI is the practice of describing an AI model, its impact, how it reaches its decisions and any potential biases. It’s used to characterize model accuracy, fairness, transparency and outcomes in AI-powered decision-making processes.
The more advanced AI becomes, the harder it is for humans to comprehend and explain how a model reached its conclusion. Such models are often described as “black boxes” that are almost impossible to interpret. Indeed, many times not even the engineers or data scientists who created an AI algorithm can explain how it arrives at a specific result.
Given the bad publicity around AI bias, explainable AI is crucial for organizations that want to build trust and confidence in their models. For instance, if a mortgage company uses AI to inform its lending decisions, it will lose a lot of credibility if it cannot explain why applicants from certain demographic groups are less likely to be accepted. AI explainability has other advantages too, such as helping developers ensure their algorithms are working as expected.
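One common, model-agnostic way to peer inside a black-box model is permutation importance: shuffle a single input feature across a dataset and measure how often the model’s decisions flip. The more decisions flip, the more the model depends on that feature. The sketch below is a generic illustration of the idea, not Diveplane’s technique; the loan-approval rule and applicant data are invented for the example.

```python
import random

# Stand-in "black box": approves a loan when income is high enough and
# the debt ratio is low enough. In practice this would be a trained
# model whose internal logic is opaque.
def black_box_approve(income, debt_ratio):
    return income > 50_000 and debt_ratio < 0.4

# Hypothetical applicant records: (income, debt_ratio).
applicants = [(80_000, 0.2), (45_000, 0.1), (90_000, 0.5),
              (60_000, 0.3), (30_000, 0.6), (70_000, 0.35)]

baseline = [black_box_approve(i, d) for i, d in applicants]

def permutation_importance(feature_index, trials=200, seed=0):
    """Fraction of decisions that flip when one feature is shuffled.

    A larger fraction means the model leans more heavily on that
    feature -- a simple explanation of an otherwise opaque model.
    """
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        shuffled = [row[feature_index] for row in applicants]
        rng.shuffle(shuffled)
        for row, value, expected in zip(applicants, shuffled, baseline):
            perturbed = list(row)
            perturbed[feature_index] = value
            if black_box_approve(*perturbed) != expected:
                flips += 1
    return flips / (trials * len(applicants))

print("income importance:    ", permutation_importance(0))
print("debt-ratio importance:", permutation_importance(1))
```

Because the technique only queries the model’s inputs and outputs, it works on any model, which is what makes approaches like this attractive for auditing opaque systems.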
That’s where Diveplane comes in. It has created what it calls an “understandable AI decision system” named Reactor, which allows data scientists and engineers to build decision-making models from historical data observations and augment them with existing operations. With Diveplane, organizations can pinpoint potentially biased data, analyze it and remove it from the decision-making process, helping companies automate many of the repetitive tasks needed to keep their AI algorithms working as intended.
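Diveplane doesn’t disclose how Reactor detects biased data, but a minimal sketch of one widely used fairness audit — the “four-fifths rule” for disparate impact — gives a flavor of what such a check involves. The groups and decision records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical historical decisions: (demographic group, approved?).
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

def approval_rates(records):
    """Approval rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact(records):
    """Ratio of the lowest to the highest group approval rate.

    Under the common four-fifths guideline, a ratio below 0.8 flags
    the historical data for closer review before a model learns from it.
    """
    rates = approval_rates(records)
    return min(rates.values()) / max(rates.values())

print(approval_rates(decisions))      # group A: 2/3, group B: 1/3
print(disparate_impact(decisions))    # 0.5 -> below 0.8, flag for review
```

Real systems layer much more on top of a check like this, but the basic step — quantifying how outcomes differ across groups in the training data — is the starting point for removing biased observations.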
Diveplane says its explainable AI offering is designed around the principles of “predict, explain and show” in order to create more confidence that operational decisions are fair and transparent.
“We founded Diveplane with the mission of putting humanity back into AI, and we’re succeeding,” said Diveplane co-founder and Chief Executive Mike Capps. “We’re building trusted partnerships, with a product set that provides a holistic capability for fair and transparent decision making and data privacy. This support adds rocket fuel to our business, so we can build on our successful approach to helping companies innovate with our Reactor platform.”
Diveplane said the money from today’s round will be used to expand the capabilities of its platform while facilitating targeted growth. “Chris, Mike, and the Diveplane team are building a leading technology platform to employ the power of AI while protecting privacy and explainability,” said Shield Capital Managing Partner Raj Shah.