By adding a few pixels (highlighted in red) to a legitimate check, fraudsters can trick artificial intelligence models into mistaking a $401 check for one worth $701. Undetected, the exploit could lead to large-scale financial fraud.
Yaron Singer climbed the tenure track ladder to a full professorship at Harvard in seven years, fueled by his work on adversarial machine learning, a way to fool artificial intelligence models using misleading data. Now, Singer’s startup, Robust Intelligence, which he formed with a former Ph.D. advisee and two former students, is emerging from stealth to take his research to market.
This year, artificial intelligence is set to account for $50 billion in corporate spending, though companies are still figuring out how to integrate the technology into their business processes. They are also still working out how to protect their good AI from bad AI, such as an algorithmically generated voice deepfake that can spoof voice-authentication systems.
“In the early days of the internet, it was designed like everybody’s a good actor. Then people started to build firewalls because they discovered that not everybody was,” says Bill Coughran, former senior vice president of engineering at Google. “We’re seeing signs of the same thing happening with these machine learning systems. Where there’s money, bad actors tend to come in.”
Enter Robust Intelligence, a new startup led by CEO Singer with a platform that the company says is trained to detect more than 100 types of adversarial attacks. Though its founders and most of the team hold a Cambridge pedigree, the startup has established headquarters in San Francisco and announced Wednesday that it had raised $14 million in a seed and Series A round led by Sequoia. Coughran, now a partner at the venture firm, is the lead investor on the fundraise, which also comes with participation from Engineering Capital and Harpoon Ventures.
Robust Intelligence CEO Yaron Singer is taking a leave from Harvard, where he is a professor of computer science and applied mathematics.
Singer followed his Ph.D. in computer science from the University of California, Berkeley by joining Google as a postdoctoral researcher in 2011. He spent two years working on algorithms and machine-learning models to make the tech giant’s products run faster, and saw how easily AI could go off the rails with bad data.
“Once you start seeing these vulnerabilities, it gets really, really scary, especially if we think about how much we want to use artificial intelligence to automate our decisions,” he says.
Fraudsters and other bad actors can exploit the relative inflexibility of artificial intelligence models in processing unfamiliar data. For example, Singer says, a check for $401 can be manipulated by adding a few pixels that are imperceptible to the human eye yet cause the AI model to read the check erroneously as $701. “If fraudsters get their hands on checks, they can hack into these apps and start doing this at scale,” Singer says. Similar modifications to data inputs can lead to fraudulent financial transactions, as well as spoofed voice or facial recognition.
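The pixel attack Singer describes can be illustrated with a toy model. The sketch below uses made-up numbers and a deliberately simple linear classifier, not any real check-reading system: nudging each "pixel" slightly in the direction of the corresponding weight's sign (the idea behind the fast gradient sign method) is enough to flip the digit the model reads.

```python
import numpy as np

# Toy linear "digit reader": score > 0 reads the digit as 7, otherwise 4.
# Weights and pixel values are invented for illustration only.
w = np.array([0.5, -0.3, 0.8, -0.2])   # model weights, one per pixel
x = np.array([0.2, 0.7, 0.1, 0.9])     # "pixels" of a legitimate 4

def read_digit(image):
    return 7 if w @ image > 0 else 4

# Gradient-sign-style perturbation: push every pixel a small amount
# in the direction that raises the model's score.
eps = 0.2
delta = eps * np.sign(w)

print(read_digit(x))          # 4  (clean image)
print(read_digit(x + delta))  # 7  (perturbed image)
```

Because the perturbation is bounded per pixel (here 0.2), the altered image can remain visually indistinguishable from the original while the model's output changes.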
In 2013, upon taking an assistant professor position at Harvard, Singer decided to focus his research on devising mechanisms to secure AI models. Robust Intelligence comes from nearly a decade in the lab for Singer, during which time he worked with three Harvard students who would become his cofounders: Eric Balkanski, a Ph.D. student advised by Singer; Alexander Rilee, a graduate student; and undergraduate Kojin Oshiba, who coauthored academic papers with the professor. Across 25 papers, Singer’s team broke ground on designing algorithms to detect misleading or fraudulent data, and helped bring the issue to government attention, even receiving an early DARPA grant to conduct its research. Rilee and Oshiba remain involved with the day-to-day activities at Robust, the former on government and go-to-market, and the latter on security, technology and product development.
Robust Intelligence is launching with two products, an AI firewall and a “red team” offering, in which Robust functions like an adversarial attacker. The firewall works by wrapping around an organization’s existing AI model to scan for contaminated data via Robust’s algorithms. The other product, called Rime (or “Robust Intelligence Machine Engine”), performs a stress test on a customer’s AI model by inputting basic mistakes and deliberately launching adversarial attacks on the model to see how it holds up.
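The firewall concept can be sketched in a few lines. This is a hypothetical illustration, not Robust Intelligence's actual detection algorithm (which is not public): a wrapper screens each input against the model's training distribution and blocks anything that deviates too far before the model ever sees it.

```python
import numpy as np

class AIFirewall:
    """Hypothetical sketch: wrap an existing model and screen its inputs.

    The anomaly score here (max per-feature z-score against the training
    data) is a stand-in for whatever detection a real firewall would use.
    """

    def __init__(self, model, train_data, threshold=3.0):
        self.model = model
        self.mean = train_data.mean(axis=0)
        self.std = train_data.std(axis=0) + 1e-9  # avoid division by zero
        self.threshold = threshold

    def predict(self, x):
        # Flag inputs that sit far outside the training distribution.
        z = np.abs((x - self.mean) / self.std).max()
        if z > self.threshold:
            return None  # blocked: input looks adversarial or corrupted
        return self.model(x)

# Example: a trivial model guarded by the firewall.
train_data = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5],
                       [1.0, 0.0], [0.0, 1.0]])
fw = AIFirewall(lambda v: int(v.sum() > 1.0), train_data)

print(fw.predict(np.array([0.6, 0.6])))  # in-distribution: model answers 1
print(fw.predict(np.array([9.0, 9.0])))  # far out-of-distribution: None
```

The design choice mirrors the article's framing: the wrapper leaves the customer's existing model untouched and intervenes only at the input boundary, the same place a network firewall sits.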
The startup is currently working with about ten customers, says Singer, including a major financial institution and a leading payment processor, though Robust declines to name them, citing confidentiality. Launching out of stealth, Singer hopes to gain more customers as well as double the size of the team, which currently stands at 15 employees. Singer, who is on leave from Harvard, is noncommittal about his future in academia, but says he is focused on his CEO role in San Francisco for the moment.
“For me, I’ve climbed the mountain of tenure at Harvard, but now I think we’ve found an even higher mountain, and that mountain is securing artificial intelligence,” he says.