In today’s society, artificial intelligence (A.I.) is mostly used for good. But what if it were not?
This is the question researchers from Collaborations Pharmaceuticals asked themselves when conducting experiments using an A.I. that was built to search for helpful drugs.
They therefore tweaked this A.I. to search for chemical weapons instead, and alarmingly, the machine-learning algorithm generated 40,000 candidates in just six hours, according to a paper published this month in the journal Nature Machine Intelligence.
“The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery,” wrote the researchers in their paper.
“We have spent decades using computers and A.I. to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life.”
The researchers said that even their work on Ebola and neurotoxins, which could have raised concerns about the potential negative implications of their machine learning models, had not set their alarm bells ringing. They were blissfully unaware of the damage they could inflict.
How did their experiment work?
Collaborations Pharmaceuticals had previously published computational machine learning models for toxicity prediction. All the researchers had to do was invert their methodology to seek out, rather than weed out, toxicity. What began as a thought exercise evolved into a computational proof of concept for designing biochemical weapons.
The experiment is a clear indication of why A.I. models need closer oversight, and why researchers must seriously weigh the potential consequences of their work.
The researchers described in their paper how the experiment came about: “The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection, the Spiez Laboratory, convenes the ‘convergence’ conference series set up by the Swiss government to identify developments in chemistry, biology and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions. Meeting every two years, the conferences bring together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories, to think through potential security implications and to consider how these implications can most effectively be managed internationally. The meeting convenes for three days of discussion on the possibilities of harm, should the intent be there, from cutting-edge chemical and biological technologies. Our drug discovery company received an invitation to contribute a presentation on how A.I. technologies for drug discovery could potentially be misused.”