It is safe to say that the closest thing to human intelligence and abilities is artificial intelligence. Powered by machine learning, deep learning and neural networks, existing artificial intelligence models are capable of a remarkable range of things. However, do they dream or have psychedelic hallucinations the way humans do? Can the generative features of deep neural networks experience dream-like surrealism?
Neural networks are a type of machine learning system, focused on building trainable models for pattern recognition and predictive modeling. The network is made up of layers: the higher the layer, the more complex the features it interprets. Input data passes through all the layers, with the output of one layer fed into the next. Just as the neuron is the basic unit of the human brain, the perceptron forms the essential building block of a neural network. A perceptron accomplishes simple signal processing, and many perceptrons are then connected into a large mesh network.
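As a concrete illustration (a toy sketch, not drawn from the article), a single perceptron can be written in a few lines of plain Python and trained on the logical AND function; the variable names and learning rate here are made up for the example:

```python
# A perceptron: weighted sum of inputs, passed through a step activation.
def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0  # "fire" only if the sum crosses the threshold

# Classic perceptron learning rule: nudge weights by the prediction error.
def train(samples, labels, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # logical AND
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # prints [0, 0, 0, 1]
```

Each training step moves the weights in proportion to the error; wiring many such units together, layer after layer, gives the mesh network described above.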
The Generative Adversarial Network (GAN) is a type of neural network first introduced in 2014 by Ian Goodfellow. Its objective is to produce fake images that are as realistic as possible. GANs have accelerated the development of fake images: deepfakes. The ‘deep’ in deepfake is drawn from deep learning. To create deepfakes, neural networks are trained on multiple datasets. These datasets can be textual or audio-visual, depending on the type of content we want to generate. With enough training, the neural networks are able to create numerical representations of new content, such as a ‘deepfake image’. All that remains is to rewire the neural networks to map the image onto the target. Deepfakes can also be created using autoencoders, a type of unsupervised neural network; in fact, autoencoders are the primary type of neural network used in the creation of most deepfakes.
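To make the autoencoder idea concrete, here is a minimal sketch (illustrative code with made-up dimensions, not a production deepfake pipeline). Face-swap deepfakes typically train a shared encoder with one decoder per identity, so that swapping decoders maps one face onto another; the toy below captures just the compress-and-reconstruct core, training a tiny linear autoencoder with NumPy to squeeze 4-dimensional data through a 2-dimensional latent code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data that secretly lives in 2 dimensions, embedded in 4.
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 4))
X = latent @ mix

enc = rng.normal(scale=0.1, size=(4, 2))  # encoder: 4-D input -> 2-D code
dec = rng.normal(scale=0.1, size=(2, 4))  # decoder: 2-D code -> 4-D output
lr = 0.01
for _ in range(3000):
    Z = X @ enc          # encode
    R = Z @ dec          # decode (reconstruction)
    err = R - X
    # Gradient steps on the reconstruction error (constants folded into lr).
    dec -= lr * Z.T @ err / len(X)
    enc -= lr * X.T @ (err @ dec.T) / len(X)

loss = float(np.mean((X @ enc @ dec - X) ** 2))
print(loss)  # small: the 2-D bottleneck suffices to reconstruct the data
```

Because the network must reproduce its input through a narrow bottleneck, the code it learns is a compact numerical representation of the content, which is exactly what a deepfake pipeline manipulates.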
In 2015, a mysterious photo appeared on Reddit showing a monstrous mutant. This photo was later revealed to be the result of a Google artificial neural network. Many pointed out that this inhuman, scary-looking photo bore a striking resemblance to what one sees on psychedelic substances such as mushrooms or LSD. Basically, Google engineers decided that instead of asking the software to generate a specific image, they would simply feed it an arbitrary image and then ask it what it saw.
As per an abstract on Popular Science, Google used the artificial neural network to amplify patterns it saw in pictures. Each artificial neural layer works on a different level of abstraction: some picked up edges based on tiny levels of contrast, while others found shapes and colors. The engineers ran this process to accentuate color and form, and then told the network to go buck wild and keep accentuating anything it recognized. In the lower levels of the network, the results were similar to Van Gogh paintings: images with curving brush strokes, or images run through Photoshop filters. After running these images over and over through the higher levels, which recognize full images such as dogs, leaves transformed into birds and insects, and mountain ranges transformed into pagodas and other disturbing, hallucinatory images.
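The core of this "keep accentuating anything it recognizes" loop can be sketched in toy form (hypothetical code; the real DeepDream system works on deep convolutional networks, not a single hand-picked filter): score how strongly a layer responds to the image, then repeatedly nudge the image along the gradient of that score so the features it already detects get exaggerated:

```python
import numpy as np

def layer_response(img, kernel):
    """Sum of squared filter activations over all valid positions."""
    h, w = kernel.shape
    total = 0.0
    for i in range(img.shape[0] - h + 1):
        for j in range(img.shape[1] - w + 1):
            total += float(np.sum(img[i:i+h, j:j+w] * kernel)) ** 2
    return total

def dream_step(img, kernel, lr=0.001):
    """Gradient ascent on the response: amplify what the filter sees."""
    h, w = kernel.shape
    grad = np.zeros_like(img)
    for i in range(img.shape[0] - h + 1):
        for j in range(img.shape[1] - w + 1):
            act = float(np.sum(img[i:i+h, j:j+w] * kernel))
            grad[i:i+h, j:j+w] += 2 * act * kernel  # d(act^2)/d(patch)
    return img + lr * grad

rng = np.random.default_rng(1)
img = rng.normal(scale=0.1, size=(8, 8))   # stand-in for an arbitrary image
edge = np.array([[1.0, 0.0, -1.0]] * 3)    # a fixed vertical-edge "layer"
before = layer_response(img, edge)
for _ in range(20):
    img = dream_step(img, edge)
after = layer_response(img, edge)          # response grows with every pass
```

Swap the fixed edge filter for a layer that responds to dogs or pagodas, and the same ascent loop is what turns leaves into birds and mountains into temples.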
A few years ago, Google’s AI company, DeepMind, was working on a new technology that allows robots to dream in order to improve their rate of learning.
In a new article published in the scientific journal Neuroscience of Consciousness, researchers demonstrate how “classic” psychedelic drugs such as DMT, LSD, and psilocybin selectively change the function of serotonin receptors in the nervous system. To do this, they gave virtual versions of the substances to neural network algorithms to see what happens.
Scientists from Imperial College London and the University of Geneva managed to recreate DMT hallucinations by tinkering with powerful image-generating neural nets so that their usually photorealistic outputs became distorted blurs. Surprisingly, the results were a close match to how people have described their DMT trips. As per Michael Schartner, a member of the International Brain Laboratory at Champalimaud Centre for the Unknown in Lisbon, “The process of generating natural images with deep neural networks can be perturbed in visually similar ways and may offer mechanistic insights into its biological counterpart — in addition to offering a tool to illustrate verbal reports of psychedelic experiences.”
The objective was to better uncover the mechanisms behind the trippy visions.
One basic difference between the human brain and a neural network is that our neurons communicate in a multi-directional manner, unlike the feed-forward mechanism of Google’s neural network. Hence, what we see is a combination of visual data and our brain’s best interpretation of that data. This is also why our brain tends to fail in the case of optical illusions. Furthermore, under the influence of drugs, our ability to perceive visual data is impaired, and hence we tend to see psychedelic, morphed images.
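The contrast can be sketched with toy code (hypothetical functions and numbers, purely for illustration): a feed-forward network computes its output in a single pass, while a feedback system keeps revising its own state using both the input and its current interpretation, closer to how the brain works:

```python
def feed_forward(x, layers):
    # One pass: each layer's output feeds the next, then we are done.
    for layer in layers:
        x = layer(x)
    return x

def with_feedback(state, signal, update, steps=10):
    # The state is revised again and again, using the input *and* itself.
    for _ in range(steps):
        state = update(state, signal)
    return state

double = lambda v: 2 * v
inc = lambda v: v + 1
blend = lambda s, x: 0.5 * s + 0.5 * x  # mix interpretation with raw input

print(feed_forward(3, [double, inc]))     # (3 * 2) + 1 = 7
print(with_feedback(0.0, 8.0, blend))     # settles toward the input, ~8.0
```

In the feedback version, what comes out is never the raw signal alone but the system's running interpretation of it, which is the loop that drugs are thought to perturb.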
While we have found an answer to ‘Do Androids Dream of Electric Sheep?’, the question posed by Philip K. Dick, an American sci-fi novelist (the answer being ‘No!’, since artificial intelligence has far more bizarre dreams), we are yet to uncover the answers about our own dreams. Once we achieve that, we can program neural models to produce visual output, or deepfakes, exactly as we expect. We may also, in the process, solve the mystery behind black-box decisions.