Here’s What the “Dreams” of Google’s Artificial Intelligence Look Like

Google’s innovative DeepDream software turns artificial neural networks inside out to reveal how computers ‘think’.
What if computers had the ability to dream? In a sense, they can.
When artificial neural networks at Google began producing surreal images from otherwise standard photos, engineers compared what they saw to dreamscapes. Their image-generation method was termed Inceptionism, and the code that powered it was called DeepDream.
Wikipedia says, “DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like hallucinogenic appearance in the deliberately over-processed images.”
Scrolls of color, spinning shapes, stretched faces, swirling eyeballs, and awkward patterns of shadow and light feature in the computer-generated images. The computers seemed to be hallucinating, in an astonishingly human manner. The aim of the project was to see how well a neural network could identify different animals and environments by having the machine describe what it observed.
So, what is really going on inside these dreaming neural networks, and what does it mean for the future of artificial intelligence?
The result reveals a lot about where artificial intelligence is headed, as well as why it could be more imaginative, ambiguous, and unpredictable than we’d like.
The Google artificial neural network is modeled on the central nervous system of animals and functions as a kind of computer brain. When engineers feed a picture to the network, the first layer of ‘neurons’ examines it. This layer then communicates with the next layer, which attempts to represent the image. The process continues for 10 to 30 rounds, with each layer identifying and isolating key elements until the picture can be interpreted. The network then reports what object it believes it has found, even when its best effort falls short. This is how the network recognizes images.
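For readers who want to see this layer-by-layer recognition pass in code, here is a minimal sketch. It uses a pretrained VGG16 from torchvision as a stand-in, since the article does not name Google’s exact network, and the filename photo.jpg is a placeholder.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Illustrative stand-in for Google's network (the article does not name the
# exact model); VGG16 is a classic image-recognition convolutional network.
model = models.vgg16(pretrained=True).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder file

# Early layers respond to edges and textures; deeper layers respond to ever
# more abstract shapes, until the final layers put a name to the object.
x = img
for layer in model.features:            # the convolutional "rounds"
    x = layer(x)
x = torch.flatten(model.avgpool(x), 1)  # collapse the feature maps
logits = model.classifier(x)            # the final "what is it?" answer
print(logits.argmax(dim=1))             # index of the most likely class
```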
After that, the Google team realized they could reverse the procedure. They hoped to learn which features the network had learned to recognize and which it hadn’t, by giving it complete freedom and asking it to interpret and “improve an input picture in such a way as to evoke a specific interpretation.”
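A hedged sketch of what “reversing the procedure” can look like in practice is shown below. The idea, often described as gradient ascent on the input, keeps the network fixed and nudges the pixels so that a chosen layer responds more strongly; the layer index and step size here are illustrative assumptions, not values from the article.

```python
import torch
import torchvision.models as models

# The network itself stays fixed; only the input image is updated.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

def dream_step(img, layer_idx=20, step=0.05):
    """Nudge img so that layer `layer_idx` responds more strongly to it."""
    img = img.clone().requires_grad_(True)
    x = img
    for i, layer in enumerate(model.features):
        x = layer(x)
        if i == layer_idx:
            break
    loss = x.norm()              # "how strongly does this layer fire?"
    loss.backward()
    with torch.no_grad():
        # Normalised gradient ascent: push the pixels toward whatever the
        # layer already half-recognises in them.
        img += step * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

# Repeated over a photo tensor of shape (1, 3, H, W), faint patterns the
# network thinks it sees get amplified into dream-like images:
# for _ in range(20):
#     img = dream_step(img)
```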
What happened next was quite remarkable. The researchers discovered that these neural networks could not only distinguish between different images but also knew enough to produce images of their own, culminating in unexpected computational representations. When the team asked for common objects such as insects and bananas, for example, the network responded with strikingly unusual images of them.
According to IFL Science, computers can see images within objects in a way that artists can only dream of replicating. The network sees buildings within clouds, temples in trees, and birds in leaves. Highly detailed elements seem to pop up out of nowhere. One processed image of a cloudy sky shows just how readily Google’s artificial neural network finds pictures in the clouds.
This technique, which creates images where there aren’t any, is aptly called ‘Inceptionism.’ There is an Inceptionism gallery where you can explore the computer’s artwork.
Finally, the designers gave the computer full, free rein over its artwork. The final pieces were beautiful pictures derived from a mechanical mind, which the engineers call ‘dreams.’ The ‘blank canvas’ was simply an image of white noise. The computer pulled patterns out of the noise and created dreamscapes: pictures that could only come from an infinite imagination.
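In code, the ‘blank canvas’ experiment amounts to swapping the photograph for random noise. The snippet below is a usage sketch that reuses the hypothetical dream_step function from the earlier example.

```python
import torch

# "Blank canvas": start from white noise instead of a photograph.
# dream_step is the hypothetical update function sketched earlier.
img = torch.rand(1, 3, 224, 224)   # random noise, values in [0, 1)
for _ in range(200):               # many small steps let patterns emerge
    img = dream_step(img)
```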

Source: https://www.analyticsinsight.net/heres-what-the-dreams-of-googles-artificial-intelligence-look-like/