Machine learning gives us a dog’s-eye view

Daisy takes her place in the fMRI scanner. Her ears are taped to hold in ear plugs that muffle the noise. Credit: Emory Canine Cognitive Neuroscience Lab.

Dogs’ minds are being read! Sort of.

Researchers have used fMRI (functional magnetic resonance imaging) scans of dogs’ brains and a machine learning tool to reconstruct what the pooch is seeing. The results suggest that dogs are more interested in what is happening than who or what is involved.

The results of the experiment, conducted at Emory University in Georgia, US, are published in the Journal of Visualized Experiments.

Two unrestrained dogs were shown three 30-minute videos. The fMRI neural data was recorded, and a machine-learning algorithm was employed to analyse the patterns in the scans.

“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” says Gregory Berns, professor of psychology at Emory. “The fact that we are able to do that is remarkable.”

Using fMRI to study perception in this way has only recently been developed, first in humans and so far in just a few other species, including some primates.

“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” says lead author Erin Phillips, from Scotland’s University of St. Andrews, who conducted the research as a specialist in Berns’s Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”

Machine learning, interestingly enough, is technology that finds patterns by analysing huge amounts of data; the artificial neural networks it often relies on loosely mimic the networks of neurons in our own brains.

The technology “reads minds” by detecting patterns in the brain data that can be associated with what is playing in the video.
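For readers curious what that looks like in practice, here is a minimal sketch of the general idea, not the study’s actual pipeline: a classifier learns a mapping from brain-activity patterns to labels describing the video. The data, label names and use of scikit-learn are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of fMRI decoding (not the study's code):
# learn a mapping from brain-activity patterns to video-content labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: one row of voxel activations per fMRI volume, and one
# label per volume recording what the video showed at that moment.
n_volumes, n_voxels = 600, 2000
X = rng.normal(size=(n_volumes, n_voxels))   # fake brain data
y = rng.integers(0, 3, size=n_volumes)       # 0=sniffing, 1=eating, 2=walking

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Mind reading" here means predicting the video label from unseen brain data.
print("decoding accuracy:", decoder.score(X_test, y_test))
```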

Using a video recorder mounted on a selfie stick held at dog eye level, the researchers filmed scenes relatable to their canine audience.

Recorded activities included dogs being petted by and receiving treats from people.

Scenes with dogs showed them sniffing, playing, eating or walking. Other objects and animals appearing in the scenes included cars, bikes, scooters, cats and deer, as well as people sitting, hugging, kissing, offering a toy to the camera and eating.

Time stamps on the videos helped classify them into objects (such as dog, car, human, cat) and actions (like sniffing, eating, walking).
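A toy sketch of what that labelling step might look like, assuming a simple Python representation; the segment times and labels are invented for illustration:

```python
# Hypothetical illustration of the time-stamped labelling: each video segment
# carries object labels (what is on screen) and action labels (what is
# happening), later aligned with the fMRI volumes. Times and labels invented.
segments = [
    {"start": 0.0, "end": 4.5,  "objects": ["dog"],          "actions": ["sniffing"]},
    {"start": 4.5, "end": 9.0,  "objects": ["dog", "human"], "actions": ["eating"]},
    {"start": 9.0, "end": 15.0, "objects": ["car"],          "actions": []},
]

def labels_at(t, kind):
    """Return the labels of the given kind ('objects' or 'actions') active at time t (seconds)."""
    for seg in segments:
        if seg["start"] <= t < seg["end"]:
            return seg[kind]
    return []

print(labels_at(5.0, "objects"))  # ['dog', 'human']
print(labels_at(5.0, "actions"))  # ['eating']
```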

Only two dogs exhibited the patience to sit through the feature-length film. For comparison, two humans also underwent the same experiment. Both species, presumably, were coaxed with treats and belly pats.

The machine-learning algorithm Ivis was applied to the data. Ivis was first trained on the human subjects, where the model was 99% accurate in mapping the brain data onto both the object and action classifiers.

In the case of the dogs, the model did not work for the object-based classifiers. It was, however, between 75% and 88% accurate in decoding the action classifiers in the dog fMRI scans.
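As a hedged sketch of what such an evaluation might look like: separate decoders for object labels and action labels, scored independently. The study used Ivis, a neural-network-based dimensionality reduction method; PCA stands in for it below, and the data is randomly generated, so the printed numbers are meaningless placeholders.

```python
# Hedged sketch of the evaluation step: train separate decoders for object
# labels and action labels and compare their accuracy. The study used Ivis;
# PCA stands in for the dimensionality reduction here, and the data is
# random, so the printed numbers are meaningless placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 2000))          # stand-in fMRI volumes
y_object = rng.integers(0, 4, size=600)   # e.g. dog / car / human / cat
y_action = rng.integers(0, 3, size=600)   # e.g. sniffing / eating / walking

decoder = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))

for name, y in [("object", y_object), ("action", y_action)]:
    acc = cross_val_score(decoder, X, y, cv=5).mean()
    print(f"{name} classifier accuracy: {acc:.2f}")
```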

Bhubo, shown with his owner Ashwin, prepares for his video-watching session in an fMRI scanner. The dog’s ears are taped to hold in ear plugs that muffle the noise of the fMRI scanner. Credit: Emory Canine Cognitive Neuroscience Lab.

“We humans are very object oriented,” says Berns. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”

Dogs see only in shades of blue and yellow, but have a slightly higher density of vision receptors designed for detecting motion than humans do.

“It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost,” Berns adds. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”

Phillips believes understanding how animals perceive the world is important in her own research into how predator reintroduction in Mozambique may impact ecosystems.

“Historically, there hasn’t been much overlap in computer science and ecology,” she says. “But machine learning is a growing field that is starting to find broader applications, including in ecology.”

Evrim Yazgin has a Bachelor of Science majoring in mathematical physics and a Master of Science in physics, both from the University of Melbourne.

Source: https://cosmosmagazine.com/technology/machine-learning-dog-see/