Unravelling the Breakthrough in Energy-Efficient Artificial Intelligence

Spiking neural networks communicate less frequently and require fewer calculations to perform a task.
Neural networks are the brain of artificial intelligence. Just as neurons drive every process in the human body, neural networks underpin every process in AI. Modern neural networks perform their tasks well but lack energy efficiency, which is why tasks such as speech recognition, ECG analysis, and gesture recognition consume extensive amounts of energy. Moreover, given how much energy these networks consume, running AI applications on mobile phones, chips, or smartwatches without cloud support becomes a challenging task. Despite their effectiveness, this energy hunger limits where they can be used.
To overcome this challenge, researchers at Centrum Wiskunde & Informatica (CWI), the Dutch national research centre for mathematics and computer science, collaborated with the IMEC/Holst Research Centre in Eindhoven, Netherlands. Together they produced a mathematical breakthrough that enables energy-efficient neural networks: a learning algorithm for spiking neural networks, which communicate less frequently and need fewer calculations to perform a task.
Sander Bohté, the project's principal investigator, said: "The combination of these two breakthroughs makes AI algorithms a thousand times more energy efficient in comparison with standard neural networks, and a factor of a hundred more energy efficient than current state-of-the-art neural networks."
The outcome of this breakthrough is published in a paper titled 'Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks'. In this article, we look at the key points of that paper.

Spiking Neural Networks
Unlike artificial neural networks (ANNs), spiking neural networks (SNNs) are still at an early stage of development. SNNs are derived from models that capture the behaviour of biological neurons. Training SNNs with traditional backpropagation has been challenging for researchers because the spiking mechanism is discontinuous, so its gradients are undefined at the firing threshold. To combat this problem, the researchers developed compact spiking recurrent neural networks (SRNNs) trained with a surrogate gradient: a smooth stand-in for the spike's derivative that lets backpropagation with auto-differentiation run directly in a well-developed modern deep learning framework, while keeping energy consumption low.
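To make this concrete, here is a minimal sketch of the surrogate-gradient trick in PyTorch. This is our illustration, not code from the paper: the forward pass keeps the hard spike threshold, while the backward pass substitutes a smooth, box-shaped derivative (one of several common surrogate choices) so that gradients can flow through the discontinuity.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, threshold):
        # Forward: emit a spike (1.0) wherever the membrane
        # potential crosses the threshold -- a hard step function.
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: pretend the step has a box-shaped derivative
        # near the threshold, so gradients are nonzero close to firing.
        (v,) = ctx.saved_tensors
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None

spike_fn = SurrogateSpike.apply
```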
The researchers' main focus was training one or more recurrent layers of spiking neurons. Their spiking recurrent neural networks use two types of spiking neuron, Leaky-Integrate-and-Fire (LIF) neurons and adaptive spiking neurons, which offer different trade-offs between biological realism, interpretability, and computational cost. A LIF spiking neuron integrates input current in a leaky fashion and fires an action potential when its membrane potential crosses a fixed threshold; adaptive spiking neurons additionally vary their firing threshold over time. The output of the SNN is read from the behaviour of the spiking neurons, both their membrane potential traces and their spike histories.
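The leaky integration and threshold firing described above can be sketched in a few lines, again as an illustration under our own naming rather than the paper's implementation. It reuses `spike_fn` from the previous snippet; making the per-neuron `decay` a trainable parameter mirrors the paper's idea of training individual time constants.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    def __init__(self, n_in, n_out, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        # Per-neuron leak factor; wrapping it in nn.Parameter makes the
        # effective time constant trainable, as the paper reports doing.
        self.decay = nn.Parameter(torch.full((n_out,), decay))
        self.threshold = threshold

    def forward(self, inputs):  # inputs: (time, batch, n_in)
        v = torch.zeros(inputs.shape[1], self.fc.out_features,
                        device=inputs.device)
        out = []
        for x in inputs:  # step through time
            v = self.decay * v + self.fc(x)   # leaky integration of input
            s = spike_fn(v, self.threshold)   # fire where v crosses threshold
            v = v - s * self.threshold        # soft reset after a spike
            out.append(s)
        return torch.stack(out)               # (time, batch, n_out)

layer = LIFLayer(n_in=40, n_out=128)
spikes = layer(torch.randn(100, 8, 40))  # 100 timesteps, batch of 8
```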

Experiments
The researchers evaluated the method on several tasks: encoding and decoding, ECG signal classification, S-MNIST and PS-MNIST (sequential and permuted-sequential versions of the seminal MNIST computer vision classification benchmark), and the Spoken Heidelberg Digits (SHD), a dataset developed specifically for benchmarking spiking neural networks.

Conclusions
The researchers found that training the SRNN with surrogate-gradient backpropagation used 243 times less energy than the traditional method. Compared with classical RNNs, the SRNN's energy consumption was reduced by a factor of 1,900. The researchers credit the more complex adaptive spiking neurons as key to achieving these results, in particular training the individual time constants of these neurons, also via backpropagation-through-time (BPTT).
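Energy ratios like these are typically estimated by counting operations: a standard RNN performs a multiply-accumulate for every weight at every timestep, whereas an SNN only triggers synaptic operations downstream of actual spikes. The back-of-the-envelope sketch below illustrates that accounting with made-up placeholder numbers; it does not reproduce the paper's measurements.

```python
def rnn_macs(n_in, n_hidden, timesteps):
    # A vanilla RNN multiplies every input and recurrent weight
    # at every single timestep.
    return timesteps * (n_in * n_hidden + n_hidden * n_hidden)

def snn_synops(spike_count, fan_out):
    # An SNN only pays for the connections downstream of a spike
    # that actually occurred.
    return spike_count * fan_out

dense = rnn_macs(n_in=128, n_hidden=256, timesteps=250)
sparse = snn_synops(spike_count=2_000, fan_out=256)  # sparse firing
print(f"estimated operation ratio: {dense / sparse:.0f}x")
```

With sparser firing (fewer spikes per task), the ratio grows accordingly, which is why training neurons to spike rarely pays off so directly in energy.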


Source: https://www.analyticsinsight.net/unravelling-breakthrough-energy-efficient-artificial-intelligence/