Lifelong (Machine) Learning

Surveying the previous decade’s advancements in deep neural networks, which have given us once-unimaginable technologies like self-driving cars, it can seem like these algorithms are capable of accomplishing any task. But these networks, with their relatively slow, gradient-based backpropagation training, are not well suited to every problem. Consider, for example, robots that need to interact with objects in the real world in a human-like manner. Robots of this sort should be able to recognize new objects, from different angles and under different lighting conditions, after seeing just a few examples, much as a human can. But for a deep neural network to handle these new cases, a large volume of varied training data is often required, along with the computational resources and time needed to retrain the model.

Researchers at the Neuromorphic Computing Lab at Intel, in collaboration with partners in academia, have demonstrated a new method that may help robots learn in a more human-like way (from outward appearances, at least). They leveraged Intel’s neuromorphic research chip, called Loihi, together with a neuronal state machine to create an algorithm that can continually learn to recognize new objects from a limited set of examples, without forgetting any previously learned objects, while remaining open to new information in the future.

Network architecture (📷: E. Hajizada et al.)

Neuromorphic chips like Loihi contain artificial silicon neurons that aim to mimic the structures of biological nervous systems. This architectural paradigm can give these chips orders-of-magnitude better performance and energy efficiency than traditional computing platforms, which were not designed with machine learning in mind.
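To give a flavor of how such artificial neurons differ from the units in a conventional deep network, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python. This is a textbook toy, not Intel's Loihi implementation; the parameter values and the constant input drive are invented for demonstration.

```python
def simulate_lif(input_current, tau=10.0, threshold=1.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Returns the time steps at which the neuron spikes.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward zero while
        # accumulating the input current at each time step.
        v += dt * (-v / tau + i_in)
        if v >= threshold:   # fire, then reset the potential
            spikes.append(t)
            v = 0.0
    return spikes

# A constant drive pushes the potential over threshold at regular
# intervals, producing a sparse train of spike events.
print(simulate_lif([0.15] * 50))   # → [10, 21, 32, 43]
```

Because the neuron only communicates when it spikes, computation and communication are event-driven and sparse, which is where the energy savings of neuromorphic hardware come from.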
A spiking neural network was designed to run on Loihi. It consists of feature extraction layers, a layer for learning instances of objects, and a neuronal state machine that supports the learning algorithm and enables continual learning.

To test how well their methods work, the team set up a simulated environment using a model of the iCub robot. With the simulated robot standing in front of a table, a total of 19 different everyday 3D objects were rendered on top of it, within the robot’s field of vision. An event-based camera simulator was then used to generate events from the robot’s point of view. Each object was viewed from 20 different angles, and for each view the robot made five slight “eye” movements to trigger the event-based camera. The data collected with this setup was then used to benchmark the model.

When evaluating sets of eight 3D objects, each seen from eight different views, the network achieved a testing accuracy of 96.55%. Compared with traditional online learning methods, the new technique was observed to be up to 300 times more energy efficient. These initial results suggest that the researchers’ work could move the field of robotics a step forward with respect to continual learning and interaction with real-world environments. Thus far, however, the work has all been done in simulation, so much remains to be done to prove that the methods will translate into success with physical robots.
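Setting the spiking implementation aside, the continual-learning behavior described above, where new objects can be added from a few examples without overwriting previously learned ones, can be sketched with a conventional prototype-based classifier. Everything below, including the class names and the synthetic feature clusters, is invented for illustration; the researchers’ actual algorithm is a spiking network running on Loihi.

```python
import numpy as np

class PrototypeLearner:
    """Stores one running-mean feature vector (prototype) per class.

    Learning a new class never modifies existing prototypes, so
    previously learned objects are not forgotten.
    """
    def __init__(self):
        self.prototypes = {}   # label -> (mean vector, example count)

    def learn(self, label, feature):
        mean, n = self.prototypes.get(label, (np.zeros_like(feature), 0))
        # Incremental mean update: only this class's prototype changes.
        self.prototypes[label] = ((mean * n + feature) / (n + 1), n + 1)

    def predict(self, feature):
        # Classify by the nearest prototype (Euclidean distance).
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l][0] - feature))

rng = np.random.default_rng(0)
learner = PrototypeLearner()
# A few noisy examples per "object"; clusters centered at 0 and 5.
for _ in range(5):
    learner.learn("mug", rng.normal(0.0, 0.1, size=8))
    learner.learn("ball", rng.normal(5.0, 0.1, size=8))
print(learner.predict(np.full(8, 4.9)))   # close to the "ball" cluster
```

The few-shot and no-forgetting properties fall out of the structure: each class keeps its own independently updated memory, rather than sharing one set of weights that gradient descent would overwrite.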