Artificial Intelligence: MIT-IBM lab develops tool to distinguish minute sounds using visual cues! Details – The Financial Express

  • Lauren
  • July 1, 2020

The earliest work in this was the PixelPlayer. (Image: MIT-IBM Lab researchers)

Artificial Intelligence for music lovers: the MIT-IBM Watson AI Lab has developed an AI that can identify music by studying musicians' bodies. In a statement, the Massachusetts Institute of Technology (MIT) said that researchers at the MIT-IBM Watson AI Lab have leveraged artificial intelligence to develop a tool that uses the virtual eyes and ears of a computer to differentiate even sounds that are similar for the most part. As per the statement, the tool improves on earlier models by matching the tempo of each individual part with the musicians' movements using their skeletal keypoints. This would allow listeners to isolate every single instrument from the others.

The statement added that the tool could potentially be used in a wide spectrum of applications, such as sound mixing, where it could turn up the volume of a particular instrument during recording, or video conferencing, where it could reduce the confusion that leads people to speak over each other.

The statement quoted lead author Chuang Gan of IBM as saying that body keypoints provide important information about human structure, and that this is what the team used to improve the AI tool's ability to listen and distinguish between sounds.

The statement adds that in research like this, authors use synchronised audio and video tracks to recreate the way humans learn. It added that an artificial intelligence system that learns through multiple sense modalities might be able to learn quicker, with less data and without the need for humans to label every real-world representation.
Study co-senior author and MIT Professor Antonio Torralba was quoted as saying that humans learn from all of their senses, and that multi-sensory processing is a precursor to embodied intelligence and to AI systems that can perform complicated tasks.

MIT AI system for sound distinction: How it works

The tool developed by the MIT-IBM lab improves on earlier work that "harnessed motion cues in sequences of images", the statement said. The earliest work in this line was PixelPlayer, which let users click on a particular instrument in a video of a concert and increase or decrease its volume. An updated version of the tool let users distinguish between two violins in a duet by matching each musician's movements with the tempo of their part.

The latest technology now adds keypoints data to this. Keypoints data is favoured by sports analysts for tracking athletes' performance. Applied to music, it lets the tool extract finer-grained motion data, so that nearly identical sounds can also be distinguished.

The study has highlighted the value of visual cues in giving computers a better ear, and the importance of training them to use sound cues for sharper eyes. The statement added that while this study used visual cues to distinguish sounds, previous studies have used sounds to distinguish between animals or objects that look similar.
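To make the keypoints idea concrete, here is a deliberately simplified toy sketch of the underlying intuition: per-frame motion energy is computed from each musician's skeletal keypoints, and the mixture's spectrogram is softly divided between musicians in proportion to who is moving more at each moment. This is only an illustration of the concept, not the lab's actual system, which uses learned audio-visual neural networks rather than raw motion energy; the function names `keypoint_motion_energy` and `motion_weighted_masks` are hypothetical.

```python
import numpy as np

def keypoint_motion_energy(keypoints):
    """Per-frame motion energy from body keypoints.

    keypoints: array of shape (frames, joints, 2), holding (x, y)
    coordinates of each skeletal joint in each video frame.
    Returns a vector of shape (frames,): total frame-to-frame
    displacement of all joints (0 for the first frame).
    """
    diffs = np.diff(keypoints, axis=0)                   # (frames-1, joints, 2)
    energy = np.linalg.norm(diffs, axis=2).sum(axis=1)   # (frames-1,)
    return np.concatenate([[0.0], energy])               # pad first frame

def motion_weighted_masks(mixture_spec, motions):
    """Split a mixture spectrogram between players by relative motion.

    mixture_spec: (freq_bins, frames) magnitude spectrogram of the mix.
    motions: list of per-frame motion-energy vectors, one per player.
    Returns one masked spectrogram per player; at each frame the masks
    are the players' shares of total motion, so they sum to ~1.
    """
    m = np.stack(motions)                     # (players, frames)
    weights = m / (m.sum(axis=0) + 1e-8)      # soft per-frame assignment
    return [mixture_spec * w for w in weights]

# Toy duet: player 2 moves three times as much as player 1 in frame 0.
motion_a = np.array([1.0, 1.0, 1.0])
motion_b = np.array([3.0, 1.0, 1.0])
mix = np.ones((4, 3))                         # dummy 4-bin, 3-frame spectrogram
spec_a, spec_b = motion_weighted_masks(mix, [motion_a, motion_b])
```

In the toy example, frame 0 of the mixture is split 25%/75% between the two players, mirroring their shares of the total motion; the real model instead learns which motion patterns correspond to which sound components.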