
Top 3 Emerging Technologies in Artificial Intelligence in the 2020s – Analytics Insight

Artificial Intelligence, popularly known as AI, has been the main driver of disruption in today's tech world. While applications such as machine learning, neural networks, and deep learning have already earned wide recognition through their many use cases, AI is still at a nascent stage. New developments are continually taking place in the discipline, and they may soon transform the AI industry and open up new possibilities. Some of today's AI technologies may become obsolete within the next ten years, while others may pave the way to even better versions of themselves. Let us look at some of the promising AI technologies of tomorrow.

Generative AI
Recent advances in AI have allowed many companies to develop algorithms and tools that generate artificial 3D and 2D images automatically. These algorithms form the core of generative AI, which enables machines to use inputs like text, audio files, and images to create new content. MIT Technology Review described generative AI as one of the most promising advances in AI of the past decade. It is poised to power the next generation of applications for automatic programming, content development, visual arts, and other creative, design, and engineering activities. For instance, NVIDIA has developed software that can generate new photorealistic faces starting from a few pictures of real people, and a generative AI-enabled campaign by Malaria Must Die featured David Beckham speaking in nine different languages to raise awareness for the cause.
Generative AI can also be used to provide better customer service; facilitate and speed up check-ins; enable performance monitoring, seamless connectivity, and quality control; and help find new networking opportunities. It even helps in film preservation and colorization.
In healthcare, generative AI can render prosthetic limbs, organic molecules, and other items from scratch when combined with 3D printing, CRISPR, and other technologies. It can also enable earlier identification of potential malignancy and more effective treatment plans. In the case of diabetic retinopathy, for instance, generative AI not only offers a pattern-based hypothesis but can also interpret the scan and generate content that helps inform the physician's next steps. IBM is using the technology in its antimicrobial peptide (AMP) research to find drugs for COVID-19.
Generative AI also leverages neural networks through generative adversarial networks (GANs). GANs share many of the functionalities and applications of generative AI more broadly, but they are also notorious for being misused to create deepfakes for cybercrime. GANs are used in research as well, for projecting astronomical simulations, interpreting large data sets, and much more.
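
To make the GAN idea concrete, here is a minimal, hypothetical sketch in PyTorch (none of it comes from the article): a generator learns to imitate samples drawn from a simple 2-D Gaussian, while a discriminator learns to tell real samples from generated ones, and the two are trained against each other.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake 2-D samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
# Discriminator: outputs a raw logit (real vs. fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" data: samples from a Gaussian centred at (2, 2).
    real = torch.randn(64, data_dim) + 2.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: push real samples towards label 1, fakes towards 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

After training, samples drawn from the generator cluster around (2, 2), mimicking the "real" distribution; the same adversarial recipe, scaled up, is what produces photorealistic faces and deepfakes.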

Federated Learning
According to Google's research paper titled "Communication-Efficient Learning of Deep Networks from Decentralized Data," federated learning is "a learning technique that allows users to collectively reap the benefits of shared models trained from [this] rich data, without the need to centrally store it." In simpler terms, it distributes the machine learning process to the edge.
Data is essential for training machine learning models. The conventional process involves setting up servers where models are trained on data collected in a cloud computing platform. Federated learning instead brings the machine learning model to the data source (the edge nodes) rather than bringing the data to the model. It links multiple computational devices into a decentralized system in which the individual devices that collect data help train the model. The devices collaboratively learn a shared prediction model while keeping all training data on the device itself, removing the need to move large amounts of data to a central server for training. Thus, it addresses many data privacy concerns.
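
A minimal sketch of the idea, in the spirit of the federated averaging algorithm described in that paper (the clients, data, and hyperparameters below are invented for illustration): each client trains the current global model on its own local data, and only the updated weights travel back to be averaged into the next global model.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear regression model locally with plain gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Each "client" (edge device) holds its own private data; it never leaves the device.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ np.array([3.0, -1.0]) + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(10):
    # Each client refines the current global model on its local data...
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # ...and only the updated weights are sent back to be averaged centrally.
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", global_w)  # converges towards [3, -1]
```

The raw data never leaves the clients; only model parameters are exchanged, which is what makes the approach attractive for phones and hospitals alike.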
Federated learning is used to improve Siri's voice recognition capabilities. Google initially employed federated learning to improve word recommendations in Gboard, its Android keyboard, without uploading the user's text data to the cloud. According to Google's blog, "when Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard's query suggestion model."
Medical organizations are generally unable to share data due to privacy restrictions. Federated learning can help address this concern through decentralization, removing the need to pool data in a single location and instead training the model over multiple iterations at different sites.
Intel recently teamed up with the University of Pennsylvania Medical School to deploy federated learning across 29 international healthcare and research institutions to identify brain tumors. The team published its findings on federated learning and its applications in healthcare in Nature and presented them at the Supercomputing 2020 event. According to the published paper, the federated approach achieved 99% of the accuracy of a traditionally trained model in identifying brain tumors.
Intel announced that this breakthrough could help in earlier detection and better outcomes for the more than 80,000 people diagnosed with a brain tumor each year.

Neural Network Compression
AI has made rapid progress in analyzing big data by leveraging deep neural networks (DNNs). However, a key disadvantage of any neural network is that it is computationally and memory intensive, which makes it difficult to deploy on embedded systems with limited hardware resources. Further, as DNNs grow larger to carry out more complex computation, their storage needs also rise. To address these issues, researchers have come up with a family of AI techniques known as neural network compression.
Generally, a neural network contains far more weights, represented at higher precision, than are required for the specific task it is trained to perform. If we wish to bring real-time intelligence to edge applications, neural network models must be smaller. To compress models, researchers rely on the following methods: parameter pruning and sharing, quantization, low-rank factorization, transferred or compact convolutional filters, and knowledge distillation.
Pruning identifies and removes unnecessary weights, connections, or parameters, leaving the network with only the important ones. Quantization compresses the model by reducing the number of bits used to represent each connection. Low-rank factorization leverages matrix decomposition to estimate the informative parameters of a DNN. Compact convolutional filters discard unnecessary weights and keep only the parameters required to carry out the convolution, saving storage space. Knowledge distillation trains a smaller, more compact neural network to mimic a larger network's output.
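
As a toy illustration of the first two methods (a sketch with made-up numbers, not any production pipeline), the snippet below prunes a weight matrix by magnitude and then quantizes the surviving weights to 8-bit integers:

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(size=(256, 256)).astype(np.float32)

# Pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.90)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)
print(f"non-zero weights kept: {np.count_nonzero(pruned) / pruned.size:.0%}")

# Quantization: map the remaining float32 weights onto 8-bit integers.
scale = np.abs(pruned).max() / 127          # one scale factor for the whole tensor
quantized = np.round(pruned / scale).astype(np.int8)

# At inference time the int8 values are rescaled back to approximate floats.
dequantized = quantized.astype(np.float32) * scale
print("max reconstruction error:", np.abs(dequantized - pruned).max())
```

Storing a sparse int8 tensor plus one scale factor takes a fraction of the space of the dense float32 original, at the cost of the small reconstruction error printed above; real compression pipelines typically fine-tune the network afterwards to recover accuracy.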
Recently, NVIDIA developed a new type of video compression technology that replaces the traditional video codec with a neural network to drastically reduce video bandwidth. Dubbed NVIDIA Maxine, the platform uses AI to improve the quality and experience of video-conferencing applications in real time. NVIDIA claims Maxine's AI video compression can cut bandwidth to one-tenth of what the H.264 codec requires. The platform is also cloud-based, which makes it easier to deploy for everyone.


Source: https://www.analyticsinsight.net/top-3-emerging-technologies-in-artificial-intelligence-in-the-2020s/