Google I/O 2022: Google’s artificial intelligence and machine learning breakthroughs, explained

Google, which held its I/O 2022 developer conference late Wednesday, has doubled down on artificial intelligence (AI) and machine learning (ML) development, focusing not only on research but also on product development.

One of Google’s focus areas is making its products, especially those involving communication, more “nuanced and natural”. This includes development and deployment of new language processing models.

Take a look at what the company has announced:

AI Test Kitchen

After launching LaMDA (Language Model for Dialogue Applications) last year, a model that allowed Google Assistant to hold more natural conversations, Google has announced LaMDA 2 along with the AI Test Kitchen, an app that will give users access to the new model.

The AI Test Kitchen will let users explore these AI features and give them a sense of what LaMDA 2 is capable of.

Google has launched the AI Test Kitchen with three demos. The first, called ‘Imagine It’, lets users suggest a conversation idea, to which Google’s language model responds with “imaginative and relevant descriptions”. The second, called ‘Talk About It’, tests whether the model can stay on topic, which can be a challenge for language models. The third, called ‘List It Out’, suggests a potential list of to-dos, things to keep in mind or pro tips for a given task.

Pathways Language Model (PaLM)

PaLM is a new model for natural language processing and AI. According to Google, it is the company’s largest model to date, with 540 billion parameters.

For now, the model can solve math word problems or explain a joke, thanks to what Google describes as chain-of-thought prompting, which lets the model break a multi-step problem down into a series of intermediate reasoning steps.
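The idea behind chain-of-thought prompting is to show the model worked examples whose answers spell out the intermediate reasoning, so that it produces similar step-by-step reasoning for a new question. Here is a minimal sketch in Python; the prompt format below is illustrative only, not the exact format Google used with PaLM:

```python
# A minimal sketch of chain-of-thought prompting. The prompt format is
# illustrative, not the exact format used with PaLM.

# A standard few-shot example maps a question directly to its answer.
standard_example = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: The answer is 11.\n"
)

# A chain-of-thought example spells out the intermediate steps instead,
# which nudges the model to reason the same way on new problems.
cot_example = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

new_question = (
    "Q: A cafe had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples does it have now?\n"
    "A:"
)

# The full prompt sent to the model: a worked example plus the new question.
prompt = cot_example + "\n" + new_question
print(prompt)
```

With the chain-of-thought example in the prompt, a large model tends to emit its own intermediate steps (23 - 20 = 3, then 3 + 6 = 9) before the final answer, which is where the gains on multi-step problems come from.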

In one example shown with PaLM, the model answered questions in both Bangla and English. For instance, Google and Alphabet CEO Sundar Pichai asked the model about popular pizza toppings in New York City, and the answer appeared in Bangla despite PaLM never having been trained on parallel Bangla-English sentences.

Google’s hope is to extend these capabilities and techniques to more languages and other complex tasks.

Multisearch on Lens

Google also announced new enhancements to its Lens Multisearch tool, which will allow users to conduct a search with just an image and some words.

“In the Google app, you can search with images and text at the same time – similar to how you might point at something and ask a friend about it,” the company said.

Users will also be able to use a picture or a screenshot, add “near me”, and see options for local restaurants or retailers that carry the apparel, home goods or food in question, among other things.

With an advancement called “scene exploration”, users will be able to use Multisearch to pan their camera and instantly glean insights about multiple objects in a wider scene.
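Google has not published the internals of Multisearch, but a common way to combine an image and a text refinement into one query is to score candidates against both embeddings jointly. The sketch below illustrates that general idea with random placeholder vectors; the encoders, the 50-50 weighting and the names here are all assumptions, not Google’s pipeline:

```python
import numpy as np

# A toy sketch of multimodal search: score each candidate against an
# image embedding and a text embedding at the same time. The vectors
# below are random placeholders; a real system would produce them with
# trained image and text encoders mapped into a shared space.

rng = np.random.default_rng(0)
dim = 8

query_image = rng.normal(size=dim)  # e.g. a photo of a dress
query_text = rng.normal(size=dim)   # e.g. the refinement word "green"

# Candidate catalogue items, each with a precomputed embedding.
candidates = {f"item_{i}": rng.normal(size=dim) for i in range(5)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Blend the two signals; the equal weighting is arbitrary.
scores = {
    name: 0.5 * cosine(vec, query_image) + 0.5 * cosine(vec, query_text)
    for name, vec in candidates.items()
}

# Rank candidates by the combined image-plus-text score.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

In a real system the image and text encoders would be trained so that their embeddings live in one shared space, letting a single similarity score capture both signals at once.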

Immersive Google Maps

Google announced a more immersive way to use its Maps app. Using computer vision and AI, the company has fused together billions of Street View and aerial images to create a rich digital model of the world. With the new immersive view, users can experience what a neighbourhood, landmark, restaurant or popular venue is like.

Support for new languages in Google Translate

Google has also added 24 new languages to Translate, including Assamese, Bhojpuri, Konkani, Sanskrit and Mizo. These languages were added using ‘Zero-Shot Machine Translation’, in which a machine learning model sees only monolingual text, meaning it learns to translate into a new language without ever seeing a direct translation example.
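For context, an earlier form of zero-shot translation at Google (Johnson et al., 2017, multilingual neural machine translation) steered one shared model with a target-language token prepended to the input, letting it translate between language pairs it was never trained on directly. The sketch below shows that input convention; whether Translate’s new monolingual-only models use this exact format is an assumption:

```python
# A minimal sketch of the target-language-token convention from Google's
# multilingual NMT work (Johnson et al., 2017). Whether the new
# monolingual-only Translate models use this exact format is an assumption.

def make_model_input(text: str, target_lang: str) -> str:
    """Prepend a target-language token so one shared model can be steered
    toward any of the languages it knows."""
    return f"<2{target_lang}> {text}"

# The same English sentence, steered toward two of the newly added
# languages. One model handles every direction; there is no separate
# model per language pair.
print(make_model_input("How are you?", "as"))   # Assamese
print(make_model_input("How are you?", "bho"))  # Bhojpuri
```

Because all languages share one model and one token scheme, what the model learns about producing a language from its monolingual text can transfer to input languages it was never explicitly paired with during training.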

However, the company noted that the technique is not perfect and said it would keep improving these models.