The Smell of Healthier Times: Google's New AI Model Helps in Prevention of Deadly Diseases


Marketing is one of the biggest users of artificial intelligence (AI) today, and the lion's share of the credit goes to e-commerce, where AI-enabled personalized marketing has proven extremely effective at driving growth. As AI advances, it may soon be possible for consumers on the web to buy products simply by snapping a photo of them; a few companies are already experimenting with the idea. This dynamic, futuristic trajectory of AI makes you wonder: what else can it do? Well, Google, the search engine giant, seems to have an interesting answer.

Google's New Artificial Intelligence Model Could Predict Odor Like Humans
Google has developed an artificial intelligence model that can predict odor the way humans do. The model takes the structure of a molecule as input and predicts how it smells, and it can even characterize molecules it has never encountered before. Mapping molecular structure to odor is challenging because smells are sensed when molecules bind to sensory receptors in the nose, and the human nose hosts over 300 such receptors, making it extremely difficult to draw conclusions about an odor with certainty.

Experiments have previously explored the interplay between molecules and their corresponding smells. In 2019, a graph neural network (GNN) was trained on molecule-smell pairs to map molecules to descriptive labels like floral, beefy, or minty.

The model developed by Google researchers has successfully generated a Principal Odor Map (POM) with the characteristics of a sensory map. Because this map of odor is closely connected to perception and biology across the animal kingdom, it opens the door to advances in medical science. As we know, insects such as mosquitoes are drawn to humans by smell, and researchers are using the POM to model animal olfaction and hunt for better repellents against the deadly diseases transmitted by mosquitoes and ticks. Less expensive, longer-lasting, and safer repellents could reduce the worldwide incidence of diseases like malaria, potentially saving countless lives.
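
To make the idea of an odor map concrete, here is a minimal illustrative sketch in Python. The molecules, coordinates, and labels below are invented toy values rather than outputs of Google's model; the only point is that proximity in the map stands in for perceptual similarity.

    # Illustrative sketch only: a Principal Odor Map can be thought of as an
    # embedding space in which molecules that smell alike sit close together.
    # These coordinates are invented toy values, not the real POM.
    import math

    ODOR_MAP = {
        "linalool": (0.90, 0.10, 0.05),  # smells floral
        "geraniol": (0.85, 0.15, 0.10),  # smells floral/rosy
        "carvone":  (0.10, 0.90, 0.05),  # smells minty
    }

    def odor_distance(a, b):
        """Smaller distance in the map = predicted to smell more similar."""
        return math.dist(ODOR_MAP[a], ODOR_MAP[b])

    # The two floral molecules land closer to each other than to the minty one.
    print(odor_distance("linalool", "geraniol") < odor_distance("linalool", "carvone"))  # True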

The next challenge for researchers is new smells and the molecules that generate them; the map relates smells back to their origins in evolution and nature. Scientists believe this method can yield fresh solutions to obstacles in food and fragrance formulation, environmental quality monitoring, and the detection of human and animal diseases.

Google's Largest AI Language Model Interprets Human Commands
Google introduced its largest AI language model, PaLM, in April this year, and robots can now understand and interpret human commands through it. Machines typically respond best to very specific demands; open-ended requests can throw them off and lead to results the user didn't have in mind. As a result, people learn to interact with robots in a rigid way, such as asking questions in a particular manner to get the desired response.


Google's latest system, dubbed PaLM-SayCan, promises to be smarter. The physical robot, built by Everyday Robots, a startup spun out of Google X, has cameras for eyes in its head and an arm with a pincer tucked behind its long, straight body, which sits on top of a set of wheels.

Suppose we ask the robot something like, "I just worked out. Can you get me a healthy snack?" Will that nudge it into fetching an apple? PaLM-SayCan is an interpretable and general approach to leveraging knowledge from language models, one that enables a robot to follow high-level textual instructions to perform physically grounded tasks. PaLM was trained on data scraped from the internet, but instead of spewing open-ended text responses, the system has been adapted to generate a list of instructions for the robot to follow.

If we ask, "I spilled my Coke on the table, how would you throw it away and bring me something to help clean?", PaLM understands the question and generates a list of steps the robot can follow to complete the task, such as going over to pick up the can, throwing it into a bin, and fetching a sponge. Large language models (LLMs) like PaLM, however, don't understand the meaning of anything they say, so the researchers built separate models that use reinforcement learning to ground abstract language in visual representations and actions. That way, the robot learns to associate the word Coke with the image of a fizzy drink can.

PaLM-SayCan also learns so-called affordance functions, which rank how likely a specific action is to succeed given the objects in the robot's environment; for example, the robot is more likely to manage picking up a sponge than a vacuum cleaner. SayCan extracts and leverages the knowledge within LLMs for physically grounded tasks: the LLM ("Say") provides task-grounding to determine useful actions for a high-level goal, and the learned affordance functions ("Can") provide world-grounding to determine which of those actions can actually be executed. The researchers used reinforcement learning (RL) to learn language-conditioned value functions that capture these affordances of what is possible in the world.
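
As a rough illustration of how "Say" and "Can" combine, here is a small self-contained Python sketch. The skill list, the scoring functions, and every number in it are invented toy stand-ins, not PaLM outputs or Google's learned value functions; the sketch only shows the decision rule of multiplying an LLM usefulness score by a feasibility score and executing the winner.

    # Toy sketch of the SayCan decision loop: at each step, every candidate
    # skill gets an LLM "usefulness" score ("Say") times a learned
    # "feasibility" score ("Can"), and the highest product is executed.
    SKILLS = ["pick up the coke can", "throw away the coke can",
              "pick up the sponge", "bring the sponge to the user", "done"]

    def llm_usefulness(skill, steps_so_far):
        # Toy stand-in for PaLM's "Say" score: prefer a sensible cleanup order.
        next_step = SKILLS[min(len(steps_so_far), len(SKILLS) - 1)]
        return 0.9 if skill == next_step else 0.1

    def affordance(skill, holding):
        # Toy stand-in for the RL-learned "Can" score: the robot can only
        # throw away or hand over an object it is currently holding.
        needs = {"throw away the coke can": "coke can",
                 "bring the sponge to the user": "sponge"}
        if skill in needs and holding != needs[skill]:
            return 0.05
        return 0.9

    def saycan_plan(max_steps=10):
        plan, holding = [], None
        for _ in range(max_steps):
            # Score every skill as usefulness ("Say") times feasibility ("Can").
            best = max(SKILLS,
                       key=lambda s: llm_usefulness(s, plan) * affordance(s, holding))
            if best == "done":
                break
            plan.append(best)
            # Track a one-variable toy world state: what the robot is holding.
            if best.startswith("pick up"):
                holding = "coke can" if "coke" in best else "sponge"
            elif best == "throw away the coke can":
                holding = None
        return plan

    print(saycan_plan())
    # -> ['pick up the coke can', 'throw away the coke can',
    #     'pick up the sponge', 'bring the sponge to the user']

Note how the toy affordance score vetoes throwing the can away before the robot is holding it; that veto is the world-grounding role the learned value functions play in the real system.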

To keep the robot from veering off task, it is trained to select actions from a fixed set of 101 instructions only. The team of researchers from Google trained the robot to operate in a kitchen, where PaLM-SayCan can fetch snacks and drinks and perform simple cleaning tasks. Experiments show that LLMs are a first step toward getting robots to perform more complex tasks safely from abstract instructions, and trials on a number of real-world robotic tasks demonstrate the ability to plan and complete long-horizon, abstract, natural-language instructions at a high success rate. According to the researchers, PaLM-SayCan's interpretability allows for safe real-world user interaction with robots.


