
Google Develops a Prototype to Detect Offensive Content

CIO Insider Team | Friday, 27 October, 2023

According to reports, Google has developed a prototype that uses recent large language models (LLMs) to help detect offensive content at scale.

LLMs are a type of artificial intelligence that can produce and understand human language.

Amanda Storey, senior director of trust and safety, says, “Using LLMs, our aim is to be able to rapidly build and train a model in a matter of days - instead of weeks or months - to find specific kinds of abuse on our products.”
Google is still testing these new technologies, but the prototypes have shown impressive results so far.

Storey says, “It shows promise for a major advance in our effort to proactively protect our users, especially from new and emerging risks.”

However, the company did not specify which of its many LLMs it uses to detect such content.

The company adds, “We are constantly developing the tools, policies and techniques we use to detect content abuse. Artificial intelligence holds great promise for increasing abuse detection across all our platforms.”


Google has announced that it is taking several steps to reduce the risk of misinformation and promote reliable information in generative AI products.

The company has also categorically told developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit the generation of restricted content like child sexual abuse material (CSAM) and content that enables deceptive behavior.

To help users find high-quality information about what they see online, Google has also rolled out the About this image fact-check tool to English-language users globally in Search.


