At I/O, Google Talks Up ‘Responsible AI’. What’s That All About? – CNET

We know great power requires great responsibility, and that’s especially true in AI. The chatbots and other generative AI tools that have proliferated over the last year and a half can engage you in human-sounding dialogues, write plausible emails and essays, whip up audio that sounds just like real-world politicians and create imaginary photos and videos that are closer and closer to the real thing.

Zooey Liao/CNET

I mean, what’s not to worry, right?

Actually, worrying about AI is a really big deal, whether it’s the potential for misuse by humans or rogue acts by AI itself.

Which is why when a company like Google hosts a splashy event for software developers, it talks about the notion of responsible AI. That came through clearly Tuesday during the two-hour Google I/O keynote presentation, which was heavy on the company’s latest AI developments, especially as they relate to its Gemini chatbot.

While advancements like long context windows, multimodality and personalized agents could help us save time and work more efficiently, they also present opportunities for, say, scam artists to scam… and worse.

To guard against those sorts of bad outcomes, AI makers need to stay vigilant. In the keynote, Google outlined its approach to responsible AI, which includes a combination of automated and human resources. 

“We’re doing a lot of research in this area, including the potential for harm and misuse,” James Manyika, senior vice president of research, technology and society at Google, said during the keynote.

Google’s not alone in talking up the need for AI principles to help balance innovation with safety. ChatGPT maker OpenAI, in announcing its GPT-4o model on Monday, referenced its own guidelines. In its blog post, it noted that “GPT-4o has safety built-in by design” including new systems “to provide guardrails on voice outputs.”

Do a quick, well, Google search and you’ll find that seemingly every company has pages dedicated to responsible or ethical AI. For instance: Microsoft, Meta, Adobe and Anthropic, along with OpenAI and Google itself.

It’s a challenge that will only get more difficult as AI yields increasingly realistic images, videos and audio.

Here’s a look at some of what Google is doing.

AI-assisted red teaming

In addition to standard red teaming, in which ethical hackers emulate the tactics of malicious attackers against a company's systems to identify weaknesses, Google is developing what it calls AI-assisted red teaming.

With this tactic, Google trains AI agents to compete with each other and thereby expand the scope of traditional red-teaming capabilities.

“We’re developing AI models with these capabilities to help address adversarial prompting, and limit problematic outputs,” Manyika said.

Google has also recruited two groups of safety experts from a range of disciplines to provide feedback on its models.

“Both groups help us identify emerging risks from cybersecurity threats to potentially dangerous capabilities in areas like chem bio,” Manyika said.

OpenAI also taps into red teaming and automated and human evaluations in the model training process to help identify risks and build guardrails.

SynthID

To prevent its models, including the Imagen 3 image generator and the new Veo video generator, from being misused to spread misinformation, Google is expanding its SynthID tool, which adds watermarks to AI-generated images and audio, to cover text and video as well.

Google will open-source SynthID text watermarking “in the coming months.”

Last week, TikTok announced that it would start watermarking AI-generated content.

Societal benefits

Google’s responsible AI efforts also focus on benefiting society, such as by helping scientists treat diseases, predicting floods and helping organizations like the United Nations track progress on the world’s 17 Sustainable Development Goals.

In his presentation, Manyika focused on how generative AI can improve education, such as acting as tutors for students or assistants for teachers.

This includes a Gem, a custom version of Gemini akin to ChatGPT’s custom GPTs, called Learning Coach, which provides study guidance as well as practice and memory techniques, along with a family of Gemini models focused on learning called LearnLM. They will be accessible via Google products like Search, Android, Gemini and YouTube.

These Gems will be available in Gemini “in the coming months,” he said.

Editor’s note: CNET is using an AI engine to help create a handful of stories. Reviews of AI products like this, just like CNET’s other hands-on reviews, are written by our human team of in-house experts. For more, see CNET’s AI policy and how we test AI.

This post was originally published on CNET.
