Google seeks authenticity in the age of AI with new content labeling system

Under C2PA, this stock image would be labeled as a real photograph if the camera used to take it, and the toolchain for retouching it, supported the C2PA. But even as a real photo, does it actually represent reality, and is there a technological solution to that problem? (credit: Smile via Getty Images)

On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it remains an open question whether a technological solution can address the ancient social problem of trusting recorded media produced by strangers.

A group of tech companies began developing the C2PA system in 2019 to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that users may find it difficult to determine the authenticity of the images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata about where images originated and how they have been modified.
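To make the idea concrete, here is a minimal sketch of how a provenance trail along these lines can work in principle: metadata about an image's origin and edit history is bound to a hash of the content and signed, so tampering with either the image or its history invalidates the signature. This is not the actual C2PA format, which uses CBOR-encoded manifests embedded in files and X.509 certificate chains issued by trusted authorities; the field names and the HMAC-based signing below are illustrative stand-ins.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for illustration only; real C2PA signing
# relies on certificate chains from a trusted authority, not HMAC.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(image_bytes: bytes, actions: list[str]) -> dict:
    """Build a simplified provenance manifest: a content hash plus an
    edit history, then sign the claim. Field names are illustrative,
    not the real C2PA schema."""
    claim = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "actions": actions,  # e.g., ["captured", "cropped", "ai_edited"]
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image still matches the recorded hash and that
    the claim carries a valid signature."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest()
    )

photo = b"...raw image bytes..."
manifest = make_manifest(photo, ["captured", "retouched"])
print(verify_manifest(photo, manifest))               # True
print(verify_manifest(photo + b"tamper", manifest))   # False: hash mismatch
```

The design point the sketch illustrates is that the edit history travels with the content and is cryptographically bound to it, so a viewer can detect when either has been altered after signing.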

Google will incorporate the C2PA standard into its search results, allowing users to see whether an image was created or edited using AI tools. The company's "About this image" feature in Google Search, Lens, and Circle to Search will display this information when available.
