
Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again, this time at a photo of him with E. Jean Carroll, the writer who successfully sued him for sexual abuse, a photo that contradicts his claim of never having met her.


Under C2PA, this stock image would be labeled as a real photograph if the camera used to take it, and the toolchain for retouching it, supported the C2PA. But even as a real photo, does it actually represent reality, and is there a technological solution to that problem? (credit: Smile via Getty Images)

On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it’s an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.

A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they’ve been modified.
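The core idea of a provenance trail like C2PA's can be sketched in a few lines: bind a hash of the content to its edit history, then sign the bundle so that any change to either the pixels or the metadata is detectable. The sketch below is a toy illustration only; it uses an HMAC with a shared demo key as a stand-in, whereas real C2PA manifests are signed with X.509 certificates issued through a trust chain, and the field names here are hypothetical.

```python
import hashlib
import hmac
import json

# Toy stand-in for a signing authority's key; real C2PA manifests are
# signed with public-key certificates, not a shared secret like this.
SIGNING_KEY = b"demo-signing-authority-key"

def make_manifest(image_bytes: bytes, history: list) -> dict:
    """Bind a content hash and edit history together, then sign the bundle."""
    claim = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edit_history": history,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the signature and content hash; any edit breaks a check."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata was tampered with
    return claim["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()

photo = b"\x89PNG...raw image bytes..."
manifest = make_manifest(photo, ["captured: camera", "edited: crop"])
print(verify_manifest(photo, manifest))         # True for untouched content
print(verify_manifest(photo + b"x", manifest))  # False once the pixels change
```

The same two-part check, a signature over the metadata plus a hash of the content, is what lets a viewer like Google's "About this image" feature report both where an image came from and whether it has been altered since.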

Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant’s “About this image” feature in Google Search, Lens, and Circle to Search will display this information when available.



OpenAI truly does not want you to know what its latest AI model is “thinking.” Since the company launched its “Strawberry” AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe into how the model works.

Unlike previous OpenAI models such as GPT-4o, o1 was specifically trained to work through a step-by-step problem-solving process before generating an answer. When users ask an o1 model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1’s raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.



A supply chain failure that compromises Secure Boot protections on computing devices from across the device-making industry extends to a much larger number of models than previously known, including those used in ATMs, point-of-sale terminals, and voting machines.

The debacle was the result of non-production test platform keys used in hundreds of device models for more than a decade. These cryptographic keys form the root-of-trust anchor between the hardware device and the firmware that runs on it. The test keys—stamped with phrases such as “DO NOT TRUST” in the certificates—were never intended to be used in production systems. A who’s-who list of device makers—including Acer, Dell, Gigabyte, Intel, Supermicro, Aopen, Foremelife, Fujitsu, HP, and Lenovo—used them anyway.

Medical devices, gaming consoles, ATMs, POS terminals

Platform keys provide the root-of-trust anchor in the form of a cryptographic key embedded into the system firmware. They establish the trust between the platform hardware and the firmware that runs on it. This, in turn, provides the foundation for Secure Boot, an industry standard for cryptographically enforcing security in the pre-boot environment of a device. Built into the UEFI (Unified Extensible Firmware Interface), Secure Boot uses public-key cryptography to block the loading of any code that isn’t signed with a pre-approved digital signature.
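The pre-boot decision Secure Boot makes can be reduced to one question: does this code carry a signature that verifies against a key the firmware trusts? The sketch below is a deliberately simplified, hypothetical illustration: real Secure Boot verifies RSA signatures chained from the platform key through KEK and db entries, while an HMAC stands in here so the example stays self-contained. It also shows why a leaked test key is fatal: anyone who knows the key can produce signatures the firmware accepts.

```python
import hashlib
import hmac

# Hypothetical keys for illustration. In real UEFI Secure Boot, the platform
# key (PK) is a public key embedded in firmware, and signatures are made with
# the corresponding private key held by the OEM.
PLATFORM_KEY = b"OEM-production-platform-key"   # what should ship in firmware
TEST_KEY = b"DO NOT TRUST - test platform key"  # the kind of key at issue

def sign_image(image: bytes, key: bytes) -> bytes:
    """Produce a signature over a firmware/bootloader image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def secure_boot_load(image: bytes, signature: bytes, trusted_key: bytes) -> bool:
    """Refuse to hand control to any image whose signature doesn't verify
    against the key the firmware was provisioned to trust."""
    expected = sign_image(image, trusted_key)
    return hmac.compare_digest(expected, signature)

bootloader = b"\x7fELF...bootloader code..."
good_sig = sign_image(bootloader, PLATFORM_KEY)
print(secure_boot_load(bootloader, good_sig, PLATFORM_KEY))  # True: loads

# If a device shipped with the leaked test key as its root of trust, an
# attacker who knows that key can sign malware that passes the same check.
evil_sig = sign_image(b"malicious code", TEST_KEY)
print(secure_boot_load(b"malicious code", evil_sig, TEST_KEY))  # True: loads anyway
```

The verification logic itself is sound in both cases; the failure described above is entirely about provisioning devices with a key whose private half was never secret.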



On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don’t break the law.

Ellison, whose net worth briefly surpassed Jeff Bezos’ last week to make him the world’s second-wealthiest person, outlined a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams.

“Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on,” Ellison said, describing what he sees as the benefits of automated AI oversight, with automated alerts when crime takes place. “We’re going to have supervision,” he continued. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report the problem and report it to the appropriate person.”



Researchers still don’t know the cause of a recently discovered malware infection affecting almost 1.3 million streaming devices running an open source version of Android in almost 200 countries.

Security firm Doctor Web reported Thursday that malware named Android.Vo1d has backdoored the Android-based boxes by putting malicious components in their system storage area, where they can be updated with additional malware at any time by command-and-control servers. Google representatives said the infected devices are running operating systems based on the Android Open Source Project, a version overseen by Google but distinct from Android TV, a proprietary version restricted to licensed device makers.

Dozens of variants

Although Doctor Web has a thorough understanding of Vo1d and the exceptional reach it has achieved, company researchers say they have yet to determine the attack vector that has led to the infections.



On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The move brings conversational AI capabilities to a wider audience, allowing users to interact with Gemini through voice commands on their Android devices. That’s notable because competitor OpenAI’s Advanced Voice Mode feature of ChatGPT, which is similar to Gemini Live, has not yet fully shipped.

Google first unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it’s accessible to anyone using the Gemini app or its overlay on Android.

Gemini Live enables users to ask questions aloud and even interrupt the AI’s responses mid-sentence. Users can choose from several voice options for Gemini’s responses, adding a level of customization to the interaction.


Soon you’ll be able to stream games and video for free on United flights. (credit: United)

United Airlines announced this morning that it is giving its in-flight Internet access an upgrade. It has signed a deal with Starlink to deliver SpaceX’s satellite-based service to all its aircraft, a process that will start in 2025. And the good news for passengers is that the in-flight Wi-Fi will be free of charge.

The flying experience as it relates to consumer technology has come a very long way in the two-and-a-bit decades that Ars has been publishing. At the turn of the century, even having a power socket in your seat was a long shot. Laptop batteries didn’t last that long, either—usually less than the runtime of whatever DVD I hoped to distract myself with, if memory serves.

Bring a spare battery and that might double, but it helped to have a book or magazine to read.



OpenAI finally unveiled its rumored “Strawberry” AI language model on Thursday, claiming significant improvements in what it calls “reasoning” and problem-solving capabilities over previous large language models (LLMs). Formally named “OpenAI o1,” the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and “scientific reasoning.” However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized how long the model takes to respond, owing to the multi-step processing that occurs behind the scenes before it answers a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, “There’s a lot of o1 hype on my feed, so I’m worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it’ll only get better. (I’m personally psyched about the model’s potential & trajectory!) what o1 isn’t (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today’s launch—but we’re working to get there!”


Hard drives, unfortunately, tend to die not with a spectacular and sparkly bang, but with a head-is-stuck whimper. (credit: Getty Images)

One of the things enterprise storage and destruction company Iron Mountain does is handle the archiving of the media industry’s vaults. What it has been seeing lately should be a wake-up call: roughly one-fifth of the hard disk drives from the 1990s that it was sent are entirely unreadable.

Music industry publication Mix spoke with the people in charge of backing up the entertainment industry. The resulting tale is part explainer on how music is so complicated to archive now, part warning about everyone’s data stored on spinning disks.

“In our line of work, if we discover an inherent problem with a format, it makes sense to let everybody know,” Robert Koszela, global director for studio growth and strategic initiatives at Iron Mountain, told Mix. “It may sound like a sales pitch, but it’s not; it’s a call for action.”
