(Image credit: Andriy Onufriyenko via Getty Images)

OpenAI truly does not want you to know what its latest AI model is “thinking.” Since the company launched its “Strawberry” AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe into how the model works.

Unlike previous OpenAI models such as GPT-4o, o1 was trained specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an “o1” model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.
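
The same opacity applies to API users: a request to an o1-series model returns only the final answer, and the hidden reasoning shows up solely as a token count in the usage metadata. The sketch below is illustrative only; it assumes the official openai Python SDK and access to the o1-preview model, and the exact usage field names may differ across SDK versions.

```python
# Minimal sketch (assumes the openai Python SDK and an o1-preview-enabled API key).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

# Only the final answer comes back; the raw chain of thought is never included.
print(response.choices[0].message.content)

# The usage block reports how many hidden "reasoning" tokens were spent.
# (Field names may vary by SDK version -- treat this as illustrative.)
details = getattr(response.usage, "completion_tokens_details", None)
if details is not None:
    print("reasoning tokens:", details.reasoning_tokens)
```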

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1’s raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.

(Image credit: Getty Images)

A supply chain failure that compromises Secure Boot protections on computing devices from across the device-making industry extends to a much larger number of models than previously known, including those used in ATMs, point-of-sale terminals, and voting machines.

The debacle was the result of non-production test platform keys used in hundreds of device models for more than a decade. These cryptographic keys form the root-of-trust anchor between the hardware device and the firmware that runs on it. The test keys—stamped with phrases such as “DO NOT TRUST” in the certificates—were never intended to be used in production systems. A who’s-who list of device makers—including Acer, Dell, Gigabyte, Intel, Supermicro, Aopen, Foremelife, Fujitsu, HP, and Lenovo—used them anyway.

Medical devices, gaming consoles, ATMs, POS terminals

Platform keys provide the root-of-trust anchor in the form of a cryptographic key embedded into the system firmware. They establish the trust between the platform hardware and the firmware that runs on it. This, in turn, provides the foundation for Secure Boot, an industry standard for cryptographically enforcing security in the pre-boot environment of a device. Built into the UEFI (Unified Extensible Firmware Interface), Secure Boot uses public-key cryptography to block the loading of any code that isn’t signed with a pre-approved digital signature.
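
As a rough illustration of that check (not actual UEFI code, which works with X.509 certificates and signed PE images), the sketch below uses Python’s cryptography package to show the underlying idea: an image is allowed to boot only if its signature verifies against the trusted platform public key.

```python
# Toy illustration of the signature check at the heart of Secure Boot:
# code runs only if its signature verifies against a trusted public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the platform key pair; on real hardware only the public half
# is embedded in firmware, and the private half stays with the signer.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
platform_public_key = private_key.public_key()

bootloader_image = b"\x7fELF...pretend this is a bootloader binary"
signature = private_key.sign(bootloader_image, padding.PKCS1v15(), hashes.SHA256())

def allowed_to_boot(image: bytes, sig: bytes) -> bool:
    """Return True only if the image is signed by the trusted platform key."""
    try:
        platform_public_key.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(allowed_to_boot(bootloader_image, signature))                 # True
print(allowed_to_boot(bootloader_image + b"tampered", signature))   # False
```

A leaked or untrustworthy platform key breaks this model entirely: anything signed with it passes the check, which is why “DO NOT TRUST” test keys shipping in production firmware matters so much.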

(Image credit: Benj Edwards / Mike Kemp via Getty Images)

On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don’t break the law.

Ellison, who briefly became the world’s second-wealthiest person last week when his net worth surpassed Jeff Bezos’, outlined a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams.

“Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on,” Ellison said, describing what he sees as the benefits of automated AI oversight and automated alerts when crime takes place. “We’re going to have supervision,” he continued. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report the problem and report it to the appropriate person.”

(Image credit: Getty Images)

Researchers still don’t know the cause of a recently discovered malware infection affecting almost 1.3 million streaming devices running an open source version of Android in almost 200 countries.

Security firm Doctor Web reported Thursday that malware named Android.Vo1d has backdoored the Android-based boxes by putting malicious components in their system storage area, where they can be updated with additional malware at any time by command-and-control servers. Google representatives said the infected devices are running operating systems based on the Android Open Source Project, a version overseen by Google but distinct from Android TV, a proprietary version restricted to licensed device makers.
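
For readers who want to triage an Android-based box of their own, the rough sketch below (Python driving adb) checks two things this style of persistence depends on: a /system partition that has been remounted read-write, and unfamiliar executables sitting in system binary directories. The directories and the heuristic are illustrative assumptions, not Doctor Web’s published indicators of compromise.

```python
# Rough triage sketch for an Android-based streamer reachable over adb.
# Flags a writable /system partition and lists system executables for review.
import subprocess

def adb(*args: str) -> str:
    """Run an `adb shell` command and return its output."""
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=False).stdout

# 1. /system should normally be mounted read-only (ro).
for line in adb("mount").splitlines():
    if " /system " in line and "rw" in line:
        print("WARNING: /system is mounted read-write:", line.strip())

# 2. List executables in system binary directories for manual review.
for directory in ("/system/bin", "/system/xbin"):
    print(f"--- {directory} ---")
    print(adb("ls", "-l", directory))
```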

Dozens of variants

Although Doctor Web has a thorough understanding of Vo1d and the exceptional reach it has achieved, company researchers say they have yet to determine the attack vector that has led to the infections.

The Google Gemini logo. (Image credit: Google)

On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The move brings conversational AI capabilities to a wider audience, allowing users to interact with Gemini through voice commands on their Android devices. That’s notable because Advanced Voice Mode, the similar ChatGPT feature from competitor OpenAI, has not yet fully shipped.

Google first unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it’s accessible to anyone using the Gemini app or its overlay on Android.

Gemini Live enables users to ask questions aloud and even interrupt the AI’s responses mid-sentence. Users can choose from several voice options for Gemini’s responses, adding a level of customization to the interaction.

Soon you’ll be able to stream games and video for free on United flights. (Image credit: United)

United Airlines announced this morning that it is giving its in-flight Internet access an upgrade. It has signed a deal with Starlink to deliver SpaceX’s satellite-based service to all its aircraft, a process that will start in 2025. And the good news for passengers is that the in-flight Wi-Fi will be free of charge.

The flying experience as it relates to consumer technology has come a very long way in the two-and-a-bit decades that Ars has been publishing. At the turn of the century, even having a power socket in your seat was a long shot. Laptop batteries didn’t last that long, either—usually less than the runtime of whatever DVD I hoped to distract myself with, if memory serves.

Bring a spare battery and that might double, but it helped to have a book or magazine to read.

(Image credit: Vlatko Gasparic via Getty Images)

OpenAI finally unveiled its rumored “Strawberry” AI language model on Thursday, claiming significant improvements in what it calls “reasoning” and problem-solving capabilities over previous large language models (LLMs). Formally named “OpenAI o1,” the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and “scientific reasoning.” However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized how long the model takes to respond, a delay caused by the multi-step processing that occurs behind the scenes before it answers a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, “There’s a lot of o1 hype on my feed, so I’m worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it’ll only get better. (I’m personally psyched about the model’s potential & trajectory!) what o1 isn’t (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today’s launch—but we’re working to get there!”

Hard drives, unfortunately, tend to die not with a spectacular and sparkly bang, but with a head-is-stuck whimper. (Image credit: Getty Images)

One of the things enterprise storage and destruction company Iron Mountain does is handle the archiving of the media industry’s vaults. What it has been seeing lately should be a wake-up call: roughly one-fifth of the 1990s-era hard disk drives it has been sent are entirely unreadable.

Music industry publication Mix spoke with the people in charge of backing up the entertainment industry. The resulting tale is part explainer on why music is now so complicated to archive, part warning about everyone’s data stored on spinning disks.

“In our line of work, if we discover an inherent problem with a format, it makes sense to let everybody know,” Robert Koszela, global director for studio growth and strategic initiatives at Iron Mountain, told Mix. “It may sound like a sales pitch, but it’s not; it’s a call for action.”

An AI-generated image featuring my late father’s handwriting. (Image credit: Benj Edwards / Flux)

Growing up, if I wanted to experiment with something technical, my dad made it happen. We shared dozens of tech adventures together, but those adventures were cut short when he died of cancer in 2013. Thanks to a new AI image generator, it turns out that my dad and I still have one more adventure to go.

Recently, an anonymous AI hobbyist discovered that an image synthesis model called Flux can reproduce someone’s handwriting very accurately if specially trained to do so. I decided to experiment with the technique using written journals my dad left behind. The results astounded me and raised deep questions about ethics, the authenticity of media artifacts, and the personal meaning behind handwriting itself.
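
For the curious, the inference side of the technique looks roughly like the sketch below, which assumes the Hugging Face diffusers library and a handwriting LoRA that has already been trained elsewhere. The LoRA file name and the “myhandwriting” trigger word are hypothetical placeholders, not the exact setup described here.

```python
# Minimal inference sketch with Hugging Face diffusers, assuming a handwriting
# LoRA for FLUX.1 has already been trained. File name and trigger word are
# hypothetical placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the fine-tuned handwriting weights on top of the base model.
pipe.load_lora_weights("./handwriting_lora.safetensors")

image = pipe(
    prompt="a handwritten note that says 'see you soon', myhandwriting style",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("handwritten_note.png")
```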

Beyond that, I’m also happy that I get to see my dad’s handwriting again. Captured by a neural network, part of him will live on in a dynamic way that was impossible a decade ago. It’s been a while since he died, and I am no longer grieving. From my perspective, this is a celebration of something great about my dad—reviving the distinct way he wrote and what that conveys about who he was.

(Image credit: Getty Images)

Microsoft has updated a key cryptographic library with two new encryption algorithms designed to withstand attacks from quantum computers.

The updates were made last week to SymCrypt, a core code library for handling cryptographic functions in Windows and Linux. The library, started in 2006, provides operations and algorithms developers can use to safely implement secure encryption, decryption, signing, verification, hashing, and key exchange in the apps they create. The library supports federal certification requirements for cryptographic modules used in some governmental environments.

Massive overhaul underway

Despite the name, SymCrypt supports both symmetric and asymmetric algorithms. It’s the main cryptographic library Microsoft uses in products and services including Azure, Microsoft 365, all supported versions of Windows, Azure Stack HCI, and Azure Linux. The library provides cryptographic security used in email security, cloud storage, web browsing, remote access, and device management. Microsoft documented the update in a post on Monday.
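
SymCrypt itself is a C library consumed internally by those Microsoft products, so as a stand-in, the sketch below shows what a post-quantum key-encapsulation (KEM) round trip looks like using the open source liboqs Python bindings. This is not SymCrypt’s API, and the algorithm identifier may differ across liboqs versions (“ML-KEM-768” in newer releases, “Kyber768” in older ones).

```python
# Not SymCrypt: a stand-in sketch of a post-quantum KEM round trip using the
# open source liboqs Python bindings (pip install liboqs-python).
import oqs

ALG = "ML-KEM-768"  # identifier may vary by liboqs version

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    # Receiver publishes a public key; the private key stays inside `receiver`.
    public_key = receiver.generate_keypair()

    # Sender encapsulates: produces a ciphertext plus its copy of the secret.
    ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same shared secret.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

    assert shared_secret_sender == shared_secret_receiver
```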
