
Joaquin Phoenix talking with AI in Her (2013). (credit: Warner Bros.)

In 2013, Spike Jonze’s Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT’s recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.

In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix’s character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016. In reality, ChatGPT isn’t as situationally aware as Samantha was in the film, and OpenAI has done enough conditioning on ChatGPT to keep conversations from getting too intimate or personal. But that hasn’t stopped people from having long talks with the AI assistant to pass the time.

Last week, we related a story in which AI researcher Simon Willison spent hours talking to ChatGPT. “I had an hourlong conversation while walking my dog the other day,” he told Ars for that report. “At one point, I thought I’d turned it off, and I saw a pelican, and I said to my dog, ‘Oh, wow, a pelican!’ And my AirPod went, ‘A pelican, huh? That’s so exciting for you! What’s it doing?’ I’ve never felt so deeply like I’m living out the first ten minutes of some dystopian sci-fi movie.”

Read 11 remaining paragraphs | Comments


Private Wi-Fi address setting on an iPhone. (credit: Apple)

Three years ago, Apple introduced a privacy-enhancing feature that hid the Wi-Fi address of iPhones and iPads when they joined a network. On Wednesday, the world learned that the feature has never worked as advertised. Despite promises that the device’s real, never-changing address would be hidden and replaced with a private one unique to each SSID, Apple devices have continued to send the real one, which in turn got broadcast to every other connected device on the network.

The problem is that a Wi-Fi media access control address—typically called simply a MAC—can be used to track individuals from network to network, in much the way a license plate number can be used to track a vehicle as it moves around a city. Case in point: In 2013, a researcher unveiled a proof-of-concept device that logged the MAC of every device it came into contact with. The idea was to distribute many of them throughout a neighborhood or city and build a profile of iPhone users, including the social media sites they visited and the many locations they visited each day.
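For a sense of how little hardware or code such tracking requires, below is a minimal sketch of a passive MAC logger in the spirit of that 2013 proof of concept; it is not the researcher’s actual code. It assumes a Linux machine with scapy installed, root privileges, and a Wi-Fi adapter already in monitor mode (the interface name "mon0" is a placeholder).

```python
# Hypothetical passive MAC logger: records the source address of every
# Wi-Fi probe request overheard on a monitor-mode interface.
from datetime import datetime

from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

seen = {}

def log_probe(packet):
    # Probe requests are sent by devices searching for known networks;
    # addr2 carries the transmitting device's MAC address.
    if packet.haslayer(Dot11ProbeReq) and packet.addr2 not in seen:
        seen[packet.addr2] = datetime.now()
        print(f"{seen[packet.addr2].isoformat()}  new device: {packet.addr2}")

# Listening only; nothing is transmitted. "mon0" is a placeholder interface name.
sniff(iface="mon0", prn=log_probe, store=False)
```

Modern iPhones already randomize the MAC they use in probe requests like these, which is exactly why the per-network private address was pitched as closing the remaining gap: the address a device uses once it actually joins a network.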

As I wrote at the time:

Read 8 remaining paragraphs | Comments



A relentless team of pro-Russia hackers has been exploiting a zero-day vulnerability in widely used webmail software in attacks targeting governmental entities and a think tank, all in Europe, researchers from security firm ESET said on Wednesday.

The previously unknown vulnerability resulted from a critical cross-site scripting error in Roundcube, a server application used by more than 1,000 webmail services and millions of their end users. Members of a pro-Russia and pro-Belarus hacking group tracked as Winter Vivern used the XSS bug to inject JavaScript into the Roundcube server application. The injection was triggered simply by viewing a malicious email, which caused the server to send emails from selected targets to a server controlled by the threat actor.

No manual interaction required

“In summary, by sending a specially crafted email message, attackers are able to load arbitrary JavaScript code in the context of the Roundcube user’s browser window,” ESET researcher Matthieu Faou wrote. “No manual interaction other than viewing the message in a web browser is required.”
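ESET describes the flaw only at a high level, but the standard defense against this class of bug is well established: webmail software must treat incoming HTML as hostile and sanitize it against an allowlist before the reader’s browser ever renders it. The sketch below is not Roundcube’s code (Roundcube is written in PHP and ships its own sanitizer); it is a generic Python illustration of the pattern using the bleach library, with hypothetical tag and attribute allowlists.

```python
# Generic illustration of allowlist-based sanitization of untrusted email HTML.
# Not Roundcube's implementation; tag/attribute lists here are examples only.
import bleach

ALLOWED_TAGS = ["a", "b", "i", "em", "strong", "p", "br", "ul", "ol", "li"]
ALLOWED_ATTRS = {"a": ["href", "title"]}

def render_email_body(untrusted_html: str) -> str:
    # Anything outside the allowlist, including <script> tags and event-handler
    # attributes such as onerror/onmouseover, is dropped before rendering.
    return bleach.clean(
        untrusted_html,
        tags=ALLOWED_TAGS,
        attributes=ALLOWED_ATTRS,
        strip=True,
    )

malicious = '<p onmouseover="exfiltrate()">Invoice attached</p><img src="x" onerror="exfiltrate()">'
print(render_email_body(malicious))  # the handlers and the <img> are stripped
```

The point of the Winter Vivern exploit is that attacker-controlled markup slipped past exactly this kind of filtering, so script ran with the session and privileges of whoever opened the message.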

Read 7 remaining paragraphs | Comments



On Friday, a team of researchers at the University of Chicago released a research paper outlining “Nightshade,” a data poisoning technique aimed at disrupting the training process for AI models, report MIT Technology Review and VentureBeat. The goal is to help visual artists and publishers protect their work from being used to train generative AI image synthesis models, such as Midjourney, DALL-E 3, and Stable Diffusion.

The open source “poison pill” tool (as the University of Chicago’s press department calls it) alters images in ways invisible to the human eye that can corrupt an AI model’s training process. Many image synthesis models, with the notable exceptions of those from Adobe and Getty Images, rely largely on data sets of images scraped from the web without artist permission, which include copyrighted material. (OpenAI licenses some of its DALL-E training images from Shutterstock.)
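Nightshade’s real perturbations are optimized against specific models’ internal representations, so the following is only a toy illustration of the more basic point that pixel-level changes can be kept small enough to be effectively invisible; the epsilon bound, random noise, and file names are assumptions for demonstration, not the tool’s actual method.

```python
# Toy example: add a small, bounded, visually negligible perturbation to an image.
# Nightshade optimizes its perturbations against a target model; this does not.
import numpy as np
from PIL import Image

EPSILON = 4  # maximum change per 8-bit color channel

def perturb(path_in: str, path_out: str, seed: int = 0) -> None:
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out, format="PNG")

# perturb("artwork.png", "artwork_shaded.png")  # placeholder file names
```

The hard part, and the research contribution, is crafting perturbations that actually corrupt what a model learns during training rather than washing out as ordinary noise.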

AI researchers’ reliance on commandeered data scraped from the web, which many see as ethically fraught, has also been key to the recent explosion in generative AI capability. It took an entire Internet of images with annotations (through captions, alt text, and metadata) created by millions of people to build a data set with enough variety to train Stable Diffusion, for example. Hiring people to annotate hundreds of millions of images would be impractical in terms of both cost and time. Those with access to existing large image databases (such as Getty and Shutterstock) are at an advantage when using licensed training data.

Read 10 remaining paragraphs | Comments


A section of Apple’s repair manual for the M2 MacBook Air from 2022. Apple already offers customers some repair manuals and parts through its Self-Service Repair program. (credit: Apple)

Right-to-repair advocates have long argued that passing repair laws state by state was worth the uphill battle: once enough states demanded that manufacturers make parts, repair guides, and diagnostic tools available, few companies would bother maintaining different offerings and policies in each state and would instead pivot to national availability.

On Tuesday, Apple did exactly that. Following the passage of California’s repair bill, which Apple supported and which requires seven years of parts, specialty tools, and repair manual availability, the company announced that it would back a similar bill at the federal level. It would also make its parts, tools, and repair documentation available to both non-affiliated repair shops and individual customers, “at fair and reasonable prices.”

“We intend to honor California’s new repair provisions across the United States,” said Brian Naumann, Apple’s vice president for service and operation management, at a White House event Tuesday.

Read 9 remaining paragraphs | Comments



Researchers have devised an attack that forces Apple’s Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices.

iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers—primarily Intel and, to a lesser extent, AMD—scrambling to devise mitigations.

Exploiting WebKit on Apple silicon

The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker’s choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox—when a target is logged in—and a password as it’s being autofilled by a credential manager. Once visited, the iLeakage site requires about five minutes to profile the target machine and, on average, roughly another 30 seconds to extract a 512-bit secret, such as a 64-character string.
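A quick back-of-envelope check of the throughput those figures imply, leaving out the one-time five-minute profiling step:

```python
# Implied extraction rate: a 512-bit (64-character) secret in roughly 30 seconds.
secret_bits, secret_chars, seconds = 512, 64, 30

print(f"{secret_bits / seconds:.1f} bits per second")         # ~17.1
print(f"{secret_chars / seconds:.1f} characters per second")  # ~2.1
```

That is slow compared with any legitimate data channel, but more than fast enough to lift a password or a short message during an ordinary browsing session.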

Read 17 remaining paragraphs | Comments


A 2020 file photo of a Starship Technologies food delivery robot. Food is stored inside the robot’s housing during transportation and opened upon delivery. (credit: Leon Neal/Getty Images)

On Tuesday, officials at Oregon State University issued a warning on social media about a bomb threat concerning Starship Technologies food delivery robots, autonomous wheeled drones that deliver food orders stored within a built-in container. By 7 pm local time, a suspect had been arrested in connection with the prank, and officials declared that no bombs had been hidden within the robots.

“Bomb Threat in Starship food delivery robots,” reads the 12:20 pm initial X post from OSU. “Do not open robots. Avoid all robots until further notice.” In follow-up posts, OSU officials said they were “remotely isolating robots in a safe location” for investigation by a technician. By 3:54 pm local time, experts had cleared the robots and promised they would be “back in service” by 4 pm.

In response, Starship Technologies provided this statement to the press: “A student at Oregon State University sent a bomb threat, via social media, that involved Starship’s robots on the campus. While the student has subsequently stated this is a joke and a prank, Starship suspended the service. Safety is of the utmost importance to Starship and we are cooperating with law enforcement and the university during this investigation.”

Read 2 remaining paragraphs | Comments


A press photo of the Nvidia H100 Tensor Core GPU. (credit: Nvidia)

On Tuesday, chip designer Nvidia announced in an SEC filing that new US export restrictions on sales of its high-end AI GPUs to China have taken effect sooner than expected, according to a report from Reuters. The curbs were initially scheduled to take effect 30 days after their announcement on October 17 and are designed to prevent China, Iran, and Russia from acquiring advanced AI chips.

The banned chips are advanced graphics processing units (GPUs) commonly used for training and running deep learning applications similar to ChatGPT and AI image generators, among other uses. GPUs are well suited for neural networks because their massively parallel architecture can perform the required matrix multiplications faster than conventional processors.
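As a rough illustration of that point, a single fully connected neural-network layer boils down to one large matrix multiplication. The sketch below runs it with numpy on a CPU and uses arbitrary example dimensions; it only shows the operation a GPU parallelizes across thousands of cores, not GPU execution itself.

```python
# One dense layer = activations (batch x features_in) times weights
# (features_in x features_out), plus a bias and a nonlinearity. Training and
# inference are dominated by many of these multiply-accumulate-heavy steps.
import numpy as np

rng = np.random.default_rng(0)
batch, features_in, features_out = 32, 4096, 4096  # arbitrary example sizes

x = rng.standard_normal((batch, features_in), dtype=np.float32)         # inputs
w = rng.standard_normal((features_in, features_out), dtype=np.float32)  # weights
b = np.zeros(features_out, dtype=np.float32)                            # bias

y = np.maximum(x @ w + b, 0.0)  # matrix multiply, bias, ReLU
print(y.shape)                  # (32, 4096)
```

Each output element is an independent sum of products, which is why a chip with thousands of simple arithmetic units finishes the job far sooner than a CPU with a few dozen general-purpose cores.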

The Biden administration initially announced an advanced AI chip export ban in September 2022, and in reaction, Nvidia designed and released new chips, the A800 and H800, to comply with those export rules for the Chinese market. In November 2022, Nvidia told The Verge that the A800 “meets the US Government’s clear test for reduced export control and cannot be programmed to exceed it.” However, the new curbs enacted Monday specifically halt the exports of these modified Nvidia AI chips. The Nvidia A100, H100, and L40S chips are also included in the export restrictions.

Read 3 remaining paragraphs | Comments



1Password, a password manager used by millions of people and more than 100,000 businesses, said it detected suspicious activity on a company account provided by Okta, the identity and authentication service that disclosed a breach on Friday.

“On September 29, we detected suspicious activity on our Okta instance that we use to manage our employee-facing apps,” 1Password CTO Pedro Canahuati wrote in an email. “We immediately terminated the activity, investigated, and found no compromise of user data or other sensitive systems, either employee-facing or user-facing.”

Since then, Canahuati said, his company had been working with Okta to determine how the unknown attacker gained access to the account. On Friday, investigators confirmed that it resulted from the breach that Okta reported had hit its customer support management system.

Read 9 remaining paragraphs | Comments



On Wednesday, Stanford University researchers issued a report on major AI models and found them greatly lacking in transparency, reports Reuters. The report, called “The Foundation Model Transparency Index,” examined models (such as GPT-4) created by OpenAI, Google, Meta, Anthropic, and others. It aims to shed light on the data and human labor used in training the models, calling for increased disclosure from companies.

Foundation models are AI systems trained on large datasets and capable of performing a wide range of tasks, from writing text to generating images. They’ve become key to the rise of generative AI technology, particularly since the launch of OpenAI’s ChatGPT in November 2022. As businesses and organizations increasingly incorporate these models into their operations, fine-tuning them for their own needs, the researchers argue that understanding their limitations and biases has become essential.

“Less transparency makes it harder for other businesses to know if they can safely build applications that rely on commercial foundation models; for academics to rely on commercial foundation models for research; for policymakers to design meaningful policies to rein in this powerful technology; and for consumers to understand model limitations or seek redress for harms caused,” writes Stanford in a news release.

Read 7 remaining paragraphs | Comments
