(credit: Getty Images / Benj Edwards)

Japanese telecommunications giant SoftBank recently announced that it has been developing “emotion-canceling” technology powered by AI that will alter the voices of angry customers to sound calmer during phone calls with customer service representatives. The project aims to reduce the psychological burden on operators suffering from harassment and has been in development for three years. SoftBank plans to launch it by March 2026, but the idea is receiving mixed reactions online.

According to a report from the Japanese news site The Asahi Shimbun, SoftBank’s project relies on an AI model to alter the tone and pitch of a customer’s voice in real-time during a phone call. SoftBank’s developers, led by employee Toshiyuki Nakatani, trained the system using a dataset of over 10,000 voice samples, which were performed by 10 Japanese actors expressing more than 100 phrases with various emotions, including yelling and accusatory tones.
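SoftBank has not published implementation details, but the core operation it describes, lowering the pitch and softening the delivery of an angry voice, can be sketched offline with common audio tools. The snippet below is only an illustration of that idea using the librosa and soundfile libraries; the file names and the two-semitone shift are arbitrary assumptions, and SoftBank’s real system would have to perform something similar on a live call stream with far lower latency.

```python
# Illustrative only: shift an angry-sounding recording down in pitch and
# reduce its level so it reads as calmer. Not SoftBank's implementation.
import librosa
import soundfile as sf

# Hypothetical input file for the sake of the example.
y, sr = librosa.load("angry_customer.wav", sr=None)

# Lower the pitch by two semitones to take the edge off a raised voice.
calmer = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2.0)

# Cut the level roughly in half (about -6 dB) so shouting arrives more quietly.
calmer = 0.5 * calmer

sf.write("calmer_customer.wav", calmer, sr)
```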

Voice cloning and synthesis technology has made massive strides in the past three years. We’ve previously covered technology from Microsoft that can clone a voice with a three-second audio sample and audio-processing technology from Adobe that cleans up audio by re-synthesizing a person’s voice, so SoftBank’s technology is well within the realm of plausibility.


(credit: Getty Images)

Hardware manufacturer Asus has released updates patching multiple critical vulnerabilities that allow hackers to remotely take control of a range of router models with no authentication or interaction required of end users.

The most critical vulnerability, tracked as CVE-2024-3080, is an authentication bypass flaw that allows remote attackers to log into a device without any credentials. The vulnerability, according to the Taiwan Computer Emergency Response Team / Coordination Center (TWCERT/CC), carries a severity rating of 9.8 out of 10. Asus said the vulnerability affects the following routers:

Model name (support site link):

XT8 and XT8_V2: https://www.asus.com/uk/supportonly/asus%20zenwifi%20ax%20(xt8)/helpdesk_bios/
RT-AX88U: https://www.asus.com/supportonly/RT-AX88U/helpdesk_bios/
RT-AX58U: https://www.asus.com/supportonly/RT-AX58U/helpdesk_bios/
RT-AX57: https://www.asus.com/networking-iot-servers/wifi-routers/asus-wifi-routers/rt-ax57/helpdesk_bios
RT-AC86U: https://www.asus.com/supportonly/RT-AC86U/helpdesk_bios/
RT-AC68U: https://www.asus.com/supportonly/RT-AC68U/helpdesk_bios/

A favorite haven for hackers

A second vulnerability tracked as CVE-2024-3079 affects the same router models. It stems from a buffer overflow flaw and allows remote hackers who have already obtained administrative access to an affected router to execute commands.


(credit: Getty Images)

Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn’t want you to think about it in the way you think about other notable privacy and web foundations.

“We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”),” Proton CEO Andy Yen wrote in a blog post announcing the transition. “Instead, Proton must have a profitable and healthy business at its core.”

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will “make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits.” Among other members of the Foundation’s board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.


(credit: Getty Images)

Ransomware criminals have quickly weaponized an easy-to-exploit vulnerability in the PHP programming language that executes malicious code on web servers, security researchers said.

As of Thursday, Internet scans performed by security firm Censys had detected 1,000 servers infected by a ransomware strain known as TellYouThePass, down from 1,800 detected on Monday. The servers, primarily located in China, no longer display their usual content; instead, many list the site’s file directory, which shows all files have been given a .locked extension, indicating they have been encrypted. An accompanying ransom note demands roughly $6,500 in exchange for the decryption key.

When opportunity knocks

The vulnerability, tracked as CVE-2024-4577 and carrying a severity rating of 9.8 out of 10, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to convert user-supplied input into characters that pass malicious commands to the main PHP application. Exploits allow attackers to bypass protections that were put in place in 2012 to patch CVE-2012-1823, a critical code execution vulnerability in PHP.
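Public analyses of CVE-2024-4577 describe the problem as a filter bypass: PHP-CGI’s post-2012 check rejects input that begins with an ASCII hyphen, but on affected Windows locales the Best Fit conversion later turns a visually similar character, the soft hyphen (U+00AD), into an ASCII hyphen, so an injected command-line option slips through anyway. The sketch below is a conceptual illustration of that pattern in Python, not PHP’s actual code or a working exploit; the filter and mapping functions are simplified stand-ins.

```python
# Conceptual sketch of the Best Fit filter bypass described for CVE-2024-4577.
# Simplified stand-ins for PHP-CGI's real checks; not an exploit.

def naive_filter(value: str) -> bool:
    """Pre-2024 style check: reject input that starts with an ASCII hyphen."""
    return not value.startswith("-")

def best_fit(value: str) -> str:
    """Approximate Best Fit mapping: a soft hyphen becomes an ASCII hyphen."""
    return value.replace("\u00ad", "-")

attacker_input = "\u00add allow_url_include=1"  # starts with a soft hyphen, not "-"

print(naive_filter(attacker_input))   # True: the check is satisfied
print(best_fit(attacker_input))       # "-d allow_url_include=1": an option reaches PHP
```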


Illustration of the Apollo lunar lander Eagle over the Moon. (credit: Getty Images)

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game’s physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come.

The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module’s descent onto the Moon’s surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel.
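Storer’s original FOCAL source isn’t reproduced here, but the loop it implements is easy to picture: every ten simulated seconds the player chooses a burn rate, and the program updates altitude, velocity, and remaining fuel against lunar gravity. The Python sketch below is a rough, illustrative approximation of that structure; all constants are invented, and it uses a crude constant-acceleration step rather than the more careful integration (and edge cases) found in the real game.

```python
# Rough, illustrative approximation of a Lunar Lander turn loop.
# Not Storer's FOCAL code; constants and physics are deliberately simplified.
GRAVITY = 1.62        # m/s^2, lunar surface gravity
DT = 10.0             # seconds per player decision, as in the original game
THRUST_PER_KG = 30.0  # m/s^2 of deceleration per kg/s burned (made-up value)

altitude = 1000.0     # meters above the surface
velocity = -50.0      # m/s; negative means descending
fuel = 120.0          # kg of fuel remaining

while altitude > 0:
    burn = float(input(f"alt={altitude:7.1f} m  vel={velocity:6.1f} m/s  "
                       f"fuel={fuel:5.1f} kg  burn rate (kg/s)? "))
    burn = max(0.0, min(burn, fuel / DT))              # can't burn fuel you don't have
    accel = burn * THRUST_PER_KG - GRAVITY             # net acceleration, up is positive
    altitude += velocity * DT + 0.5 * accel * DT ** 2  # constant-acceleration step
    velocity += accel * DT
    fuel -= burn * DT

print("Gentle touchdown!" if velocity > -5.0 else "You cratered.")
```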

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which by then was known primarily in graphical form, thanks to a 1974 graphical version and a 1979 Atari arcade title. When I discovered that Storer had created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.


(credit: Getty Images)

Last month, Wells Fargo terminated over a dozen bank employees following an investigation into claims of faking work activity on their computers, according to a Bloomberg report.

A Financial Industry Regulatory Authority (FINRA) search conducted by Ars confirmed that the fired members of the firm’s wealth and investment management division were “discharged after review of allegations involving simulation of keyboard activity creating impression of active work.”

A rise in remote work during the COVID-19 pandemic accelerated the adoption of remote-worker surveillance techniques, especially software installed on employees’ machines that tracks activity and reports back to corporate management. It’s worth noting that, according to the Bloomberg report, the FINRA filing does not specify whether the fired Wells Fargo employees were simulating activity at home or in an office.


(credit: OpenAI / Apple / Benj Edwards)

On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. The move paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices to be compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term; in fact, the partnership could cost OpenAI extra money, because OpenAI pays Microsoft to host ChatGPT’s capabilities on the Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT at a time when its own in-house LLMs are still catching up.


A photo illustration of what a shirt-button camera could look like. (credit: Aurich Lawson | Getty Images)

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student, who was caught behaving suspiciously during the exam in the southwestern province of Isparta, is reportedly being held in jail pending trial. The TYT is a nationally administered university aptitude exam that determines a person’s eligibility to attend a university in Turkey, and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a “router” (possibly a mistranslation of a cellular modem) hidden in the sole of his shoe. The system worked by scanning the exam questions with the button camera, which relayed them to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.


An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. (credit: HorneyMetalBeing)

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild, anatomically incorrect visual abominations with ease.

A thread on Reddit titled “Is this release supposed to be a joke? [SD3-2B]” details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, “Why is SD3 so bad at generating girls lying on the grass?”, shows similar issues, but for entire human bodies.
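For readers who want to reproduce the Reddit experiments, the released weights can be run locally. The sketch below assumes the Hugging Face diffusers library (version 0.29 or later) and the gated stabilityai/stable-diffusion-3-medium-diffusers repository, which requires accepting Stability’s license; the step count and guidance scale are reasonable defaults rather than anything prescribed by Stability.

```python
# Minimal sketch of generating an image with SD3 Medium via Hugging Face diffusers.
# Assumes diffusers >= 0.29, a CUDA GPU, and access to the gated weights repo.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# One of the prompts the Reddit threads complain about.
image = pipe(
    prompt="a photo of a girl lying on the grass",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]

image.save("sd3_medium_test.png")
```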

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but several more recent image-synthesis models seemed to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit, especially compared to recent Stability releases like SD XL Turbo in November.


(credit: Getty Images)

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a “Driving Behavior Data History Report” to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize the rich telematics data regularly sent from cars back to their manufacturers. But The New York Times’ Kashmir Hill reported a concrete example, in which drivers of GM vehicles found insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every “hard acceleration” or “hard braking event,” among other data points.
