
The FBI is urging victims of one of the most prolific ransomware groups to come forward after agents recovered thousands of decryption keys that may allow the recovery of data that has remained inaccessible for months or years.

The revelation, made Wednesday by a top FBI official, comes three months after an international roster of law enforcement agencies seized servers and other infrastructure used by LockBit, a ransomware syndicate that authorities say has extorted more than $1 billion from 7,000 victims around the world. Authorities said at the time that they took control of 1,000 decryption keys, 4,000 accounts, and 34 servers and froze 200 cryptocurrency accounts associated with the operation.

In a speech at a cybersecurity conference in Boston on Wednesday, FBI Cyber Assistant Director Bryan Vorndran said that agents have also recovered an asset of intense interest to thousands of LockBit victims: the decryption keys that could allow them to unlock data held for ransom by LockBit associates.

(credit: DuckDuckGo)

On Thursday, DuckDuckGo unveiled a new “AI Chat” service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the models involved can readily produce inaccurate information, the service lets users try different mid-range LLMs without installing anything or signing up for an account.

DuckDuckGo’s AI Chat currently features access to OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open source models, Meta’s Llama 3 and Mistral’s Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using “!ai” or “!chat” shortcuts in the search field. AI Chat can also be disabled in the site’s settings for users with accounts.
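
For readers who want to try the shortcuts, a minimal sketch of opening AI Chat through the “!chat” bang might look like the following. It assumes only DuckDuckGo’s standard “?q=” search parameter; the redirect to AI Chat is handled by DuckDuckGo itself, and the “!ai” and “!chat” shortcuts are the ones described above.

```python
# Minimal sketch: open DuckDuckGo AI Chat via a bang shortcut.
# Assumes DuckDuckGo's standard ?q= search parameter; the "!ai" and
# "!chat" shortcuts are those described in the article.
import webbrowser
from urllib.parse import quote

def open_ai_chat(bang: str = "!chat") -> None:
    """Open the default browser on a DuckDuckGo query that triggers AI Chat."""
    webbrowser.open(f"https://duckduckgo.com/?q={quote(bang)}")

if __name__ == "__main__":
    open_ai_chat()  # or open_ai_chat("!ai")
```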

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

A visual from the fake documentary “Olympics Has Fallen” produced by Russia-affiliated influence actor Storm-1679. (credit: Microsoft)

Last year, a feature-length documentary purportedly produced by Netflix began circulating on Telegram. Titled “Olympics Has Fallen” and narrated by a voice strikingly similar to that of actor Tom Cruise, it sharply criticized the leadership of the International Olympic Committee. The slickly produced film, which claimed five-star reviews from the New York Times, Washington Post, and BBC, was quickly amplified on social media. Among those seemingly endorsing the documentary were celebrities on the platform Cameo.

A recently published report by Microsoft (PDF) said the film was not a documentary, had received no such reviews, and that the narrator’s voice was an AI-produced deepfake of Cruise. It also said the endorsements on Cameo were faked. The Microsoft Threat Intelligence Report went on to say that the fraudulent documentary and endorsements were only one of many elaborate hoaxes created by agents of the Russian government in a year-long influence operation intended to discredit the International Olympic Committee (IOC) and deter participation and attendance at the Paris Olympics starting next month.

The report describes several other examples of the Kremlin’s ongoing influence operation.

(credit: Getty Images)

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from “further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

They also assert that AI companies possess substantial non-public information about their systems’ capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.

A ransomware attack that crippled a London-based medical testing and diagnostics provider has led several major hospitals in the city to declare a critical incident and cancel non-emergency surgeries and pathology appointments, it was widely reported Tuesday.

The attack, detected Monday, hit Synnovis, a supplier of blood tests, swabs, bowel tests, and other hospital services in six London boroughs. The company said the attack has “affected all Synnovis IT systems, resulting in interruptions to many of our pathology services.” It gave no estimate of when its systems would be restored and provided no details about the attack or who was behind it.

Major impact

The outage has led hospitals, including Guy’s and St Thomas’ and King’s College Hospital Trusts, to cancel operations and procedures involving blood transfusions. The cancellations include transplant surgeries, which require blood transfusions.

(credit: Getty Images)

Zoom CEO Eric Yuan has a vision for the future of work: sending your AI-powered digital twin to attend meetings on your behalf. In an interview with The Verge’s Nilay Patel published Monday, Yuan shared his plans for Zoom to become an “AI-first company,” using AI to automate tasks and reduce the need for human involvement in day-to-day work.

“Let’s say the team is waiting for the CEO to make a decision or maybe some meaningful conversation, my digital twin really can represent me and also can be part of the decision making process,” Yuan said in the interview. “We’re not there yet, but that’s a reason why there’s limitations in today’s LLMs.”

LLMs are large language models—text-predicting AI models that power AI assistants like ChatGPT and Microsoft Copilot. They can output very convincing human-like text based on probabilities, but they are far from being able to replicate human reasoning. Still, Yuan suggests that instead of relying on a generic LLM to impersonate you, in the future, people will train custom LLMs to simulate each person.
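
To make “text based on probabilities” concrete, here is a toy sketch of next-token sampling, the basic mechanism behind LLM output. The vocabulary and probability values are invented for illustration; a real model computes a distribution over tens of thousands of tokens with a neural network.

```python
# Toy illustration of probabilistic next-token prediction.
# The token probabilities below are invented for illustration only;
# a real LLM derives them from a neural network over a huge vocabulary.
import random

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one continuation token according to its probability."""
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

# Hypothetical distribution for the prompt "The meeting is"
next_token_probs = {"scheduled": 0.55, "postponed": 0.25, "optional": 0.20}
print("The meeting is", sample_next_token(next_token_probs))
```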

(credit: Getty Images)

Cloud storage provider Snowflake said that accounts belonging to multiple customers have been hacked after threat actors obtained credentials through info-stealing malware or by purchasing them on online crime forums.

Ticketmaster parent Live Nation—which disclosed Friday that hackers gained access to data it stored through an unnamed third-party provider—told TechCrunch the provider was Snowflake. The live-event ticket broker said it identified the hack on May 20, and a week later, a “criminal threat actor offered what it alleged to be Company user data for sale via the dark web.”

Ticketmaster is one of six Snowflake customers to be hit in the hacking campaign, said independent security researcher Kevin Beaumont, citing conversations with people inside the affected companies. The Australian Signals Directorate said Saturday it knew of “successful compromises of several companies utilizing Snowflake environments.” Researchers with security firm Hudson Rock said in a now-deleted post that Santander, Spain’s biggest bank, was also hacked in the campaign; the researchers cited online text conversations with the threat actor. Last month, Santander disclosed a data breach affecting customers in Chile, Spain, and Uruguay.

Nvidia CEO Jensen Huang delivers his keynote speech ahead of Computex 2024 in Taipei on June 2, 2024. (credit: SAM YEH/AFP via Getty Images)

On Sunday, Nvidia CEO Jensen Huang reached beyond Blackwell and revealed the company’s next-generation AI-accelerating GPU platform during his keynote at Computex 2024 in Taiwan. Huang also detailed plans for an annual tick-tock-style upgrade cycle of its AI acceleration platforms, mentioning an upcoming Blackwell Ultra chip slated for 2025 and a subsequent platform called “Rubin” set for 2026.

Nvidia’s data center GPUs currently power a large majority of cloud-based AI models, such as ChatGPT, in both development (training) and deployment (inference), and investors are watching the company closely, expecting it to keep that run going.

During the keynote, Huang seemed somewhat hesitant to make the Rubin announcement, perhaps wary of invoking the so-called Osborne effect, whereby a company’s premature announcement of the next iteration of a tech product eats into the current iteration’s sales. “This is the very first time that this next click has been made,” Huang said, holding up his presentation remote just before the Rubin announcement. “And I’m not sure yet whether I’m going to regret this or not.”

(credit: Getty Images)

On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications’ writers—and the unions that represent them—were surprised by the announcements and aren’t happy about it. Already, two unions have released statements expressing “alarm” and “concern.”

“The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI,” reads a statement from the Atlantic union. “And especially by management’s complete lack of transparency about what the agreement entails and how it will affect our work.”

The Vox Union—which represents The Verge, SB Nation, and Vulture, among other publications—reacted in similar fashion, writing in a statement, “Today, members of the Vox Media Union … were informed without warning that Vox Media entered into a ‘strategic content and product partnership’ with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI.”

The Google “G” logo surrounded by whimsical characters, all of which look stunned and surprised. (credit: Google)

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature with a follow-up blog post titled “AI Overviews: About last week.” In the post, attributed to Liz Reid, VP and head of Google Search, the company formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, an admission it may not realize it is making.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google’s web ranking systems. Right now, it’s an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.
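
In outline, that pipeline resembles the retrieve-then-summarize sketch below. The rank_results and summarize functions are placeholders for Google’s internal ranking systems and AI model, which are not public; this is an assumption about the shape of the pipeline, not Google’s actual code.

```python
# Hedged sketch of a retrieve-then-summarize pipeline like the one
# described above. rank_results and summarize are placeholders for
# Google's internal systems; nothing here is a real Google API.
from typing import Callable

def ai_overview(
    query: str,
    rank_results: Callable[[str], list[str]],    # returns pages, best first
    summarize: Callable[[str, list[str]], str],  # LLM condenses pages to an answer
    top_k: int = 5,
) -> str:
    """Return an AI-generated answer grounded in the top-ranked pages."""
    top_pages = rank_results(query)[:top_k]  # highly ranked web content
    return summarize(query, top_pages)       # summarized by an AI model
```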

While Google claims this approach is “highly effective” and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.
