Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.
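For readers unfamiliar with the bug class: a heap-based buffer overflow occurs when attacker-controlled input is copied into a fixed-size heap allocation without a length check, so excess bytes spill past the end of the chunk and corrupt adjacent data. The C sketch below illustrates only that general pattern; the function name and buffer size are invented for the example, and Fortinet's affected code has not been published.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: the general shape of a heap-based buffer overflow,
 * not Fortinet's code. Input longer than the 64-byte allocation spills
 * past the end of the heap chunk, corrupting adjacent objects or
 * allocator metadata, which attackers can often escalate to remote
 * code execution. */
void handle_request(const char *attacker_data) {
    char *buf = malloc(64);        /* fixed-size heap allocation */
    if (buf == NULL)
        return;
    strcpy(buf, attacker_data);    /* BUG: no bounds check */
    /* ... process buf ... */
    free(buf);
}

/* The corresponding fix: bound the copy to the allocation size. */
void handle_request_fixed(const char *attacker_data) {
    char *buf = malloc(64);
    if (buf == NULL)
        return;
    snprintf(buf, 64, "%s", attacker_data);  /* truncates safely */
    /* ... process buf ... */
    free(buf);
}

int main(void) {
    handle_request_fixed("example input");
    return 0;
}
```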

Enter CoatHanger

Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor, tracked as CoatHanger, on FortiGate appliances inside the Dutch Ministry of Defence. Once installed, the never-before-seen malware, designed specifically for the underlying FortiOS operating system, could persist on devices even through reboots and firmware updates. CoatHanger could also evade traditional detection measures, the officials warned. The damage from the breach was limited, however, because the infections were contained inside a network segment reserved for non-classified uses.

[Image caption: He isn’t using an iPhone, but some people talk to Siri like this.]

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

As many as 165 customers of cloud storage provider Snowflake have been compromised by a group that obtained login credentials through information-stealing malware, researchers said Monday.

On Friday, LendingTree subsidiary QuoteWizard confirmed it was among the customers Snowflake had notified that they were affected by the incident. LendingTree spokesperson Megan Greuling said the company is in the process of determining whether data stored on Snowflake has been stolen.

“That investigation is ongoing,” she wrote in an email. “As of this time, it does not appear that consumer financial account information was impacted, nor information of the parent entity, Lending Tree.”

On Monday, Apple debuted “Apple Intelligence,” a new suite of AI-powered features for iOS 18, iPadOS 18, and macOS Sequoia that includes creating email summaries, generating images and emoji, and allowing Siri to take actions on your behalf. The features run through a combination of on-device and cloud processing, with what Apple says is a focus on privacy. Apple Intelligence will be widely available later this year, the company says, with a beta for developers arriving this summer.

The announcements came during a livestreamed WWDC keynote and a simultaneous event attended by press on Apple’s campus in Cupertino, California. In his introduction, Apple CEO Tim Cook said the company has been using machine learning for years, but that large language models (LLMs) present new opportunities to elevate the capabilities of Apple products. He emphasized the need for both personalization and privacy in Apple’s approach.

At last year’s WWDC, Apple avoided the term “AI” entirely, preferring phrases like “machine learning” as a way of sidestepping buzzy hype while still integrating AI applications into its apps in useful ways. This year, Apple found a new way to largely avoid the abbreviation by coining “Apple Intelligence,” a catchall brand for a broad group of machine learning, LLM, and image generation technologies. By our count, “AI” appeared only once in the keynote: near the end of the presentation, Apple executive Craig Federighi said, “It’s AI for the rest of us.”

A critical vulnerability in the PHP programming language can be trivially exploited to execute malicious code on Windows devices, security researchers warned as they urged those affected to take action before the weekend starts.

Within 24 hours of the vulnerability and accompanying patch being published, researchers from the nonprofit security organization Shadowserver reported Internet scans designed to identify servers that are susceptible to attacks. That, combined with (1) the ease of exploitation, (2) the availability of proof-of-concept attack code, (3) the severity of remote code execution on vulnerable machines, and (4) the widely used XAMPP platform being vulnerable by default, has prompted security practitioners to urge admins to check whether their PHP servers are affected before starting the weekend.

When “Best Fit” isn’t

“A nasty bug with a very simple exploit—perfect for a Friday afternoon,” researchers with security firm WatchTowr wrote.
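The “Best Fit” in the heading refers to Windows’ best-fit character conversion: when the operating system converts Unicode text to a legacy code page, characters with no exact equivalent are silently replaced with visually similar ASCII characters. A filter that rejects a dangerous character before that conversion can therefore be bypassed by a different character that turns into the dangerous one after it. Below is a minimal C sketch of how to observe the behavior with the Win32 WideCharToMultiByte API; the choice of code page 932 and the soft hyphen is an illustrative assumption drawn from public write-ups, not a detail given in this article.

```c
#include <windows.h>
#include <stdio.h>

/* Sketch: observe Windows "Best-Fit" conversion directly. On code pages
 * with no exact mapping for U+00AD (soft hyphen) -- Japanese code page
 * 932 is used here as an example -- the character is best-fit converted
 * to a plain ASCII hyphen (0x2D), so a check that rejects '-' before
 * the conversion happens can be sidestepped. */
int main(void) {
    const wchar_t wide[] = L"\u00AD";   /* soft hyphen */
    char narrow[8] = {0};

    WideCharToMultiByte(932, 0, wide, -1,
                        narrow, sizeof(narrow), NULL, NULL);
    printf("with best fit:    0x%02X\n", (unsigned char)narrow[0]);

    /* WC_NO_BEST_FIT_CHARS disables the lookalike substitution; the
     * unmappable character falls back to the default character instead. */
    WideCharToMultiByte(932, WC_NO_BEST_FIT_CHARS, wide, -1,
                        narrow, sizeof(narrow), NULL, NULL);
    printf("without best fit: 0x%02X\n", (unsigned char)narrow[0]);
    return 0;
}
```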

After acquiring VMware, Broadcom swiftly enacted widespread changes that drew strong public backlash. A new survey of 300 director-level IT workers at North American companies that are VMware customers provides insight into how customers have reacted to Broadcom’s overhaul.

The survey, released Thursday, doesn’t capture feedback from every VMware customer, but it’s the first time we’ve seen responses from IT decision-makers at companies paying for VMware products. It echoes concerns expressed since the announcement of some of Broadcom’s more controversial changes to VMware, such as the end of perpetual licenses and rising costs.

CloudBolt Software commissioned Wakefield Research, a market research agency, to run the study from May 9 through May 23. The “CloudBolt Industry Insights Reality Report: VMware Acquisition Aftermath” includes responses from workers at 150 companies with fewer than 1,000 workers and 150 companies with more than 1,000 workers. Survey respondents were invited via email and took the survey online, with the report authors writing that results are subject to sampling variation of ±5.7 percentage points at a 95 percent confidence level.
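That ±5.7-point figure is consistent with the standard worst-case margin of error for a simple random sample of 300, sketched below under the usual assumptions (p = 0.5, 95 percent confidence, so z = 1.96):

\[
\text{MoE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{300}} \approx 0.0566 \approx \pm 5.7\ \text{percentage points}
\]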

The FBI is urging victims of one of the most prolific ransomware groups to come forward after agents recovered thousands of decryption keys that may allow the recovery of data that has remained inaccessible for months or years.

The revelation, made Wednesday by a top FBI official, comes three months after an international roster of law enforcement agencies seized servers and other infrastructure used by LockBit, a ransomware syndicate that authorities say has extorted more than $1 billion from 7,000 victims around the world. Authorities said at the time that they took control of 1,000 decryption keys, 4,000 accounts, and 34 servers and froze 200 cryptocurrency accounts associated with the operation.

In a speech Wednesday at a cybersecurity conference in Boston, FBI Cyber Assistant Director Bryan Vorndran said agents have also recovered an asset that will be of intense interest to thousands of LockBit victims: the decryption keys that could allow them to unlock data that has been held for ransom by LockBit associates.

On Thursday, DuckDuckGo unveiled a new “AI Chat” service that lets users converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT’s while attempting to preserve privacy and anonymity. Although the models involved can readily output inaccurate information, the site allows users to try different LLMs without installing anything or signing up for an account.

DuckDuckGo’s AI Chat currently features access to OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open source models, Meta’s Llama 3 and Mistral’s Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using “!ai” or “!chat” shortcuts in the search field. AI Chat can also be disabled in the site’s settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

[Image caption: A visual from the fake documentary “Olympics Has Fallen,” produced by Russia-affiliated influence actor Storm-1679. Credit: Microsoft]

Last year, a feature-length documentary purportedly produced by Netflix began circulating on Telegram. Titled “Olympics Has Fallen” and narrated by a voice strikingly similar to that of actor Tom Cruise, it sharply criticized the leadership of the International Olympic Committee (IOC). The slickly produced film, which claimed five-star reviews from the New York Times, Washington Post, and BBC, was quickly amplified on social media. Among those seemingly endorsing the documentary were celebrities on the platform Cameo.

A recently published report by Microsoft (PDF) said that the film was not a genuine documentary, that it had received no such reviews, and that the narrator’s voice was an AI-produced deepfake of Cruise. It also said the endorsements on Cameo were faked. The Microsoft Threat Intelligence report went on to say that the fraudulent documentary and endorsements were just one of many elaborate hoaxes created by agents of the Russian government in a year-long influence operation intended to discredit the IOC and deter participation in, and attendance at, the Paris Olympics starting next month.

The report describes other examples of the Kremlin’s ongoing influence operation as well.

On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from “further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

They also assert that AI companies possess substantial non-public information about their systems’ capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.
