OpenAI CEO Sam Altman walks on the House side of the US Capitol on January 11, 2024, in Washington, DC. (credit: Kent Nishimura/Getty Images)

On Thursday, The Wall Street Journal reported that OpenAI CEO Sam Altman is in talks with investors to raise as much as $5 trillion to $7 trillion for AI chip manufacturing, citing people familiar with the matter. The funding seeks to address the scarcity of graphics processing units (GPUs) crucial for training and running large language models like those that power ChatGPT, Microsoft Copilot, and Google Gemini.

The huge dollar figure reflects the enormous capital required to spin up new semiconductor manufacturing capacity.

To hit these ambitious targets—which are larger than the semiconductor industry’s entire current global sales of $527 billion—Altman has reportedly met with a range of potential investors worldwide, including sovereign wealth funds and government entities such as the United Arab Emirates, as well as SoftBank CEO Masayoshi Son and representatives of Taiwan Semiconductor Manufacturing Co. (TSMC).

As Apple has stepped up its promotion of the App Store as a safer and more trustworthy source of apps, the company scrambled Thursday to address a major threat to that narrative: a listing that password manager maker LastPass said was a “fraudulent app impersonating” its brand.

At the time this article on Ars went live, Apple had removed the app—titled LassPass and bearing a logo strikingly similar to the one used by LastPass—from its App Store. At the same time, Apple allowed a separate app submitted by the same developer to remain, offering no explanation for removing the one app or keeping the other.

Apple warns of “new risks” from competition

The move comes as Apple has beefed up its efforts to promote the App Store as a safer alternative to the competing sources of iOS apps recently mandated by the European Union. In an interview published this month by Fast Company, App Store head Phil Schiller said the new app stores will “bring new risks”—including pornography, hate speech, and other forms of objectionable content—that Apple has long kept at bay.

On Thursday, Google announced that its ChatGPT-like AI assistant, previously called Bard, is now named “Gemini,” reflecting the underlying AI language model Google launched in December. Google has also made its most capable model, Gemini Ultra 1.0, available for the first time as part of “Gemini Advanced,” a $20-per-month subscription tier.

Google’s naming scheme, and how to access the new model, takes some untangling. To sort out the nomenclature, think of an AI app like Google Bard as a car that can swap out different engines under the hood. It’s an AI assistant—an application of an AI model with a convenient interface—that can run on different AI “engines.”

When Bard launched in March 2023, it used a large language model called LaMDA as its engine. In May 2023, Google upgraded Bard to utilize its PaLM 2 language model. In December, Google upgraded Bard yet again to use its Gemini Pro AI model. It’s important to note that when Google first announced Gemini (the AI model), the company said it would ship in three sizes that roughly reflected its processing capability: Nano, Pro, and Ultra (with larger being “better”). Until now, Pro was the most capable version of the Gemini model publicly available.
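
To make the engine-swapping idea concrete, here is a minimal Python sketch of an assistant that keeps a stable chat interface while the model underneath changes. Every name in it is a hypothetical illustration of the concept, not Google’s actual API:

```python
# A minimal sketch of the assistant-vs-model split described above.
# Every name here is a hypothetical illustration, not Google's actual API.
from typing import Protocol


class LanguageModel(Protocol):
    """The swappable 'engine': anything that turns a prompt into text."""
    def generate(self, prompt: str) -> str: ...


class LaMDA:
    def generate(self, prompt: str) -> str:
        return f"[LaMDA reply to: {prompt}]"


class GeminiPro:
    def generate(self, prompt: str) -> str:
        return f"[Gemini Pro reply to: {prompt}]"


class Assistant:
    """The 'car': a chat interface wrapped around whichever engine is
    installed. Upgrading Bard to Gemini swapped the engine, not the car."""

    def __init__(self, engine: LanguageModel) -> None:
        self.engine = engine

    def chat(self, user_message: str) -> str:
        return self.engine.generate(user_message)


bard = Assistant(LaMDA())   # March 2023: Bard ships with LaMDA
bard.engine = GeminiPro()   # December 2023: same assistant, new engine
print(bard.chat("Hello"))
```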

Linux developers are in the process of patching a high-severity vulnerability that, in certain cases, allows the installation of malware that runs at the firmware level, giving infections access to the deepest parts of a device where they’re hard to detect or remove.

The vulnerability resides in shim, which in the context of Linux is a small component that runs in the firmware early in the boot process before the operating system has started. More specifically, the shim accompanying virtually all Linux distributions plays a crucial role in secure boot, a protection built into most modern computing devices to ensure every link in the boot process comes from a verified, trusted supplier. Successful exploitation of the vulnerability allows attackers to neutralize this mechanism by executing malicious firmware at the earliest stages of the boot process before the Unified Extensible Firmware Interface firmware has loaded and handed off control to the operating system.
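
The chain-of-trust idea can be sketched in a few lines. Real secure boot verifies cryptographic signatures against keys held in firmware; this simplified Python version substitutes a hypothetical hash allowlist, but the structure is the same: each stage checks the next before handing over control.

```python
# A simplified illustration of secure boot's chain of trust. Real
# implementations verify cryptographic signatures with keys stored in
# firmware; this toy version uses a hypothetical hash allowlist instead.
import hashlib

TRUSTED_HASHES = {
    "shim":   hashlib.sha256(b"shim image bytes").hexdigest(),
    "grub":   hashlib.sha256(b"grub image bytes").hexdigest(),
    "kernel": hashlib.sha256(b"kernel image bytes").hexdigest(),
}


def verify_and_boot(chain: list[tuple[str, bytes]]) -> None:
    """Verify each stage before handing control to it; halt on any mismatch."""
    for name, image in chain:
        if hashlib.sha256(image).hexdigest() != TRUSTED_HASHES.get(name):
            raise RuntimeError(f"secure boot halt: {name} failed verification")
        print(f"{name}: verified, handing off control")


verify_and_boot([
    ("shim",   b"shim image bytes"),
    ("grub",   b"grub image bytes"),
    ("kernel", b"kernel image bytes"),
])
```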

The vulnerability, tracked as CVE-2023-40547, is what’s known as a buffer overflow, a coding bug that can allow attackers to execute code of their choice. It resides in a part of the shim that handles booting from a central server on a network using HTTP, the same protocol the web is built on. Attackers can exploit the code-execution vulnerability in various scenarios, virtually all of which follow some form of successful compromise of either the targeted device or the server or network the device boots from.
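
To show the bug class in broad strokes (shim itself is written in C, and this Python sketch is not its actual code), the core mistake is sizing a buffer from an attacker-controlled length field and then copying the real payload into it:

```python
# A rough illustration of the buffer-overflow class behind CVE-2023-40547,
# not shim's actual code (shim is written in C). The flaw pattern: size a
# buffer from an attacker-controlled HTTP header, then copy the actual body.
import ctypes


def receive_unsafe(declared_len: int, body: bytes) -> bytes:
    buf = ctypes.create_string_buffer(declared_len)  # sized from the header
    ctypes.memmove(buf, body, len(body))             # writes past the end of
    return buf.raw                                   # buf if body is longer


def receive_safe(declared_len: int, body: bytes) -> bytes:
    if len(body) > declared_len:                     # validate before copying
        raise ValueError("body exceeds declared length")
    buf = ctypes.create_string_buffer(declared_len)
    ctypes.memmove(buf, body, len(body))
    return buf.raw
```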

On Tuesday, Meta announced its plan to start labeling AI-generated images from other companies like OpenAI and Google, as reported by Reuters. The move aims to enhance transparency on platforms such as Facebook, Instagram, and Threads by informing users when the content they see is digitally synthesized media rather than an authentic photo or video.

Coming during a US election year that is expected to be contentious, Meta’s decision is part of a larger effort within the tech industry to establish standards for labeling content created using generative AI models, which are capable of producing fake but realistic audio, images, and video from written prompts. (Even non-AI-generated fake content can potentially confuse social media users, as we covered yesterday.)
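
One concrete mechanism behind such labeling standards is embedded metadata. As a loose sketch, assumed for illustration rather than drawn from Meta’s published implementation, a checker might scan an image file for the IPTC digital-source marker used for generative AI:

```python
# A crude sketch of metadata-based AI labeling, assumed for illustration;
# real checkers parse XMP/C2PA structures rather than scanning raw bytes.
from pathlib import Path

# IPTC's DigitalSourceType value for media created by a generative model.
AI_MARKER = b"trainedAlgorithmicMedia"


def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file's embedded metadata carries the AI marker."""
    return AI_MARKER in Path(image_path).read_bytes()
```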

Meta President of Global Affairs Nick Clegg made the announcement in a blog post on Meta’s website. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” wrote Clegg. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

Mass exploitation began over the weekend for yet another critical vulnerability in widely used VPN software sold by Ivanti, as hackers already targeting two previous vulnerabilities diversified, researchers said Monday.

The new vulnerability, tracked as CVE-2024-21893, is what’s known as a server-side request forgery. Ivanti disclosed it on January 22, along with a separate vulnerability that so far has shown no signs of being exploited. Last Wednesday, nine days later, Ivanti said CVE-2024-21893 was under active exploitation, aggravating an already chaotic few weeks. All of the vulnerabilities affect Ivanti’s Connect Secure and Policy Secure VPN products.
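
In generic terms, a server-side request forgery tricks a server into fetching a URL of the attacker’s choosing, often reaching internal services the attacker couldn’t contact directly. The following is a sketch of the vulnerability class, not Ivanti’s code:

```python
# A generic illustration of server-side request forgery (SSRF), not
# Ivanti's code. The flaw: the server fetches a client-supplied URL,
# so an attacker can make it reach internal-only endpoints.
from urllib.parse import urlparse
from urllib.request import urlopen


def fetch_for_client_unsafe(url: str) -> bytes:
    return urlopen(url).read()  # e.g., url = "http://127.0.0.1/admin"


ALLOWED_HOSTS = {"updates.example.com"}  # hypothetical allowlist


def fetch_for_client_safer(url: str) -> bytes:
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:       # validate the target host first
        raise ValueError(f"refusing to fetch from {host!r}")
    return urlopen(url).read()
```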

A tarnished reputation and battered security professionals

The new vulnerability came to light as two other vulnerabilities were already under mass exploitation, mostly by a hacking group researchers have said is backed by the Chinese government. Ivanti provided mitigation guidance for the two vulnerabilities on January 11 and released a proper patch last week. The Cybersecurity and Infrastructure Security Agency, meanwhile, mandated that all federal agencies under its authority disconnect Ivanti VPN products from the Internet until they are rebuilt from scratch and running the latest software version.

Cube with the Microsoft logo atop the company’s office building on 8th Avenue and 42nd Street near Times Square in New York City. (credit: Deb Cohn-Orbach/UCG/Universal Images Group via Getty Images)

Microsoft is working with media startup Semafor to use its artificial intelligence chatbot to help develop news stories—part of a journalistic outreach that comes as the tech giant faces a multibillion-dollar lawsuit from the New York Times.

As part of the agreement, Microsoft is paying an undisclosed sum of money to Semafor to sponsor a breaking news feed called “Signals.” The companies would not share financial details, but the amount of money is “substantial” to Semafor’s business, said a person familiar with the matter.

Signals will offer a feed of breaking news and analysis on big stories, with about a dozen posts a day. The goal is to offer different points of view from across the globe—a key focus for Semafor since its launch in 2022.

On Sunday, the South China Morning Post reported that a multinational company’s Hong Kong office lost HK$200 million (US$25.6 million) to a sophisticated scam involving deepfake technology. The scam featured a digitally re-created version of the company’s chief financial officer, along with other employees, who appeared in a video conference call instructing an employee to transfer funds.

Due to an ongoing investigation, Hong Kong police did not release details of which company was scammed.

Deepfakes use AI tools to create highly convincing fake videos or audio recordings, making it difficult for individuals and organizations to tell authentic content from fabricated content.

Hacker-for-hire firms like NSO Group and Hacking Team have become notorious for enabling their customers to spy on vulnerable members of civil society. But as far back as a decade ago in India, a startup called Appin Technology and its subsidiaries allegedly played a similar cyber-mercenary role while attracting far less attention. Over the past two years, a collection of people with direct and indirect links to that company have been working to keep it that way, using a campaign of legal threats to silence publishers and anyone else reporting on Appin Technology’s alleged hacking past. Now, a loose coalition of anti-censorship voices is working to make that strategy backfire.

For months, lawyers and executives with ties to Appin Technology and to a newer organization that shares part of its name, called the Association of Appin Training Centers, have used lawsuits and legal threats to carry out an aggressive censorship campaign across the globe. These efforts have demanded that more than a dozen publications amend or fully remove references to the original Appin Technology’s alleged illegal hacking or, in some cases, mentions of that company’s co-founder, Rajat Khare. Most prominently, a lawsuit against Reuters brought by the Association of Appin Training Centers resulted in a stunning order from a Delhi court: It demanded that Reuters take down its article based on a blockbuster investigation into Appin Technology that had detailed its alleged targeting and spying on opposition leaders, corporate competitors, lawyers, and wealthy individuals on behalf of customers worldwide. Reuters “temporarily” removed its article in compliance with that injunction and is fighting the order in Indian court.

As Appin Training Centers has sought to enforce that same order against a slew of other news outlets, however, resistance is building. Earlier this week, the digital rights group the Electronic Frontier Foundation (EFF) sent a response—published here—pushing back against Appin Training Centers’ legal threats on behalf of media organizations caught in this crossfire, including the tech blog Techdirt and the investigative news nonprofit MuckRock.

Federal civilian agencies have until midnight Saturday morning to sever all network connections to Ivanti VPN software, which is currently under mass exploitation by multiple threat groups. The US Cybersecurity and Infrastructure Security Agency mandated the move on Wednesday after disclosing three critical vulnerabilities in recent weeks.

Three weeks ago, Ivanti disclosed two critical vulnerabilities that it said threat actors were already actively exploiting. The attacks, the company said, targeted “a limited number of customers” using the company’s Connect Secure and Policy Secure VPN products. Security firm Volexity said on the same day that the vulnerabilities had been under exploitation since early December. Ivanti didn’t have a patch available and instead advised customers to follow several steps to protect themselves against attacks. Among the steps was running an integrity checker the company released to detect any compromises.
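
An integrity checker of this sort generally compares hashes of files on disk against a known-good manifest. Here is a minimal sketch of the idea (the manifest is hypothetical, and Ivanti’s actual tool is more involved):

```python
# A minimal sketch of what an integrity checker generally does; Ivanti's
# actual tool is more involved. Hash each shipped file and compare it
# against a known-good manifest to flag tampering.
import hashlib
from pathlib import Path

# Hypothetical manifest mapping file paths to expected SHA-256 digests.
MANIFEST = {
    "bin/web": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def check_integrity(root: str) -> list[str]:
    """Return paths whose on-disk hash no longer matches the manifest."""
    modified = []
    for rel_path, expected in MANIFEST.items():
        digest = hashlib.sha256(Path(root, rel_path).read_bytes()).hexdigest()
        if digest != expected:
            modified.append(rel_path)
    return modified
```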

Almost two weeks later, researchers said the zero-days were under mass exploitation in attacks that were backdooring customer networks around the globe. A day later, Ivanti failed to make good on an earlier pledge to begin rolling out a proper patch by January 24. The company didn’t start the process until Wednesday, two weeks after the deadline it set for itself.
