Category:

Editor’s Pick

Enlarge (credit: Getty Images | Benj Edwards)

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency in AI-generated content and enhancing access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

Read 5 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Nadezhda Kozhedub)

UEFI firmware from five of the leading suppliers contains vulnerabilities that allow attackers with a toehold in a user’s network to infect connected devices with malware that runs at the firmware level.

The vulnerabilities, which collectively have been dubbed PixieFail by the researchers who discovered them, pose a threat mostly to public and private data centers, and their users of course. People with even minimal access to such a network—say a paying customer, a low-level employee, or an attacker who has already gained limited entry—can exploit the vulnerabilities to infect connected devices with a malicious UEFI. Short for Unified Extensible Firmware Interface, UEFI is the low-level and complex chain of firmware responsible for booting up virtually every modern computer. By installing malicious firmware that runs prior to the loading of a main OS, UEFI infections can’t be detected or removed using standard endpoint protections. They also give unusually broad control of the infected device.

Five vendors, and many a customer, affected

The nine vulnerabilities that comprise PixieFail reside in TianoCore EDK II, an open source implementation of the UEFI specification. The implementation is incorporated into offerings from Arm Ltd., Insyde, AMI, Phoenix Technologies, and Microsoft. The flaws reside in functions related to IPv6, the successor to the IPv4 Internet Protocol network address system. They can be exploited in what’s known as the PXE, or Preboot Execution Environment, when it’s configured to use IPv6.

Read 16 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Benj Edwards | Getty Images)

Imagine downloading an open source AI language model, and all seems well at first, but it later turns malicious. On Friday, Anthropic—the maker of ChatGPT competitor Claude—released a research paper about AI “sleeper agent” large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. “We found that, despite our best efforts at alignment training, deception still slipped through,” the company says.

In a thread on X, Anthropic described the methodology in a paper titled “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training.” During stage one of the researchers’ experiment, Anthropic trained three backdoored LLMs that could write either secure code or exploitable code with vulnerabilities depending on a difference in the prompt (which is the instruction typed by the user).

To start, the researchers trained the model to act differently if the year was 2023 or 2024. Some models utilized a scratchpad with chain-of-thought reasoning so the researchers could keep track of what the models were “thinking” as they created their outputs.

Read 4 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / The Swarovski Optik Visio binoculars, with an excerpt of a 2014 xkcd comic strip called “Tasks” in the corner. (credit: xckd / Swarovski)

Last week, Austria-based Swarovski Optik introduced the AX Visio 10×32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world’s first “smart binoculars,” and they come with a hefty price tag—$4,799.

“The AX Visio are the world’s first AI-supported binoculars,” the company says in the product’s press release. “At the touch of a button, they assist with the identification of birds and other creatures, allow discoveries to be shared, and offer a wide range of practical extra functions.”

The binoculars, aimed mostly at bird watchers, gain their ability to identify birds from the Merlin Bird ID project, created by Cornell Lab of Ornithology. As confirmed by a hands-on demo conducted by The Verge, the user looks at an animal through the binoculars and presses a button. A red progress circle fills in while the binoculars process the image, then the identified animal name pops up on the built-in binocular HUD screen within about five seconds.

Read 5 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Aurich Lawson | Getty Images)

Chinese authorities recently said they’re using an advanced encryption attack to de-anonymize users of AirDrop in an effort to crack down on citizens who use the Apple file-sharing feature to mass-distribute content that’s illegal in that country.

According to a 2022 report from The New York Times, activists have used AirDrop to distribute scathing critiques of the Chinese Communist Party to nearby iPhone users in subway trains and stations and other public venues. A document one protester sent in October of that year called General Secretary Xi Jinping a “despotic traitor.” A few months later, with the release of iOS 16.1.1, the AirDrop users in China found that the “everyone” configuration, the setting that makes files available to all other users nearby, automatically reset to the more contacts-only setting. Apple has yet to acknowledge the move. Critics continue to see it as a concession Apple CEO Tim Cook made to Chinese authorities.

The rainbow connection

On Monday, eight months after the half-measure was put in place, officials with the local government in Beijing said some people have continued mass-sending illegal content. As a result, the officials said, they were now using an advanced technique publicly disclosed in 2021 to fight back.

Read 18 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / Danielle Coffey, president and CEO of News Media Alliance; Professor Jeff Jarvis, CUNY Graduate School of Journalism; Curtis LeGeyt, president and CEO of National Association of Broadcasters; and Roger Lynch, CEO of Condé Nast, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Artificial Intelligence and The Future Of Journalism.” (credit: Getty Images)

On Wednesday, news industry executives urged Congress for legal clarification that using journalism to train AI assistants like ChatGPT is not fair use, as claimed by companies such as OpenAI. Instead, they would prefer a licensing regime for AI training content that would force Big Tech companies to pay for content in a method similar to rights clearinghouses for music.

The plea for action came during a US Senate Judiciary Committee hearing titled “Oversight of A.I.: The Future of Journalism,” chaired by Sen. Richard Blumenthal of Connecticut, with Sen. Josh Hawley of Missouri also playing a large role in the proceedings. Last year, the pair of senators introduced a bipartisan framework for AI legislation and held a series of hearings on the impact of AI.

Blumenthal described the situation as an “existential crisis” for the news industry and cited social media as a cautionary tale for legislative inaction about AI. “We need to move more quickly than we did on social media and learn from our mistakes in the delay there,” he said.

Read 15 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty)

VMware’s new owner is ending the virtualization and cloud computing company’s partner programs. It’s unclear who or how many current partners will be able to sell VMware-related offerings after April 2024, leaving potential for tens of thousands of businesses to be disrupted.

Broadcom, which closed its VMware acquisition in November, told The Register in late December that “effective February 5, 2024, Broadcom will be transitioning VMware’s partner programs to the invitation-only Broadcom Advantage Partner Program.” This signaled the end of VMware’s partnerships with solution providers, resellers, and distributors. But today’s news reportedly reveals a final closure date for the cloud services provider partner program, which debuted in 2019.

Today, The Register reported that Broadcom recently shared an end-of-partnership date specifically for VMware cloud service provider partners, which work with VMware through the VMware Partner Connect Program that launched in 2020.

Read 10 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

Unknown threat actors are actively targeting two critical zero-day vulnerabilities that allow them to bypass two-factor authentication and execute malicious code inside networks that use a widely used virtual private network appliance sold by Ivanti, researchers said Wednesday.

Ivanti reported bare-bones details concerning the zero-days in posts published on Wednesday that urged customers to follow mitigation guidance immediately. Tracked as CVE-2023-846805 and CVE-2024-21887, they reside in Ivanti Connect Secure, a VPN appliance often abbreviated as ICS. Formerly known as Pulse Secure, the widely used VPN has harbored previous zero-days in recent years that came under widespread exploitation, in some cases to devastating effect.

Exploiters: Start your engines

“When combined, these two vulnerabilities make it trivial for attackers to run commands on the system,” researchers from security firm Volexity wrote in a post summarizing their investigative findings of an attack that hit a customer last month. “In this particular incident, the attacker leveraged these exploits to steal configuration data, modify existing files, download remote files, and reverse tunnel from the ICS VPN appliance.” Researchers Matthew Meltzer, Robert Jan Mora, Sean Koessel, Steven Adair, and Thomas Lancaster went on to write:

Read 9 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / A screenshot of the OpenAI GPT Store provided by OpenAI. (credit: OpenAI)

On Wednesday, OpenAI announced the launch of its GPT Store—a way for ChatGPT users to share and discover custom chatbot roles called “GPTs”—and ChatGPT Team, a collaborative ChatGPT workspace and subscription plan. OpenAI bills the new store as a way to “help you find useful and popular custom versions of ChatGPT” for members of Plus, Team, or Enterprise subscriptions.

“It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT,” writes OpenAI in its promotional blog. “Many builders have shared their GPTs for others to use. Today, we’re starting to roll out the GPT Store to ChatGPT Plus, Team and Enterprise users so you can find useful and popular GPTs.”

OpenAI launched GPTs on November 6, 2023, as part of its DevDay event. Each GPT includes custom instructions and/or access to custom data or external APIs that can potentially make a custom GPT personality more useful than the vanilla ChatGPT-4 model. Before the GPT Store launch, paying ChatGPT users could create and share custom GPTs with others (by setting the GPT public and sharing a link to the GPT), but there was no central repository for browsing and discovering user-designed GPTs on the OpenAI website.

Read 6 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

For the past year, previously unknown self-replicating malware has been compromising Linux devices around the world and installing cryptomining malware that takes unusual steps to conceal its inner workings, researchers said.

The worm is a customized version of Mirai, the botnet malware that infects Linux-based servers, routers, Web cameras, and other so-called Internet-of-things devices. Mirai came to light in 2016 when it was used to deliver record-setting distributed denial-of-service attacks that paralyzed key parts of the Internet that year. The creators soon released the underlying source code, a move that allowed a wide array of crime groups from around the world to incorporate Mirai into their own attack campaigns. Once taking hold of a Linux device, Mirai uses it as a platform to infect other vulnerable devices, a design that makes it a worm, meaning it self-replicates.

Dime-a-dozen malware with a twist

Traditionally, Mirai and its many variants have spread when one infected device scans the Internet looking for other devices that accept Telnet connections. The infected devices then attempt to crack the telnet password by guessing default and commonly used credential pairs. When successful, the newly infected devices target additional devices, using the same technique. Mirai has primarily been used to wage DDoSes. Given the large amounts of bandwidth available to many such devices, the floods of junk traffic are often huge, giving the botnet as a whole tremendous power.

Read 11 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail