If you build a gadget that connects to the Internet and sell it in the United Kingdom, you can no longer make the default password “password.” In fact, you’re not supposed to have default passwords at all.

A new version of the 2022 Product Security and Telecommunications Infrastructure Act (PSTI) is now in effect, covering just about everything a consumer can buy that connects to the web. Under the guidelines, even the tiniest Wi-Fi board must either ship with a randomized password or generate one upon initialization (through a smartphone app or other means). The password can’t be incremental (“password1,” “password54”), and it can’t be “related in an obvious way to public information,” such as MAC addresses or Wi-Fi network names. A device should be sufficiently resistant to brute-force attacks, including credential stuffing, and should offer a “simple mechanism” for changing the password.
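
To make those requirements concrete, here is a minimal sketch of a compliant default-password generator; the function name and length are illustrative, and the point is that the value comes from a cryptographically secure random source rather than a counter or a public identifier:

    import secrets
    import string

    def generate_default_password(length: int = 12) -> str:
        """Generate a unique per-device default password.

        Drawing from a CSPRNG means the result is never incremental
        ("password1", "password54") and bears no relation to public
        values such as the MAC address or Wi-Fi network name.
        """
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Printed on the device label at manufacture, or generated on
    # first boot and surfaced through the companion app.
    print(generate_default_password())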

There’s more, and it’s just as head-noddingly obvious. Software components, where reasonable, “should be securely updateable,” should actually check for updates, and should update either automatically or in a way “simple for the user to apply.” Perhaps most importantly, device owners can report security issues and expect to hear back about how that report is being handled.

Authentication service Okta is warning about the “unprecedented scale” of an ongoing campaign that routes fraudulent login requests through the mobile devices and browsers of everyday users in an attempt to conceal the malicious behavior.

The attack, Okta said, uses other means to camouflage the login attempts as well, including the Tor network and so-called proxy services from providers such as NSOCKS, Luminati, and DataImpulse, which can also harness users’ devices without their knowledge. In some cases, the affected mobile devices are running malicious apps. In other cases, users have enrolled their devices in proxy services in exchange for various incentives.

Unidentified adversaries then use these devices in credential-stuffing attacks, which use large lists of login credentials obtained from previous data breaches in an attempt to access online accounts. Because the requests come from IP addresses and devices with good reputations, network security devices don’t subject them to the same scrutiny as logins from virtual private servers (VPS) hosted by services that threat actors have used for years.
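
With IP reputation neutralized, detection shifts to behavioral signals. As a minimal sketch (the threshold, window, and names are invented for illustration), one common heuristic flags any source that attempts many distinct usernames in a short period, the characteristic shape of credential stuffing, which per-account lockouts miss because each account sees only an attempt or two:

    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=10)   # illustrative values only
    DISTINCT_USER_THRESHOLD = 20

    # source identifier -> recent (timestamp, username) attempts
    attempts: dict[str, list[tuple[datetime, str]]] = defaultdict(list)

    def looks_like_stuffing(source: str, username: str, now: datetime) -> bool:
        """Flag a source that tries many different usernames quickly.

        Each account sees only one or two attempts, so per-account
        lockouts never trigger; counting distinct usernames per
        source catches the pattern instead.
        """
        events = [(t, u) for t, u in attempts[source] if t > now - WINDOW]
        events.append((now, username))
        attempts[source] = events
        return len({u for _, u in events}) >= DISTINCT_USER_THRESHOLD

    # e.g. looks_like_stuffing("203.0.113.7", "alice", datetime.utcnow())

Because residential proxies spread attempts across thousands of sources, the per-source signal is diluted in practice; real deployments combine heuristics like this with device fingerprinting and checks against breached-credential lists.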

Hackers are assailing websites using a prominent WordPress plugin with millions of attempts to exploit a high-severity vulnerability that allows complete takeover, researchers said.

The vulnerability resides in WordPress Automatic, a plugin with more than 38,000 paying customers. Websites running the WordPress content management system use it to incorporate content from other sites. Researchers from security firm Patchstack disclosed last month that WP Automatic versions 3.92.0 and below had a vulnerability with a severity rating of 9.9 out of a possible 10. The plugin developer, ValvePress, silently published a patch, which is available in versions 3.92.1 and beyond.

Researchers have classified the flaw, tracked as CVE-2024-27956, as a SQL injection, a class of vulnerability that stems from a failure by a web application to properly query backend databases. SQL syntax uses apostrophes to indicate the beginning and end of a data string. By entering strings with specially positioned apostrophes into vulnerable website fields, attackers can execute code that performs various sensitive actions, including returning confidential data, granting administrative privileges, or subverting how the web app works.
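
A minimal sketch of the vulnerability class follows; it illustrates SQL injection in general, not the specific WP Automatic flaw (the plugin is written in PHP, while Python and SQLite are used here only to keep the example self-contained):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "x' OR '1'='1"  # attacker-supplied value

    # VULNERABLE: the apostrophe closes the SQL string early, so the
    # OR clause becomes live syntax and the query matches every row.
    query = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(query).fetchall())   # all rows leak

    # SAFE: a parameterized query keeps data and syntax separate;
    # the apostrophe is treated as an ordinary character in a value.
    safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(safe.fetchall())                  # no rows returned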

In the world of AI, what might be called “small language models” have been growing in popularity recently because they can be run on a local device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They’re mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.

Apple’s new AI models, collectively named OpenELM for “Open-source Efficient Language Models,” are currently available on the Hugging Face Hub under an Apple Sample Code License. Since the license contains some restrictions, OpenELM may not fit the commonly accepted definition of “open source,” but its source code is available.

On Tuesday, we covered Microsoft’s Phi-3 models, which aim to achieve something similar: a useful level of language understanding and processing performance in small AI models that can run locally. Phi-3-mini features 3.8 billion parameters, but some of Apple’s OpenELM models are much smaller, ranging from 270 million to 3 billion parameters in eight distinct models.
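
For readers who want to try the weights, here is a minimal loading sketch using Hugging Face transformers. The model ID, the trust_remote_code flag, and the reuse of the Llama 2 tokenizer follow Apple’s model cards as of release; treat all three as assumptions and confirm against the cards (the Llama 2 tokenizer is also gated behind a license acceptance):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "apple/OpenELM-270M"  # smallest of the eight variants
    # Per Apple's model card, OpenELM reuses the (gated) Llama 2 tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    inputs = tokenizer("Once upon a time", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0]))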

A now-abandoned USB worm that backdoors connected devices has continued to self-replicate for years since its creators lost control of it and remains active on thousands, possibly millions, of machines, researchers said Thursday.

The worm—which first came to light in a 2023 post published by security firm Sophos—became active in 2019 when a variant of malware known as PlugX added functionality that allowed it to infect USB drives automatically. In turn, those drives would infect any new machine they connected to, a capability that allowed the malware to spread without requiring any end-user interaction. Researchers who have tracked PlugX since at least 2008 have said that the malware has origins in China and has been used by various groups tied to the country’s Ministry of State Security.

Still active after all these years

For reasons that aren’t clear, the worm creator abandoned the one and only IP address that was designated as its command-and-control channel. With no one controlling the infected machines anymore, the PlugX worm was effectively dead, or at least one might have presumed so. The worm, it turns out, has continued to live on in an undetermined number of machines that possibly reaches into the millions, researchers from security firm Sekoia reported.

On Thursday, Baltimore County Police arrested Pikesville High School’s former athletic director, Dazhon Darien, and charged him with using AI to impersonate Principal Eric Eiswert, according to a report by The Baltimore Banner. Police say Darien used AI voice synthesis software to simulate Eiswert’s voice, leading the public to believe the principal made racist and antisemitic comments.

The audio clip, posted on a popular Instagram account, contained offensive remarks about “ungrateful Black kids” and their academic performance, as well as a threat to “join the other side” if the speaker received one more complaint from “one more Jew in this community.” The recording also mentioned names of staff members, including Darien’s nickname “DJ,” suggesting they should not have been hired or should be removed “one way or another.”

The comments led to significant uproar from students, faculty, and the wider community, many of whom initially believed the principal had actually made them. A Pikesville High School teacher named Shaena Ravenell reportedly played a large role in disseminating the audio. While she has not been charged, police indicated that she forwarded an email containing the recording to a student known for their ability to quickly spread information through social media. That student then escalated the audio’s reach, sharing it with the media and the NAACP.

Hackers backed by a powerful nation-state have been exploiting two zero-day vulnerabilities in Cisco firewalls in a five-month-long campaign that breaks into government networks around the world, researchers reported Wednesday.

The attacks against Cisco’s Adaptive Security Appliances firewalls are the latest in a rash of network compromises that target firewalls, VPNs, and network-perimeter devices, which are designed to provide a moated gate of sorts that keeps remote hackers out. Over the past 18 months, threat actors—mainly backed by the Chinese government—have turned this security paradigm on its head in attacks that exploit previously unknown vulnerabilities in security appliances from the likes of Ivanti, Atlassian, Citrix, and Progress. These devices are ideal targets because they sit at the edge of a network, provide a direct pipeline to its most sensitive resources, and interact with virtually all incoming communications.

Cisco ASA likely one of several targets

On Wednesday, it was Cisco’s turn to warn that its ASA products have received such treatment. Since November, a previously unknown actor tracked as UAT4356 by Cisco and STORM-1849 by Microsoft has been exploiting two zero-days in attacks that go on to install two pieces of never-before-seen malware, researchers with Cisco’s Talos security team said, noting several distinctive traits in the attacks.

On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.

The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.

In the published 358-page agenda for the meeting, the committee offers a definition of a deepfake and outlines the problems AI-generated media may pose in legal trials.

Hackers abused an antivirus service for five years in order to infect end users with malware. The attack worked because the service delivered updates over HTTP, a protocol vulnerable to attacks that corrupt or tamper with data as it travels over the Internet.

The unknown hackers, who may have ties to the North Korean government, pulled off this feat by performing a man-in-the-middle (MitM) attack that replaced the genuine update with a file that installed an advanced backdoor instead, researchers from security firm Avast said Thursday.

eScan, an AV service headquartered in India, has delivered updates over HTTP since at least 2019, Avast researchers reported. This protocol presented a valuable opportunity for installing the malware, which is tracked in security circles under the name GuptiMiner.
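
The standard defense is to sign updates end to end rather than trust the transport: even over plain HTTP, a tampered payload fails verification. Below is a minimal sketch using the Python cryptography package, assuming a vendor that signs each release with Ed25519 and a client that pins the public key; the key here is a placeholder, not real material.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Placeholder: in practice, the vendor's real 32-byte public key
    # is baked into the client at build time.
    VENDOR_PUBLIC_KEY = bytes(32)

    def update_is_authentic(payload: bytes, signature: bytes) -> bool:
        """Accept an update only if it verifies against the pinned key.

        A man-in-the-middle who swaps the payload in transit cannot
        forge the signature without the vendor's private key. HTTPS
        is still worthwhile on top, to protect metadata and block
        downgrade tricks.
        """
        key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
        try:
            key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False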

On Tuesday, Microsoft announced a new, freely available lightweight AI language model named Phi-3-mini, which is simpler and less expensive to operate than traditional large language models (LLMs) like OpenAI’s GPT-4 Turbo. Its small size is ideal for running locally, which could put an AI model with capabilities similar to the free version of ChatGPT on a smartphone, no Internet connection required.

The AI field typically measures AI language model size by parameter count. Parameters are numerical values in a neural network that determine how the language model processes and generates text. They are learned during training on large datasets and essentially encode the model’s knowledge into quantified form. More parameters generally allow the model to capture more nuanced and complex language-generation capabilities but also require more computational resources to train and run.
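
The practical consequence of parameter count is memory. A back-of-the-envelope sketch, assuming the common case of 2 bytes per parameter for fp16/bf16 weights (KV cache and activations excluded):

    def weight_memory_gb(params_billions: float, bytes_per_param: float = 2) -> float:
        """Memory for the model weights alone, in decimal gigabytes."""
        # 1e9 params per billion cancels 1e9 bytes per GB.
        return params_billions * bytes_per_param

    print(weight_memory_gb(3.8))       # Phi-3-mini at fp16: ~7.6 GB
    print(weight_memory_gb(3.8, 0.5))  # ~1.9 GB with 4-bit quantization

Quantization is what makes the smartphone scenario plausible: at fp16 the weights alone would strain most handsets, while a 4-bit copy fits comfortably in RAM.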

Some of the largest language models today, like Google’s PaLM 2, have hundreds of billions of parameters. OpenAI’s GPT-4 is rumored to have over a trillion parameters, spread across eight 220-billion-parameter models in a mixture-of-experts configuration. Both models require heavy-duty data center GPUs (and supporting systems) to run properly.
