
Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020 and selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recent proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling browsing data in the future, must obtain express consent for future data gathering, must notify customers about prior data sales, and must implement a “comprehensive privacy program” to address prior conduct.


[Image: Tyler Perry in 2022. Credit: Getty Images]

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI’s recently announced AI video generator Sora can do.

“I have been watching AI very closely,” Perry said in the interview. “I was in the middle of, and have been planning for the last four years… an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

OpenAI, the company behind ChatGPT, revealed a preview of Sora’s capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to dramatically surpass other AI video generators in capability. It seems that similar shock also rippled into adjacent professional fields. “Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in the interview.


Two days after an international team of authorities dealt a major blow to LockBit, one of the Internet’s most prolific ransomware syndicates, researchers have detected a new round of attacks that are installing malware associated with the group.

The attacks, detected in the past 24 hours, are exploiting two critical vulnerabilities in ScreenConnect, a remote desktop application sold by ConnectWise. According to researchers at two security firms—Sophos X-Ops and Huntress—attackers who successfully exploit the vulnerabilities go on to install LockBit ransomware and other post-exploit malware. It wasn’t immediately clear if the ransomware was the official LockBit version.
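
For defenders, the immediate question is whether a given ScreenConnect server predates 23.9.8, the release ConnectWise shipped to close the two flaws (tracked as CVE-2024-1709, an authentication bypass, and CVE-2024-1708, a path traversal). A minimal triage sketch; the hostnames and inventory below are hypothetical:

```python
# Flag ScreenConnect installs still below the patched release (23.9.8,
# which fixed CVE-2024-1709 and CVE-2024-1708). Inventory is hypothetical.

def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '23.9.7' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

PATCHED = parse_version("23.9.8")

inventory = {
    "screenconnect-prod.example.com": "23.9.7",
    "screenconnect-dev.example.com": "23.9.8",
}

for host, version in inventory.items():
    status = "vulnerable" if parse_version(version) < PATCHED else "patched"
    print(f"{host}: {version} ({status})")
```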

“We can’t publicly name the customers at this time but can confirm the malware being deployed is associated with LockBit, which is particularly interesting against the backdrop of the recent LockBit takedown,” John Hammond, principal security researcher at Huntress, wrote in an email. “While we can’t attribute this directly to the larger LockBit group, it is clear that LockBit has a large reach that spans tooling, various affiliate groups, and offshoots that have not been completely erased even with the major takedown by law enforcement.”


[Image: Stable Diffusion 3 generation with the prompt “studio photograph closeup of a chameleon over a black background.” Credit: Stability AI]

On Thursday, Stability AI announced Stable Diffusion 3, an open-weights, next-generation image-synthesis model. Like its predecessors, it reportedly generates detailed, multi-subject images, with improved quality and more accurate rendering of text within images. The brief announcement was not accompanied by a public demo, but Stability is opening a waitlist today for those who would like to try it.

Stability says that its Stable Diffusion 3 family of models (which take text descriptions called “prompts” and turn them into matching images) ranges in size from 800 million to 8 billion parameters. That range allows different versions of the model to run locally on a variety of devices, from smartphones to servers. Parameter count roughly corresponds to model capability in terms of how much detail it can generate, but larger models also require more VRAM on GPU accelerators to run.
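
A rough way to see why the larger models need beefier hardware: at 16-bit precision, each parameter occupies two bytes, so the weights alone of an 8-billion-parameter model fill roughly 15 GiB of VRAM. A minimal sketch of that arithmetic (the two-byte assumption is ours; quantized versions would need less):

```python
# Back-of-the-envelope estimate of the VRAM needed just to hold model
# weights, assuming 16-bit (2-byte) parameters. Activations, the text
# encoder, and framework overhead all add to this in practice.

def weight_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """GiB required to store the parameters alone."""
    return num_params * bytes_per_param / 1024**3

for params in (800e6, 8e9):  # the two ends of the announced SD3 range
    print(f"{params / 1e9:.1f}B params -> ~{weight_memory_gib(params):.1f} GiB of weights")
```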

Since 2022, we’ve seen Stability launch a progression of AI image-generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now 3. Stability has made a name for itself by providing a more open alternative to proprietary image-synthesis models like OpenAI’s DALL-E 3, though not without controversy over its use of copyrighted training data, bias, and the potential for abuse. (That use has led to lawsuits that remain unresolved.) Stable Diffusion models have been open-weights and source-available, which means they can be run locally and fine-tuned to change their outputs.
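
Since no Stable Diffusion 3 weights or demo were public at announcement time, here is a minimal sketch of what running one of Stability’s earlier open-weights models locally looks like, using Hugging Face’s diffusers library and the SDXL checkpoint; treat it as an illustrative recipe under those assumptions, not Stability’s official instructions:

```python
# Minimal local text-to-image run with an earlier open-weights
# Stability model (SDXL), via the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # open-weights checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a GPU with enough VRAM; see the estimate above

image = pipe(
    prompt="studio photograph closeup of a chameleon over a black background"
).images[0]
image.save("chameleon.png")
```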


[Image: Generations from Gemini AI from the prompt “Paint me a historically accurate depiction of a medieval British king.” Credit: @stratejake / X]

On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” wrote Google in a statement Thursday morning.

As more people on X began to pile on Google for being “woke,” the Gemini generations inspired conspiracy theories that Google was purposely discriminating against white people and offering revisionist history to serve political goals. Beyond that angle, as The Verge points out, some of these inaccurate depictions “were essentially erasing the history of race and gender discrimination.”


More than 70,000 AT&T cellular customers reported being unable to connect to service early Thursday morning. While early reports suggested that multiple carriers, including Verizon and T-Mobile, were also affected, those reports appear to be a knock-on effect of a major network going down.

Service monitoring site Downdetector was showing multiple post-paid and pre-paid carriers as having increased outage reports starting at around 4 am Eastern time. An Ars editor in Texas has seen “SOS” on their iPhone since 4:30 am Eastern time and has been unable to make Wi-Fi calls.

AT&T acknowledged the outage to CNBC, telling the network that it was “working urgently to restore service” to customers and that it recommended Wi-Fi calling until service could be restored. Verizon and T-Mobile told CNBC that they suspected their customers were reporting outages when they could not reach customers on AT&T or resellers that use AT&T networks.


iMessage is getting a major makeover that puts it among the two messaging apps best prepared to withstand the coming advent of quantum computing, roughly at parity with Signal or arguably incrementally more hardened.

On Wednesday, Apple said messages sent through iMessage will now be protected by two forms of end-to-end encryption (E2EE), whereas before, it had only one. The encryption being added, known as PQ3, is an implementation of a new algorithm called Kyber that, unlike the algorithms iMessage has used until now, can’t be broken with quantum computing. Apple isn’t replacing the older quantum-vulnerable algorithm with PQ3—it’s augmenting it. That means, for the encryption to be broken, an attacker will have to crack both.

Making E2EE future safe

The iMessage changes come five months after the Signal Foundation, maker of the Signal Protocol that encrypts messages sent by more than a billion people, updated the open standard so that it, too, is ready for post-quantum computing (PQC). Just like Apple, Signal added Kyber to X3DH, the algorithm it was using previously. Together, they’re known as PQXDH.
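
To make the “crack both” property concrete, here is an illustrative sketch of a hybrid key agreement in the same spirit; this is not Apple’s PQ3 or Signal’s PQXDH construction, just the general idea of deriving one session key from both an X25519 exchange and a Kyber encapsulation (using the cryptography and liboqs-python packages):

```python
# Hybrid key agreement sketch: the session key depends on BOTH a
# classical X25519 secret and a post-quantum Kyber768 secret, so an
# attacker must break both to recover it.
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 Diffie-Hellman
alice_x = X25519PrivateKey.generate()
bob_x = X25519PrivateKey.generate()
classical_secret = alice_x.exchange(bob_x.public_key())

# Post-quantum half: Kyber key encapsulation
with oqs.KeyEncapsulation("Kyber768") as bob_kem:
    bob_pq_public = bob_kem.generate_keypair()
    with oqs.KeyEncapsulation("Kyber768") as alice_kem:
        ciphertext, pq_secret = alice_kem.encap_secret(bob_pq_public)
    assert bob_kem.decap_secret(ciphertext) == pq_secret

# Combine the halves with a KDF; neither secret alone yields the key
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-demo",
).derive(classical_secret + pq_secret)
print(session_key.hex())
```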


On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It’s Google’s first significant open large language model (LLM) release since OpenAI’s ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
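
For a sense of what running Gemma locally looks like in practice, here is a minimal sketch using Hugging Face’s transformers library and the instruction-tuned 2B checkpoint Google published on the Hub (accepting Google’s license there and authenticating first is assumed):

```python
# Minimal local run of the instruction-tuned 2B Gemma variant via
# Hugging Face transformers; requires accepting the license on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Why is the sky blue?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```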

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google’s most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means “precious stone.”


On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI’s AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant “having a stroke,” “going insane,” “rambling,” and “losing it.” OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called “anthropomorphization”) seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They’re forced to use those terms because OpenAI doesn’t share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

“It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia,” wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. “It’s the first time anything AI related sincerely gave me the creeps.”


After years of being outmaneuvered by snarky ransomware criminals who tease and brag about each new victim they claim, international authorities finally got their chance to turn the tables, and they aren’t squandering it.

The top-notch trolling came after authorities from the US, UK, and Europol took down most of the infrastructure belonging to LockBit, a ransomware syndicate that has extorted more than $120 million from thousands of victims around the world. On Tuesday, most of the sites LockBit uses to shame its victims for being hacked, pressure them into paying, and brag about its hacking prowess began displaying content announcing the takedown. The seized infrastructure also hosted decryptors that victims could use to recover their data.

this_is_really_bad

Authorities didn’t use the seized name-and-shame site solely for informational purposes. One section that appeared prominently gloated over the extraordinary extent of the system access investigators gained. Several images indicated they had control of /etc/shadow, a Linux file that stores cryptographically hashed passwords. This file, among the most security-sensitive ones in Linux, can be accessed only by a user with root, the highest level of system privileges.
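
You can check that root-only property on any Linux box; a quick sketch using Python’s standard library:

```python
# Show why /etc/shadow requires root: it is owned by root, and an
# ordinary user's attempt to read it raises PermissionError.
import os
import pwd
import stat

st = os.stat("/etc/shadow")
print("mode: ", stat.filemode(st.st_mode))        # e.g. -rw-r----- or -rw-------
print("owner:", pwd.getpwuid(st.st_uid).pw_name)  # root

try:
    open("/etc/shadow").read()
except PermissionError:
    print("read as a normal user -> PermissionError")
```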
