
GitHub is struggling to contain an ongoing attack that’s flooding the site with millions of code repositories. These repositories contain obfuscated malware that steals passwords and cryptocurrency from developer devices, researchers said.

The malicious repositories are clones of legitimate ones, making them hard to distinguish from the originals at a casual glance. An unknown party has automated a process that forks legitimate repositories, meaning the source code is copied so developers can use it in an independent project that builds on the original. The result is millions of forks with names identical to the originals, each adding a payload wrapped in seven layers of obfuscation. To make matters worse, some people, unaware that these imitators are malicious, are forking the forks, which adds to the flood.
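A detection approach for this kind of flood can combine a few cheap signals: a repo whose name collides with a popular project, but which was created very recently and has no community engagement, is a likely automated clone. This is a toy heuristic for illustration only, not Apiiro's or GitHub's actual detection logic, and the field names mirror (but are not) the GitHub API schema:

```python
# Toy heuristic, not the detection logic used by GitHub or Apiiro:
# flag a repo as a suspicious clone when its name matches a popular
# project but the copy is brand new and has essentially no engagement.
from datetime import datetime, timedelta, timezone

def looks_like_malicious_clone(repo: dict, popular_names: set[str]) -> bool:
    created = datetime.fromisoformat(repo["created_at"])  # ISO 8601 with offset
    is_new = datetime.now(timezone.utc) - created < timedelta(days=7)
    no_engagement = repo["stars"] == 0 and repo["forks"] == 0
    return repo["name"] in popular_names and is_new and no_engagement
```

Real detection would need many more signals (commit history, account age, payload scanning), which is presumably why purely automated filtering still lets a slice of the flood through.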

Whack-a-mole

“Most of the forked repos are quickly removed by GitHub, which identifies the automation,” Matan Giladi and Gil David, researchers at security firm Apiiro, wrote Wednesday. “However, the automation detection seems to miss many repos, and the ones that were uploaded manually survive. Because the whole attack chain seems to be mostly automated on a large scale, the 1% that survive still amount to thousands of malicious repos.”


On Monday, Microsoft announced plans to offer AI models from Mistral through its Azure cloud computing platform, a deal that came in conjunction with a 15 million euro non-equity investment in the French firm, which is often seen as a European rival to OpenAI. Since then, the investment has faced scrutiny from EU regulators.

Microsoft’s deal with Mistral, known for its large language models akin to OpenAI’s GPT-4 (which powers the subscription versions of ChatGPT), marks a notable expansion of its AI portfolio at a time when its well-known investment in California-based OpenAI has raised regulatory eyebrows. The new deal with Mistral drew particular attention from regulators because Microsoft’s investment could convert into equity (partial ownership of Mistral as a company) during Mistral’s next funding round.

The development has intensified ongoing investigations into Microsoft’s practices, particularly related to the tech giant’s dominance in the cloud computing sector. According to Reuters, EU lawmakers have voiced concerns that Mistral’s recent lobbying for looser AI regulations might have been influenced by its relationship with Microsoft. These apprehensions are compounded by the French government’s denial of prior knowledge of the deal, despite earlier lobbying for more lenient AI laws in Europe. The situation underscores the complex interplay between national interests, corporate influence, and regulatory oversight in the rapidly evolving AI landscape.


A view of a Wendy’s store on August 9, 2023 in Nanuet, New York. (credit: Getty Images)

American fast food chain Wendy’s is planning to test dynamic pricing and AI menu features in 2025, report Nation’s Restaurant News and Food & Wine. This means that prices for food items will automatically change throughout the day depending on demand, similar to “surge pricing” in rideshare apps like Uber and Lyft. The initiative was disclosed by Kirk Tanner, the CEO and president of Wendy’s, in a recent discussion with analysts.

According to Tanner, Wendy’s plans to invest approximately $20 million to install digital menu boards capable of displaying these real-time variable prices across all of its company-operated locations in the United States. An additional $10 million is earmarked over two years to enhance Wendy’s global system, which aims to improve order accuracy and upsell other menu items.

In conversation with Food & Wine, a spokesperson for Wendy’s confirmed the company’s commitment to this pricing strategy, describing it as part of a broader effort to grow its digital business. “Beginning as early as 2025, we will begin testing a variety of enhanced features on these digital menuboards like dynamic pricing, different offerings in certain parts of the day, AI-enabled menu changes and suggestive selling based on factors such as weather,” they said. “Dynamic pricing can allow Wendy’s to be competitive and flexible with pricing, motivate customers to visit and provide them with the food they love at a great value. We will test a number of features that we think will provide an enhanced customer and crew experience.”
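Wendy's has not published how its pricing would actually be computed, but the surge-pricing model the article compares it to is simple to state: a base price scaled by current demand relative to capacity, capped at some maximum multiplier. The function below is an illustrative toy, with the cap and linear scaling chosen purely for the example:

```python
# Illustrative surge-pricing sketch of the kind described in the article.
# The linear scaling and 1.5x cap are assumptions for the example, not
# anything Wendy's has announced.
def dynamic_price(base_price: float, demand: int, capacity: int,
                  max_surge: float = 1.5) -> float:
    """Scale base_price linearly with utilization, capped at max_surge."""
    utilization = min(demand / capacity, 1.0)
    multiplier = 1.0 + (max_surge - 1.0) * utilization
    return round(base_price * multiplier, 2)
```

At zero demand the base price is unchanged; at full capacity a $5.00 item would reach $7.50 under this example's 1.5x cap.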


The FBI and partners from 10 other countries are urging owners of Ubiquiti EdgeRouters to check their gear for signs they’ve been hacked and are being used to conceal ongoing malicious operations by Russian state hackers.

The Ubiquiti EdgeRouters make an ideal hideout for hackers. The inexpensive gear, used in homes and small offices, runs a version of Linux that can host malware operating surreptitiously behind the scenes. The hackers then use the routers to conduct their malicious activities. Rather than using infrastructure and IP addresses that are known to be hostile, the connections come from benign-appearing devices hosted by addresses with trustworthy reputations, allowing them to receive a green light from security defenses.

Unfettered access

“In summary, with root access to compromised Ubiquiti EdgeRouters, APT28 actors have unfettered access to Linux-based operating systems to install tooling and to obfuscate their identity while conducting malicious campaigns,” FBI officials wrote in an advisory Tuesday.


A photo of “Willy’s Chocolate Experience” (inset), which did not match AI-generated promises, shown in the background. (credit: Stuart Sinclair)

On Saturday, event organizers shut down a Glasgow-based “Willy’s Chocolate Experience” after customers complained that the unofficial Wonka-inspired event, which took place in a sparsely decorated venue, did not match the lush AI-generated images listed on its official website (archive here). According to Sky News, police were called to the event, and “advice was given.”

“What an absolute shambles of an event,” wrote Stuart Sinclair on Facebook after paying 35 pounds per ticket for himself and his kids. “Took 2 minutes to get through to then see a queue of people surrounding the guy running it complaining … The kids received 2 jelly babies and a quarter of a can of Barrs limeade.”

The Willy’s Chocolate Experience website, which promises “a journey filled with wondrous creations and enchanting surprises at every turn,” features five AI-generated images (likely created with OpenAI’s DALL-E 3) that evoke a candy-filled fantasy wonderland inspired by the Willy Wonka universe and the recent Wonka film. But in reality, Sinclair was met with a nearly empty location with a few underwhelming decorations and a tiny bouncy castle. In one photo shared by Sinclair, a rainbow arch leads to a single yellow gummy bear and gum drop sitting on a bare concrete floor.


Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020, then selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recently proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling future browsing data, must obtain express consent for future data gathering, notify customers about prior data sales, and implement a “comprehensive privacy program” to address prior conduct.


Tyler Perry in 2022. (credit: Getty Images)

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI’s recently announced AI video generator Sora can do.

“I have been watching AI very closely,” Perry said in the interview. “I was in the middle of, and have been planning for the last four years… an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

OpenAI, the company behind ChatGPT, revealed a preview of Sora’s capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to dramatically surpass other AI video generators in capability. It seems that similar shock also rippled into adjacent professional fields. “Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in the interview.


Two days after an international team of authorities struck a major blow at LockBit, one of the Internet’s most prolific ransomware syndicates, researchers have detected a new round of attacks that are installing malware associated with the group.

The attacks, detected in the past 24 hours, are exploiting two critical vulnerabilities in ScreenConnect, a remote desktop application sold by ConnectWise. According to researchers at two security firms—SophosXOps and Huntress—attackers who successfully exploit the vulnerabilities go on to install LockBit ransomware and other post-exploit malware. It wasn’t immediately clear if the ransomware was the official LockBit version.

“We can’t publicly name the customers at this time but can confirm the malware being deployed is associated with LockBit, which is particularly interesting against the backdrop of the recent LockBit takedown,” John Hammond, principal security researcher at Huntress, wrote in an email. “While we can’t attribute this directly to the larger LockBit group, it is clear that LockBit has a large reach that spans tooling, various affiliate groups, and offshoots that have not been completely erased even with the major takedown by law enforcement.”


Stable Diffusion 3 generation with the prompt: studio photograph closeup of a chameleon over a black background. (credit: Stability AI)

On Thursday, Stability AI announced Stable Diffusion 3, an open-weights next-generation image-synthesis model. Like its predecessors, it reportedly generates detailed, multi-subject images, with improved quality and more accurate text rendering. The brief announcement was not accompanied by a public demo, but Stability is opening up a waitlist today for those who would like to try it.

Stability says that its Stable Diffusion 3 family of models (which take text descriptions called “prompts” and turn them into matching images) ranges in size from 800 million to 8 billion parameters. The range allows different versions of the model to run locally on a variety of devices, from smartphones to servers. Parameter count roughly corresponds to model capability in terms of how much detail it can generate, and larger models also require more VRAM on GPU accelerators to run.
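The VRAM relationship is easy to estimate: weights stored at fp16 precision take 2 bytes per parameter, plus some headroom for activations. A quick back-of-envelope calculation (the 20% overhead factor is an assumption for illustration, not a Stability figure):

```python
# Back-of-envelope VRAM estimate for running a model at fp16 precision:
# parameter count * 2 bytes per weight, with an assumed ~20% overhead
# for activations and buffers. Illustrative only.
def fp16_vram_gb(params: float, overhead: float = 1.2) -> float:
    bytes_needed = params * 2 * overhead
    return round(bytes_needed / 1024**3, 1)
```

By this rough estimate, the 800-million-parameter model needs under 2 GB of VRAM, while the 8-billion-parameter model needs roughly 18 GB, which is why only the smallest variants are plausible on phones.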

Since 2022, we’ve seen Stability launch a progression of AI image-generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now 3. Stability has made a name for itself by providing a more open alternative to proprietary image-synthesis models like OpenAI’s DALL-E 3, though not without controversy over its use of copyrighted training data, bias, and the potential for abuse. (This has led to lawsuits that remain unresolved.) Stable Diffusion models have been released as open-weights and source-available, which means they can be run locally and fine-tuned to change their outputs.


Generations from Gemini AI from the prompt, “Paint me a historically accurate depiction of a medieval British king.” (credit: @stratejake / X)

On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” wrote Google in a statement Thursday morning.

As more people on X began to pile on Google for being “woke,” the Gemini generations inspired conspiracy theories that Google was purposely discriminating against white people and offering revisionist history to serve political goals. Beyond that angle, as The Verge points out, some of these inaccurate depictions “were essentially erasing the history of race and gender discrimination.”
