(credit: Getty Images)

Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years in attacks that targeted a vast array of organizations with a previously undocumented backdoor, the software maker disclosed Monday.

When Microsoft patched the vulnerability in October 2022—at least two years after it came under attack by the Russian hackers—the company made no mention that it was under active exploitation. As of publication, the company’s advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks.

Exploiting CVE-2022-38028, as the vulnerability is tracked, allows attackers to gain system privileges, the highest available in Windows, when combined with a separate exploit. Exploiting the flaw, which carries a 7.8 severity rating out of a possible 10, requires low existing privileges and little complexity. It resides in the Windows print spooler, a printer-management component that has harbored previous critical zero-days. Microsoft said at the time that it learned of the vulnerability from the US National Security Agency.
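The 7.8 figure is a CVSSv3.1 base score, and it can be reproduced from the vector published for the flaw in the NVD, AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H (local attack, low complexity, low privileges required, no user interaction, high impact). A sketch of that calculation using the constants from the CVSS specification:

```python
import math

# CVSS v3.1 metric weights (from the FIRST specification) for the vector
# published for CVE-2022-38028: AV:L / AC:L / PR:L / UI:N / S:U / C:H / I:H / A:H
AV = 0.55          # Attack Vector: Local
AC = 0.77          # Attack Complexity: Low
PR = 0.62          # Privileges Required: Low (scope unchanged)
UI = 0.85          # User Interaction: None
C = I = A = 0.56   # Confidentiality/Integrity/Availability impact: High

def roundup(x: float) -> float:
    """CVSS 'roundup': smallest value with one decimal place >= x."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)       # impact sub-score
impact = 6.42 * iss                         # scope-unchanged impact
exploitability = 8.22 * AV * AC * PR * UI
base = roundup(min(impact + exploitability, 10))

print(base)  # 7.8
```

The low exploitability term (local access, low privileges) is what keeps the score out of the "critical" 9+ band despite the full compromise of confidentiality, integrity, and availability.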


A sample image from Microsoft for “VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time.” (credit: Microsoft)

On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

“It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors,” reads the abstract of the accompanying research paper titled, “VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time.” It’s the work of Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo.

The VASA framework (short for “Visual Affective Skills Animator”) uses machine learning to analyze a static image along with a speech audio clip. It is then able to generate a realistic video with precise facial expressions, head movements, and lip-syncing to the audio. It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.


(credit: Getty Images | Benj Edwards)

On Thursday, Meta unveiled early versions of its Llama 3 open-weights AI model that can be used to power text composition, code generation, or chatbots. It also announced that its Meta AI Assistant is now available on a website and is going to be integrated into its major social media apps, intensifying the company’s efforts to position its products against other AI assistants like OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini.

Like its predecessor, Llama 2, Llama 3 is notable for being a freely available, open-weights large language model (LLM) provided by a major AI company. Llama 3 technically does not qualify as “open source” because that term has a specific meaning in software (as we have mentioned in other coverage), and the industry has not yet settled on terminology for AI model releases that ship either code or weights with restrictions (you can read Llama 3’s license here) or that ship without providing training data. We typically call these releases “open weights” instead.

At the moment, Llama 3 is available in two parameter sizes: 8 billion (8B) and 70 billion (70B), both of which are available as free downloads through Meta’s website with a sign-up. Llama 3 comes in two versions: pre-trained (basically the raw, next-token-prediction model) and instruction-tuned (fine-tuned to follow user instructions). Each has an 8,192-token context limit.
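That context limit is a hard cap on how many tokens the model can attend to, so longer inputs have to be trimmed (or summarized) before inference. A minimal sketch of the usual keep-the-most-recent-tokens strategy; the token IDs here are placeholders, not output from a real Llama tokenizer:

```python
CONTEXT_LIMIT = 8192  # Llama 3's context window, in tokens

def trim_to_context(token_ids: list[int], limit: int = CONTEXT_LIMIT) -> list[int]:
    """Keep only the most recent `limit` tokens so the prompt fits the window."""
    if len(token_ids) <= limit:
        return token_ids
    return token_ids[-limit:]

# Placeholder IDs standing in for a long tokenized conversation history.
history = list(range(10_000))
trimmed = trim_to_context(history)
print(len(trimmed), trimmed[0])  # 8192 1808
```

Dropping the oldest tokens is the simplest policy; chat applications often instead preserve the system prompt and truncate only the middle of the history.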


(credit: Getty Images)

Password-manager LastPass users were recently targeted by a convincing phishing campaign that used a combination of email, SMS, and voice calls to trick targets into divulging their master passwords, company officials said.

The attackers used an advanced phishing-as-a-service kit discovered in February by researchers from mobile security firm Lookout. Dubbed CryptoChameleon for its focus on cryptocurrency accounts, the kit provides all the resources needed to trick even relatively savvy people into believing the communications are legitimate. Elements include high-quality URLs, a counterfeit single sign-on page for the service the target is using, and everything needed to make voice calls or send emails or texts in real time as targets are visiting a fake site. The end-to-end service can also bypass multi-factor authentication in the event a target is using the protection.
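Part of why real-time relaying defeats one-time codes: a standard TOTP code stays valid for its entire 30-second time step, so a code phished on a fake page and replayed seconds later still authenticates. A self-contained sketch of RFC 6238 TOTP using only the standard library (the shared secret is a made-up example, not anything tied to LastPass):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter."""
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = b"example-shared-secret"     # hypothetical enrollment secret
t = 1_000_000                         # a fixed moment in time
victim_code = totp(secret, t)         # code the victim types into the fake page
relayed_code = totp(secret, t + 5)    # attacker relays it five seconds later
print(victim_code == relayed_code)    # True: both fall in the same 30s step
```

Phishing-resistant factors such as FIDO2 hardware keys close this gap because the response is cryptographically bound to the genuine site's origin and cannot be relayed.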

LastPass in the crosshairs

Lookout said that LastPass was one of dozens of sensitive services or sites CryptoChameleon was configured to spoof. Others targeted included the Federal Communications Commission, Coinbase and other cryptocurrency exchanges, and email, password management, and single sign-on services including Okta, iCloud, and Outlook. When Lookout researchers accessed a database one CryptoChameleon subscriber used, they found that a high percentage of the contents collected in the scams appeared to be legitimate email addresses, passwords, one-time-password tokens, password reset URLs, and photos of driver’s licenses. Typically, such databases are filled with junk entries.


An AI-generated image from DALL-E 2 created with the prompt “A painting by Grant Wood of an astronaut couple, american gothic style.” (credit: AI Pictures That Go Hard / X)

When OpenAI’s DALL-E 2 debuted on April 6, 2022, the idea that a computer could create relatively photorealistic images on demand based on just text descriptions caught a lot of people off guard. The launch began an innovative and tumultuous period in AI history, marked by a sense of wonder and a polarizing ethical debate that reverberates in the AI space to this day.

Last week, OpenAI turned off the ability for new customers to purchase generation credits for the web version of DALL-E 2, effectively killing it. From a technological point of view, it’s not too surprising that OpenAI recently began winding down support for the service. The 2-year-old image generation model was groundbreaking for its time, but it has since been surpassed by DALL-E 3’s higher level of detail, and OpenAI has recently begun rolling out DALL-E 3 editing capabilities.

But for a tight-knit group of artists and tech enthusiasts who were there at the start of DALL-E 2, the service’s sunset marks the bittersweet end of a period where AI technology briefly felt like a magical portal to boundless creativity. “The arrival of DALL-E 2 was truly mind-blowing,” illustrator Douglas Bonneville told Ars in an interview. “There was an exhilarating sense of unlimited freedom in those first days that we all suspected AI was going to unleash. It felt like a liberation from something into something else, but it was never clear exactly what.”


(credit: da-kuk/Getty)

Kremlin-backed actors have stepped up efforts to interfere with the US presidential election by planting disinformation and false narratives on social media and fake news sites, analysts with Microsoft reported Wednesday.

The analysts have identified several unique influence-peddling groups affiliated with the Russian government seeking to influence the election outcome, with the objective in large part to reduce US support of Ukraine and sow domestic infighting. These groups have so far been less active during the current election cycle than they were during previous ones, likely because of a less contested primary season.

Stoking divisions

Over the past 45 days, the groups have seeded a growing number of social media posts and fake news articles that attempt to foment opposition to US support of Ukraine and stoke divisions over hot-button issues such as election fraud. The influence campaigns also promote questions about President Biden’s mental health and corrupt judges. In all, Microsoft has tracked scores of such operations in recent weeks.


(credit: Getty)

Broadcom CEO Hock Tan this week publicized some concessions aimed at helping customers and partners ease into VMware’s recent business model changes. Tan reiterated that the controversial changes, like the end of perpetual licensing, aren’t going away. But amid questioning from antitrust officials in the European Union (EU), Tan announced that the company has already given support extensions for some VMware perpetual license holders.

Broadcom closed its $69 billion VMware acquisition in November. One of its first moves was ending VMware perpetual license sales in favor of subscriptions. Since December, Broadcom also hasn’t sold Support and Subscription renewals for VMware perpetual licenses.

In a blog post on Monday, Tan admitted that this shift requires “a change in the timing of customers’ expenditures and the balance of those expenditures between capital and operating spending.” As a result, Broadcom has “given support extensions to many customers who came up for renewal while these changes were rolling out.” Tan didn’t specify how Broadcom determined who is eligible for an extension or for how long. However, the executive’s blog is the first time Broadcom has announced such extensions and opens the door to more extension requests.


Cans of Tab diet soda on display in 2011. Tab was discontinued in 2020. There has never been a soda named “Spaces” that had a cult following. (credit: Getty Images)

Anybody can contribute to the Linux kernel, but any contributor’s commit can draw the scrutiny of the kernel’s creator and namesake, Linus Torvalds. Torvalds is famously not overly committed to niceness, though he has been working on it since 2018. You can see glimpses of this newer, less curse-laden approach in how Torvalds recently addressed a commit with which he vehemently disagreed. It involves tabs.

The commit last week changed exactly one thing on one line, replacing a tab character with a space: “It helps Kconfig parsers to read file without error.” Torvalds responded with a commit of his own, as spotted by The Register, which would “add some hidden tabs on purpose.” Trying to smooth over a tabs-versus-spaces matter seemed to awaken Torvalds to the need to have tab-detecting failures be “more obvious.” Torvalds would have added more, he wrote, but didn’t “want to make things uglier than necessary. But it *might* be necessary if it turns out we see more of this kind of silly tooling.”

If you’ve read this far and don’t understand what’s happening, please allow me, a failed CS minor, to offer a quick explanation: Tabs Versus Spaces will never be truly resolved, codified, or set right by standards, and the energy spent on the issue over time could, if harnessed, likely power one or more small nations. Still, the Linux kernel has its own coding style, and it directly cites “K&R,” or Kernighan & Ritchie, the authors of the coding bible The C Programming Language, which is a tabs book. If you are submitting kernel code, it had better use tabs (eight-character tabs, ideally, though that is tied in part to teletype and line-printer history).
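The underlying annoyance is tooling that can’t tell the two indentation characters apart. A throwaway checker in the spirit of making tab-versus-space failures “more obvious” (a hypothetical script, not actual kernel tooling) might look like:

```python
def flag_whitespace(lines: list[str]) -> list[str]:
    """Report which indentation characters each line actually uses."""
    report = []
    for n, line in enumerate(lines, start=1):
        indent = line[:len(line) - len(line.lstrip(" \t"))]
        if "\t" in indent and " " in indent:
            report.append(f"line {n}: mixed tabs and spaces")
        elif " " in indent:
            report.append(f"line {n}: spaces only")
        elif "\t" in indent:
            report.append(f"line {n}: tabs")
    return report

kconfig = [
    "config EXAMPLE",         # hypothetical Kconfig fragment
    "\tbool \"Example\"",     # kernel style: tab indentation
    "    default y",          # the kind of space indentation Torvalds rejected
]
print(flag_whitespace(kconfig))  # ['line 2: tabs', 'line 3: spaces only']
```

Because tabs and spaces render identically in most editors, a report like this is often the only way to see which character a file actually contains.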


(credit: Matejmo | Getty Images)

Cisco’s Talos security team is warning of a large-scale credential compromise campaign that’s indiscriminately assailing networks with login attempts aimed at gaining unauthorized access to VPN, SSH, and web application accounts.

The login attempts use both generic usernames and valid usernames targeted at specific organizations. Cisco included a list of more than 2,000 usernames and almost 100 passwords used in the attacks, along with nearly 4,000 IP addresses sending the login traffic. The IP addresses appear to originate from Tor exit nodes and other anonymizing tunnels and proxies. The attacks appear to be indiscriminate and opportunistic rather than aimed at a particular region or industry.

“Depending on the target environment, successful attacks of this type may lead to unauthorized network access, account lockouts, or denial-of-service conditions,” Talos researchers wrote Tuesday. “The traffic related to these attacks has increased with time and is likely to continue to rise.”
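A common first line of defense against this kind of spraying is counting authentication failures per source IP and flagging offenders; a minimal sketch (the threshold and addresses are illustrative, not Talos guidance):

```python
from collections import Counter

FAIL_THRESHOLD = 10  # illustrative cutoff, not a vendor recommendation

def flag_sources(failed_logins: list[str], threshold: int = FAIL_THRESHOLD) -> set[str]:
    """Return source IPs whose failed-login count meets the threshold."""
    counts = Counter(failed_logins)
    return {ip for ip, n in counts.items() if n >= threshold}

# Simulated auth-log stream: one entry per failed attempt, keyed by source IP.
log = ["203.0.113.7"] * 12 + ["198.51.100.2"] * 3
print(flag_sources(log))  # {'203.0.113.7'}
```

Per-IP counting alone is weak against this particular campaign, though: with traffic spread across nearly 4,000 anonymized addresses, correlating failures by targeted username catches distributed sprays that no single IP’s count would reveal.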


(credit: Getty Images)

On Tuesday, the UK government announced a new law targeting the creation of AI-generated sexually explicit deepfake images. Under the legislation, which has not yet been passed, offenders would face prosecution and an unlimited fine, even if they do not widely share the images but create them with the intent to distress the victim. The government positions the law as part of a broader effort to enhance legal protections for women.

Over the past decade, the rise of deep learning image synthesis technology has made it increasingly easy for people with a consumer PC to create misleading pornography by swapping out the faces of the performers with someone else who has not consented to the act. That practice spawned the term “deepfake” around 2017, named after a Reddit user called “deepfakes” who shared AI-faked porn on the service. Since then, the term has grown to encompass completely new images and video synthesized entirely from scratch, created from neural networks that have been trained on images of the victim.

The problem isn’t unique to the UK. In March, deepfake nudes of female middle school classmates in Florida led to charges against two boys ages 13 and 14. The rise of open source image synthesis models like Stable Diffusion since 2022 has increased the urgency among regulators in the US to attempt to contain (or at least punish) the act of creating non-consensual deepfakes. The UK government is on a similar mission.
