Shares in SentinelOne and Palo Alto Networks have risen since July’s IT outage, while CrowdStrike has shed almost a quarter of its market value. (credit: Getty Images)

CrowdStrike’s president hit out at “shady” efforts by its cybersecurity rivals to scare its customers and steal market share in the month since its botched software update sparked a global IT outage.

Michael Sentonas told the Financial Times that attempts by competitors to use the July 19 disruption to promote their own products were “misguided.”

After criticism from rivals including SentinelOne and Trellix, the CrowdStrike executive said no vendor could “technically” guarantee that its own software would never cause a similar incident.



Last Tuesday, loads of Linux users—many running packages released as recently as this year—started reporting that their devices were failing to boot. Instead, they received a cryptic error message that included the phrase: “Something has gone seriously wrong.”

The cause: an update Microsoft issued as part of its monthly patch release. It was intended to close a 2-year-old vulnerability in GRUB, an open source boot loader used to start up many Linux devices. The vulnerability, with a severity rating of 8.6 out of 10, made it possible for hackers to bypass Secure Boot, the industry standard for ensuring that devices running Windows or other operating systems don’t load malicious firmware or software during the bootup process. CVE-2022-2601 was discovered in 2022, but for unclear reasons, Microsoft patched it only last Tuesday.

Multiple distros, both new and old, affected

Tuesday’s update left dual-boot devices—meaning those configured to run both Windows and Linux—unable to boot into the latter when Secure Boot was enforced. When users tried to load Linux, they received the message: “Verifying shim SBAT data failed: Security Policy Violation. Something has gone seriously wrong: SBAT self-check failed: Security Policy Violation.” Almost immediately, support and discussion forums lit up with reports of the failure.
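
The excerpt stops before any remedy, but the workaround widely circulated at the time was to temporarily disable Secure Boot in the machine’s firmware settings, boot into Linux, delete the SBAT policy the update had installed, and then re-enable Secure Boot. The sketch below is a hypothetical illustration of that middle step, not official guidance from Microsoft or any distro, and it assumes a system with mokutil 0.6 or newer (the release that added the --set-sbat-policy flag):

```python
#!/usr/bin/env python3
"""Hypothetical helper: queue deletion of the stored SBAT policy via
mokutil. Run as root while Secure Boot is temporarily disabled, reboot,
then re-enable Secure Boot in firmware. Assumes mokutil >= 0.6."""
import shutil
import subprocess
import sys


def delete_sbat_policy() -> None:
    # Bail out early if mokutil isn't installed.
    if shutil.which("mokutil") is None:
        sys.exit("mokutil not found; install it via your distro's package manager")
    # "delete" clears the SBAT revocation data that the Windows update
    # applied, so the existing shim/GRUB pass the self-check on next boot.
    subprocess.run(["mokutil", "--set-sbat-policy", "delete"], check=True)
    print("SBAT policy deletion queued; reboot, then re-enable Secure Boot.")


if __name__ == "__main__":
    delete_sbat_policy()
```

The trade-off is worth stating: clearing the policy restores booting by re-accepting the old boot chain, which leaves the device exposed to CVE-2022-2601 until the distro ships an updated shim and GRUB.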


Still of Procreate CEO James Cuda from a video posted to X. (credit: Procreate)

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

“Generative AI is ripping the humanity out of things,” Procreate wrote on its website. “Built on a foundation of theft, the technology is steering us toward a barren future.”

In a video posted on X, Procreate CEO James Cuda laid out his company’s stance, saying, “We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”



A Windows zero-day vulnerability recently patched by Microsoft was exploited by hackers working on behalf of the North Korean government so they could install custom malware that’s exceptionally stealthy and advanced, researchers reported Monday.

The vulnerability, tracked as CVE-2024-38193, was one of six zero-days—meaning vulnerabilities known or actively exploited before the vendor has a patch—fixed in Microsoft’s monthly update release last Tuesday. Microsoft said the vulnerability—in a class known as a “use after free”—was located in AFD.sys, the binary file for what’s known as the ancillary function driver and the kernel entry point for the Winsock API. Microsoft warned that the zero-day could be exploited to give attackers system privileges, the maximum system rights available in Windows and a required status for executing untrusted code.

Lazarus gets access to the Windows kernel

Microsoft warned at the time that the vulnerability was being actively exploited but provided no details about who was behind the attacks or what their ultimate objective was. On Monday, researchers with Gen—the security firm that discovered the attacks and reported them privately to Microsoft—said the threat actors were part of Lazarus, the name researchers use to track a hacking outfit backed by the North Korean government.


Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs. (credit: BiliBili)

It’s no secret that despite significant investment from companies like OpenAI and Runway, AI-generated videos still struggle to achieve convincing realism at times. Some of the most amusing fails end up on social media, which has led to a new response trend on Chinese social media platforms Douyin and Bilibili, where users create videos that mock the imperfections of AI-generated content. The trend has since spread to X (formerly Twitter) in the US, where users have been sharing the humorous parodies.

In particular, the videos seem to parody AI-synthesized clips in which subjects seamlessly morph into other people or objects in unexpected and physically impossible ways. Chinese social media users replicate these visual non sequiturs without special effects by contorting their bodies in unusual ways as new and unexpected objects appear on camera from just out of frame.

This exaggerated mimicry has struck a chord with viewers on X, who find the parodies entertaining. User @theGioM shared one such video. “This is high-level performance arts,” wrote one X user. “art is imitating life imitating ai, almost shedded a tear.” Another commented, “I feel like it still needs a motorcycle that turns into a speedboat and takes off into the sky. Other than that, excellent work.”


Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024. (credit: Getty Images)

Google’s Threat Analysis Group confirmed Wednesday that it observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.

APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.

Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.


An AI-generated image of Donald Trump and catgirls created with Grok, which uses the Flux image synthesis model. (credit: BEAST / X)

On Tuesday, Elon Musk’s AI company xAI announced the beta release of two new language models, Grok-2 and Grok-2 mini, available to subscribers of his social media platform X (formerly Twitter). The models are also linked to the recently released Flux image synthesis model, which allows X users to create largely uncensored photorealistic images that can be shared on the site.

“Flux, accessible through Grok, is an excellent text-to-image generator, but it is also really good at creating fake photographs of real locations and people, and sending them right to Twitter,” wrote frequent AI commentator Ethan Mollick on X. “Does anyone know if they are watermarking these in any way? It would be a good idea.”

In a report posted earlier today, The Verge noted that Grok’s image generation capabilities appear to have minimal safeguards, allowing users to create potentially controversial content. According to their testing, Grok produced images depicting political figures in compromising situations, copyrighted characters, and scenes of violence when prompted.



On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called “The AI Scientist” that attempts to conduct scientific research autonomously using large language models (LLMs) similar to those that power ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem.

“In one run, it edited the code to perform a system call to run itself,” wrote the researchers in Sakana AI’s blog post. “This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period.”

Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what the researchers call “the issue of safe code execution” in more depth.
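
The excerpt doesn’t say how Sakana ultimately contained the system, but the failure mode it describes points at the standard mitigation: execute generated experiment scripts in a separate OS process whose time budget is enforced by the supervisor, out of the generated code’s reach. Here is a minimal sketch of that pattern, our assumption rather than Sakana’s actual implementation:

```python
"""Illustrative sketch: run an AI-generated script in a child process
with a hard wall-clock limit. Because the timeout lives in the
supervising process, a script that rewrites its own source cannot
extend its time budget."""
import subprocess


def run_generated_script(path: str, timeout_s: float = 600.0) -> int:
    """Execute the script at `path`; return its exit code, or -1 on timeout."""
    try:
        result = subprocess.run(
            ["python3", path],   # fresh interpreter, not exec() in-process
            timeout=timeout_s,   # supervisor-side limit the script can't edit
            capture_output=True,
            text=True,
        )
        return result.returncode
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child when the limit expires.
        print(f"{path} exceeded {timeout_s}s and was terminated")
        return -1
```

A timeout alone only addresses the runaway-time behavior described above; a real harness would also restrict filesystem and network access so a script cannot, for example, relaunch itself outside the sandbox.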


A Waymo self-driving car in front of Google’s San Francisco headquarters, San Francisco, California, June 7, 2024. (credit: Getty Images)

Silicon Valley’s latest disruption? Your sleep schedule. On Saturday, NBC Bay Area reported that San Francisco’s South of Market residents are being awakened throughout the night by Waymo self-driving cars honking at each other in a parking lot. No one is inside the cars, and they appear to be automatically reacting to each other’s presence.

Videos provided by residents to NBC show Waymo cars filing into the parking lot and attempting to back into spots, which seems to trigger honking from other Waymo vehicles. The automatic nature of these interactions—which seem to peak around 4 am every night—has left neighbors bewildered and sleep-deprived.

NBC Bay Area’s report: “Waymo cars keep SF neighborhood awake.”

According to NBC, the disturbances began several weeks ago when Waymo vehicles started using a parking lot off 2nd Street near Harrison Street. Residents in nearby high-rise buildings have observed the autonomous vehicles entering the lot to pause between rides, but the cars’ behavior has become a source of frustration for the neighborhood.


A still video capture of X user João Fiadeiro replacing his face with J.D. Vance in a test of Deep-Live-Cam.

Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while matching the pose, lighting, and expressions of the person on the webcam. While the results aren’t perfect, the software shows how quickly the tech is developing—and how the capability to deceive others remotely is getting dramatically easier over time.

The Deep-Live-Cam software project has been in the works since late last year, but example videos that show a person imitating Elon Musk and Republican Vice Presidential candidate J.D. Vance (among others) in real time have been making the rounds online. The avalanche of attention briefly made the open source project leap to No. 1 on GitHub’s trending repositories list (it’s currently at No. 4 as of this writing), where it is available for download for free.

“Weird how all the major innovations coming out of tech lately are under the Fraud skill tree,” wrote illustrator Corey Brickley in an X thread reacting to an example video of Deep-Live-Cam in action. In another post, he wrote, “Nice remember to establish code words with your parents everyone,” referring to the potential for similar tools to be used for remote deception—and the concept of using a safe word, shared among friends and family, to establish your true identity.
