
Mark Robinson, lieutenant governor of North Carolina and candidate for governor, delivers remarks prior to Republican presidential nominee former President Donald Trump speaking at a campaign event at Harrah’s Cherokee Center on August 14, 2024, in Asheville, North Carolina. (credit: Grant Baldwin via Getty Images)

On Thursday, CNN broke news about inflammatory comments made by Mark Robinson, the Republican nominee for governor of North Carolina, on a pornography website’s message board over a decade ago. After the allegations emerged, Robinson played on what we call “deep doubt,” denying the comments were his words and claiming they were manufactured by AI.

“Look, I’m not going to get into the minutia about how somebody manufactured these salacious tabloid lies, but I can tell you this: There’s been over one million dollars spent on me through AI by a billionaire’s son who’s bound and determined to destroy me,” Robinson told CNN reporter Andrew Kaczynski in a televised interview. “The things that people can do with the Internet now is incredible. But what I can tell you is this: Again, these are not my words. This is simply tabloid trash being used as a distraction from the substantive issues that the people of this state are facing.”

The CNN investigation found that Robinson, currently serving as North Carolina’s lieutenant governor, used the username “minisoldr” on a website called “Nude Africa” between 2008 and 2012. CNN identified Robinson as the user by matching biographical details, a shared email address, and profile photos. The comments included Robinson referring to himself as a “black NAZI!” and expressing support for reinstating slavery, among other inflammatory remarks.

The Windows App runs on Windows, but also macOS, iOS/iPadOS, web browsers, and Android. (credit: Microsoft)

Microsoft announced today that it’s releasing a new app called Windows App: an app for Windows that allows users to run Windows and Windows apps. (It’s also coming to macOS, iOS/iPadOS, and web browsers, and is in public preview for Android.)

On most of those platforms, Windows App is a replacement for the Microsoft Remote Desktop app, which was used for connecting to a copy of Windows running on a remote computer or server—for some users and IT organizations, a relatively straightforward way to run Windows software on devices that aren’t running Windows or can’t run Windows natively.

The new name, though potentially confusing, attempts to sum up the app’s purpose: It’s a unified way to access your own Windows PCs with Remote Desktop access turned on, cloud-hosted Windows 365 and Microsoft Dev Box systems, and individual remotely hosted apps that have been provisioned by your work or school.

(credit: Getty Images)

A coalition of law-enforcement agencies said it shut down a service that facilitated the unlocking of more than 1.2 million stolen or lost mobile phones so they could be used by someone other than their rightful owner.

The service was part of iServer, a phishing-as-a-service platform that has been operating since 2018. The Argentina-based iServer sold access to a platform offering a host of phishing-related services through email, texts, and voice calls. One of the specialized services was designed to help people in possession of large numbers of stolen or lost mobile devices obtain the credentials needed to bypass protections such as the lost mode for iPhones, which prevents a lost or stolen device from being used without its passcode.

Catering to low-skilled thieves

An international operation coordinated by Europol’s European Cybercrime Center said it arrested the Argentinian national who was behind iServer and identified more than 2,000 “unlockers” who had enrolled in the phishing platform over the years. Investigators ultimately found that the criminal network had been used to unlock more than 1.2 million mobile phones. Officials said they also identified 483,000 phone owners who had received messages phishing for credentials for their lost or stolen devices.

Cutting metal with lasers is hard, but even harder when you don’t know the worst-case timings of your code. (credit: Getty Images)

As is so often the case, a notable change in an upcoming Linux kernel is both historic and no big deal.

If you wanted to use “Real-Time Linux” for your audio gear, your industrial welding laser, or your Mars rover, you have had that option for a long time (presuming you didn’t want to use QNX or other alternatives). Universities started making their own real-time kernels in the late 1990s. A patch set, PREEMPT_RT, has existed since at least 2005. And some aspects of the real-time work, like NO_HZ, were long ago moved into the mainline kernel, enabling their use in data centers, cloud computing, and anything with a lot of CPUs.

But officialness still matters, and in the 6.12 kernel, PREEMPT_RT will likely be merged into the mainline. As noted by Steven Vaughan-Nichols at ZDNet, the final sign-off by Linus Torvalds occurred while he was attending Open Source Summit Europe. Torvalds wrote the original code for printk, a debugging tool that can pinpoint the exact moment where a process crashes but also introduces latency that runs counter to real-time computing. The Phoronix blog has tracked the progress of PREEMPT_RT into the kernel, along with the printk changes that allowed for the threaded/atomic console support crucial to real-time mainlining.
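For a concrete sense of what a real-time kernel offers, here is a minimal sketch (my illustration, not anything from the kernel patches discussed above) of a Linux process asking the scheduler for a real-time policy. On a PREEMPT_RT kernel, a SCHED_FIFO task like this can preempt nearly all other work, which is what makes bounded worst-case latencies possible; the `request_realtime` helper name and the priority value are illustrative choices.

```python
# Minimal sketch: request a real-time scheduling policy on Linux.
# The helper name and priority are illustrative, not from the kernel
# patches described in the article.
import os
import platform


def request_realtime(priority: int = 50) -> None:
    """Ask the kernel to run this process under SCHED_FIFO.

    Requires CAP_SYS_NICE or root; raises PermissionError otherwise.
    """
    param = os.sched_param(priority)
    os.sched_setscheduler(0, os.SCHED_FIFO, param)  # pid 0 = this process


if __name__ == "__main__":
    # PREEMPT_RT kernels typically advertise themselves in the
    # kernel version string.
    print("Kernel version string:", platform.uname().version)
    try:
        request_realtime()
        print("Running under SCHED_FIFO at priority 50")
    except PermissionError:
        print("Real-time scheduling requires CAP_SYS_NICE or root")
```

Even with PREEMPT_RT in the mainline, code like this only gets predictable latency if the rest of the stack avoids unbounded critical sections, which is why the printk rework mattered so much.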

(credit: J Studios via Getty Images)

If you haven’t noticed by now, Big Tech companies have been making plans to invest in the infrastructure necessary to deliver generative AI products like ChatGPT (and beyond) to hundreds of millions of people around the world. That push involves building more AI-accelerating chips, more data centers, and, in some cases, even new nuclear plants to power those data centers.

Along those lines, Microsoft, BlackRock, Global Infrastructure Partners (GIP), and MGX announced a massive new AI investment partnership on Tuesday called the Global AI Infrastructure Investment Partnership (GAIIP). The partnership initially aims to raise $30 billion in private equity capital, which could later turn into $100 billion in total investment when including debt financing.

The group will invest in data centers and supporting power infrastructure for AI development. “The capital spending needed for AI infrastructure and the new energy to power it goes beyond what any single company or government can finance,” Microsoft President Brad Smith said in a statement.

(credit: gremlin via Getty Images)

For the past few years, a conspiracy theory called “Dead Internet theory” has picked up speed as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions found online. The theory says that most social Internet activity today is artificial and designed to manipulate humans for engagement.

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it’s bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It’s available on the iOS App Store, and so far, it’s picking up pointed criticism.

After its creator announced SocialAI as “a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make,” computer security specialist Ian Coldwater quipped on X, “This sounds like actual hell.” Software developer and frequent AI pundit Colin Fraser expressed a similar sentiment: “I don’t mean this like in a mean way or as a dunk or whatever but this actually sounds like Hell. Like capital H Hell.”

(credit: Benj Edwards / Malte Mueller via Getty Images)

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate’s vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate’s vice chair, stated in a press release that AI could help develop “cutting edge, capital efficient content creation opportunities.” He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate’s proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.

(credit: Getty Images)

The FBI has dismantled a massive network of compromised devices that Chinese state-sponsored hackers have used for four years to mount attacks on government agencies, telecoms, defense contractors, and other targets in the US and Taiwan.

The botnet was made up primarily of small office and home office routers, surveillance cameras, network-attached storage, and other Internet-connected devices located all over the world. Over the past four years, US officials said, 260,000 such devices have cycled through the sophisticated network, which is organized in three tiers that allow the botnet to operate with efficiency and precision. At its peak in June 2023, Raptor Train, as the botnet is named, consisted of more than 60,000 commandeered devices, according to researchers from Black Lotus Labs, making it the largest China state botnet discovered to date.

Burning down the house

Raptor Train is the second China state-operated botnet US authorities have taken down this year. In January, law enforcement officials covertly issued commands to disinfect Internet of Things devices that hackers backed by the Chinese government had taken over without the device owners’ knowledge. The Chinese hackers, part of a group tracked as Volt Typhoon, used the botnet for more than a year as a platform to deliver exploits that burrowed deep into the networks of targets of interest. Because the attacks appear to originate from IP addresses with good reputations, they are subjected to less scrutiny from network security defenses, making the bots an ideal delivery proxy. Russia-state hackers have also been caught assembling large IoT botnets for the same purposes.

(credit: Memento | Aurich Lawson)

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again, this time at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.

Under C2PA, this stock image would be labeled as a real photograph if the camera used to take it, and the toolchain for retouching it, supported the C2PA standard. But even as a real photo, does it actually represent reality, and is there a technological solution to that problem? (credit: Smile via Getty Images)

On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it’s an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.

A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they’ve been modified.
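To make the “digital trail” idea concrete: C2PA manifests travel inside the image file itself; in JPEGs they are carried in JUMBF boxes embedded in APP11 segments. Below is a rough, illustrative heuristic (my sketch, not Google’s implementation or the C2PA’s tooling) that only checks whether a JPEG appears to carry a C2PA manifest store. Actual verification also validates a cryptographic signature chain against a trusted signing authority, which requires a full C2PA library.

```python
# Rough heuristic sketch (not a real verifier): detect whether a JPEG
# appears to embed a C2PA manifest store. In JPEG files, C2PA data is
# carried in JUMBF boxes (type "jumb") whose manifest store is labeled
# "c2pa". Real verification must also check the signature chain.
import sys


def looks_like_c2pa_jpeg(path: str) -> bool:
    """Return True if the file contains JUMBF/C2PA marker bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data


if __name__ == "__main__":
    path = sys.argv[1]
    verdict = "appears to" if looks_like_c2pa_jpeg(path) else "does not appear to"
    print(f"{path} {verdict} carry a C2PA manifest")
```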

Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant’s “About this image” feature in Google Search, Lens, and Circle to Search will display this information when available.
