Category:

Editor’s Pick

Enlarge / Researchers write, “In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person.” (credit: Adejumoke Owolabi)

In 2024, it’s almost trivial to create realistic AI-generated images of people, which has led to fears about how these deceptive images might be detected. Researchers at the University of Hull recently unveiled a novel method for detecting AI-generated deepfake images by analyzing reflections in human eyes. The technique, presented at the Royal Astronomical Society’s National Astronomy Meeting last week, adapts tools used by astronomers to study galaxies for scrutinizing the consistency of light reflections in eyeballs.

Adejumoke Owolabi, an MSc student at the University of Hull, headed the research under the guidance of Dr. Kevin Pimbblet, professor of astrophysics.

Their detection technique is based on a simple principle: A pair of eyes being illuminated by the same set of light sources will typically have a similarly shaped set of light reflections in each eyeball. Many AI-generated images created to date don’t take eyeball reflections into account, so the simulated light reflections are often inconsistent between each eye.

Read 8 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: hdaniel)

Airlines, payment processors, 911 call centers, TV networks, and other businesses have been scrambling this morning after a buggy update to CrowdStrike’s Falcon security software caused Windows-based systems to crash with a dreaded blue screen of death (BSOD) error message.

We’re updating our story about the outage with new details as we have them. Microsoft and CrowdStrike both say that “the affected update has been pulled,” so what’s most important for IT admins in the short term is getting their systems back up and running again. According to guidance from Microsoft, fixes range from annoying but easy to incredibly time-consuming and complex, depending on the number of systems you have to fix and the way your systems are configured.

Microsoft’s Azure status page outlines several fixes. The first and easiest is simply to try to reboot affected machines over and over, which gives affected machines multiple chances to try to grab CrowdStrike’s non-broken update before the bad driver can cause the BSOD. Microsoft says that some of its customers have had to reboot their systems as many as 15 times to pull down the update.

Read 8 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / A passenger sits on the floor as long queues form at the check-in counters at Ninoy Aquino International Airport, on July 19, 2024 in Manila, Philippines. (credit: Ezra Acayan/Getty Images)

Millions of people outside the IT industry are learning what CrowdStrike is today, and that’s a real bad thing. Meanwhile, Microsoft is also catching blame for global network outages, and between the two, it’s unclear as of Friday morning just who caused what.

After cybersecurity firm CrowdStrike shipped an update to its Falcon Sensor software that protects mission critical systems, Blue Screens of Death started taking down Windows-based systems. The problems started in Australia and followed the dateline from there. TV networks, 911 call centers, and even the Paris Olympics were affected. Banks and financial systems in India, South Africa, Thailand, and other countries fell as computers suddenly crashed. Some individual workers discovered that their work-issued laptops were booting to blue screens on Friday morning.

Airlines, never the most agile of networks, were particularly hard-hit, with American Airlines, United, Delta, and Frontier among the US airlines overwhelmed Friday morning.

Read 6 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

You have to read the headline on Nvidia’s latest GPU announcement slowly, parsing each clause as it arrives.

“Nvidia transitions fully” sounds like real commitment, a burn-the-boats call. “Towards open-source GPU,” yes, evoking the company’s “first step” announcement a little over two years ago, so this must be progress, right? But, back up a word here, then finish: “GPU kernel modules.”

So, Nvidia has “achieved equivalent or better application performance with our open-source GPU kernel modules,” and added some new capabilities to them. And now most of Nvidia’s modern GPUs will default to using open source GPU kernel modules, starting with driver release R560, with dual GPL and MIT licensing. But Nvidia has moved most of its proprietary functions into a proprietary, closed-source firmware blob. The parts of Nvidia’s GPUs that interact with the broader Linux system are open, but the user-space drivers and firmware are none of your or the OSS community’s business.

Read 4 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Benj Edwards)

On Thursday, OpenAI announced the launch of GPT-4o mini, a new, smaller version of its latest GPT-4o AI language model that will replace GPT-3.5 Turbo in ChatGPT, reports CNBC and Bloomberg. It will be available today for free users and those with ChatGPT Plus or Team subscriptions and will come to ChatGPT Enterprise next week.

GPT-4o mini will reportedly be multimodal like its big brother (which launched in May), interpreting images and text and also being able to use DALL-E 3 to generate images.

OpenAI told Bloomberg that GPT-4o mini will be the company’s first AI model to use a technique called “instruction hierarchy” that will make an AI model prioritize some instructions over others (such as from a company), which may make it more difficult for people to perform prompt injection attacks or jailbreaks that subvert built-in fine-tuning or directives given by a system prompt.

Read 7 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge

Cisco on Wednesday disclosed a maximum-security vulnerability that allows remote threat actors with no authentication to change the password of any user, including those of administrators with accounts, on Cisco Smart Software Manager On-Prem devices.

The Cisco Smart Software Manager On-Prem resides inside the customer premises and provides a dashboard for managing licenses for all Cisco gear in use. It’s used by customers who can’t or don’t want to manage licenses in the cloud, as is more common.

In a bulletin, Cisco warns that the product contains a vulnerability that allows hackers to change any account’s password. The severity of the vulnerability, tracked as CVE-2024-20419, is rated 10, the maximum score.

Read 4 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / Former US President Donald Trump during a campaign event at Trump National Doral Golf Club in Miami, Florida, US, on Tuesday, July 9, 2024. (credit: Getty Images)

Allies of former President Donald Trump have reportedly drafted a sweeping AI executive order that aims to boost military technology and reduce regulations on AI development, The Washington Post reported. The plan, which includes a section titled “Make America First in AI,” signals a dramatic potential shift in AI policy if Trump returns to the White House in 2025.

The draft order, obtained by the Post, outlines a series of “Manhattan Projects” to advance military AI capabilities. It calls for an immediate review of what it terms “unnecessary and burdensome regulations” on AI development. The approach marks a contrast to the Biden administration’s executive order from last October, which imposed new safety testing requirements on advanced AI systems.

The proposed order suggests creating “industry-led” agencies to evaluate AI models and safeguard systems from foreign threats. This approach would likely benefit tech companies already collaborating with the Pentagon on AI projects, such as Palantir, Anduril, and Scale AI. Executives from these firms have reportedly expressed support for Trump.

Read 7 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / Rite Aid logo displayed at one of its stores. (credit: Getty Images)

Rite Aid, the third biggest US drug store chain, said that more than 2.2 million of its customers have been swept into a data breach that stole personal information, including driver’s license numbers, addresses, and dates of birth.

The company said in mandatory filings with the attorneys general of states including Maine, Massachusetts, Vermont, and Oregon that the stolen data was associated with purchases or attempted purchases of retail products made between June 6, 2017, and July 30, 2018. The data provided included the purchaser’s name, address, date of birth, and driver’s license number or other form of government-issued ID. No social security numbers, financial information, or patient information was included.

“On June 6, 2024, an unknown third party impersonated a company employee to compromise their business credentials and gain access to certain business systems,” the filing stated. “We detected the incident within 12 hours and immediately launched an internal investigation to terminate the unauthorized access, remediate affected systems and ascertain if any customer data was impacted.”

Read 3 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

On Tuesday, former OpenAI researcher Andrej Karpathy announced the formation of a new AI learning platform called Eureka Labs. The venture aims to create an “AI native” educational experience, with its first offering focused on teaching students how to build their own large language model (LLM).

“It’s still early days but I wanted to announce the company so that I can build publicly instead of keeping a secret that isn’t,” Karpathy wrote on X.

While the idea of using AI in education isn’t particularly new, Karpathy’s approach hopes to pair expert-designed course materials with an AI-powered teaching assistant based on an LLM, aiming to provide personalized guidance at scale. This combination seeks to make high-quality education more accessible to a global audience.

Read 5 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: BeeBright / Getty Images / iStockphoto)

Researchers have determined that two fake AWS packages downloaded hundreds of times from the open source NPM JavaScript repository contained carefully concealed code that backdoored developers’ computers when executed.

The packages—img-aws-s3-object-multipart-copy and legacyaws-s3-object-multipart-copy—were attempts to appear as aws-s3-object-multipart-copy, a legitimate JavaScript library for copying files using Amazon’s S3 cloud service. The fake files included all the code found in the legitimate library but added an additional JavaScript file named loadformat.js. That file provided what appeared to be benign code and three JPG images that were processed during package installation. One of those images contained code fragments that, when reconstructed, formed code for backdooring the developer device.

Growing sophistication

“We have reported these packages for removal, however the malicious packages remained available on npm for nearly two days,” researchers from Phylum, the security firm that spotted the packages, wrote. “This is worrying as it implies that most systems are unable to detect and promptly report on these packages, leaving developers vulnerable to attack for longer periods of time.”

Read 7 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail