Eight images used in the study; four of them are synthetic. Can you tell which ones? (Answers at bottom of the article.) (credit: Nightingale and Farid (2022))

A study published in the peer-reviewed journal Psychological Science on Monday found that AI-generated faces, specifically those representing white individuals, were perceived as more real than actual face photographs, reports The Guardian. The finding did not extend to images of people of color, likely due to AI models being trained predominantly on images of white individuals—a bias that is well documented in machine learning research.

In the paper, titled “AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones,” researchers from the Australian National University, the University of Toronto, the University of Aberdeen, and University College London coined the term “hyperrealism” to describe the phenomenon in which people judge AI-generated faces to be more real than actual human faces.

In their experiments, the researchers presented white adults with a mix of 100 AI-generated and 100 real white faces, asking them to identify which were real and to rate their confidence in each decision. Across the 124 participants, AI-generated faces were judged to be human 66 percent of the time, compared with 51 percent for real photographs. This trend, however, was not observed in images of people of color, where both AI and real faces were judged to be human about 51 percent of the time, irrespective of the participant’s race.

On Tuesday, Intel pushed microcode updates to fix a high-severity CPU bug that has the potential to be maliciously exploited against cloud-based hosts.

The flaw, affecting virtually all modern Intel CPUs, causes them to “enter a glitch state where the normal rules don’t apply,” Tavis Ormandy, one of several security researchers inside Google who discovered the bug, reported. Once triggered, the glitch state results in unexpected and potentially serious behavior, most notably system crashes that occur even when untrusted code is executed within a guest account of a virtual machine, which, under most cloud security models, is assumed to be safe from such faults. Escalation of privileges is also a possibility.

Very strange behavior

The bug, tracked under the common name Reptar and the designation CVE-2023-23583, is related to how affected CPUs manage prefixes, which change the behavior of instructions sent by running software. Intel x64 decoding generally allows redundant prefixes—meaning those that don’t make sense in a given context—to be ignored without consequence. During testing in August, Ormandy noticed that the REX prefix was generating “unexpected results” when running on Intel CPUs that support a newer feature known as fast short repeat move, which was introduced in the Ice Lake architecture to fix microcoding bottlenecks.
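
To make the prefix mechanics concrete, below is a minimal Python sketch of what a redundant prefix looks like at the byte level. The bytes are hand-assembled for illustration only and are not the actual Reptar trigger; a REX.W prefix normally widens an operand to 64 bits, which is meaningless for the fixed byte-sized MOVSB instruction, so the decoder is expected to simply ignore it.

```python
# Hand-assembled x86-64 encodings (illustration only; NOT the actual
# Reptar trigger sequence). 0xF3 is the REP prefix, 0xA4 is the MOVSB
# opcode, and 0x48 is a REX.W prefix that is redundant here because
# MOVSB always moves single bytes.
rep_movsb     = bytes([0xF3, 0xA4])        # rep movsb
rex_rep_movsb = bytes([0xF3, 0x48, 0xA4])  # rep movsb + redundant REX.W

print(rep_movsb.hex(" "))      # f3 a4
print(rex_rep_movsb.hex(" "))  # f3 48 a4 -- same operation; decoders
                               # normally drop the meaningless prefix
```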

A file photo of Tropical Storm Fiona as seen in a satellite image from 2022. (credit: Getty Images)

On Tuesday, the peer-reviewed journal Science published a study showing that GraphCast, an AI meteorology model from Google DeepMind, has significantly outperformed conventional weather forecasting methods in predicting global weather conditions up to 10 days in advance. The achievement suggests that future weather forecasting may become far more accurate, report The Washington Post and the Financial Times.

In the study, GraphCast demonstrated superior performance over the world’s leading conventional system, operated by the European Centre for Medium-Range Weather Forecasts (ECMWF). In a comprehensive evaluation, GraphCast outperformed ECMWF’s system in 90 percent of 1,380 metrics, including temperature, pressure, wind speed and direction, and humidity at various atmospheric levels.

And GraphCast does all this quickly: “It predicts hundreds of weather variables, over 10 days at 0.25° resolution globally, in under one minute,” write the authors in the paper “Learning skillful medium-range global weather forecasting.”
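
For a sense of scale, here is a quick back-of-the-envelope Python calculation of how many surface grid points that claim covers, assuming the standard regular 0.25-degree latitude/longitude grid used by global models at this resolution:

```python
# Rough grid-size arithmetic for "0.25-degree resolution globally"
# (assumption: a regular lat/lon grid spanning pole to pole).
lons = int(360 / 0.25)      # 1440 longitude columns
lats = int(180 / 0.25) + 1  # 721 latitude rows, including both poles
print(lons * lats)          # 1,038,240 surface points, each holding
                            # many weather variables per forecast step
```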

What do Boeing, an Australian shipping company, the world’s largest bank, and one of the world’s biggest law firms have in common? All four have suffered cybersecurity breaches, most likely at the hands of teenage hackers, after failing to patch a critical vulnerability that security experts have warned of for more than a month, according to a post published Monday.

Besides the US jetliner manufacturer, the victims include DP World Australia, the Australian branch of the Dubai-based logistics company DP World; Industrial and Commercial Bank of China; and Allen & Overy, a multinational law firm, according to Kevin Beaumont, an independent security researcher with one of the most comprehensive views of the cybersecurity landscape. All four companies have confirmed suffering security incidents in recent days, and China’s ICBC has reportedly paid an undisclosed ransom in exchange for encryption keys to data that has been unavailable since the attack.

Citing data that allows the tracking of ransomware operators, as well as people familiar with the breaches, Beaumont said the four companies are among 10 victims he’s aware of currently being extorted by LockBit, one of the world’s most prolific and damaging ransomware crime syndicates. All four of the companies, Beaumont said, were users of a networking product known as Citrix NetScaler and hadn’t patched against a critical vulnerability, despite a patch being available since October 10.

The Nvidia H200 GPU covered with a fanciful blue explosion that figuratively represents raw compute power bursting forth in a glowing flurry. (credit: Nvidia | Benj Edwards)

On Monday, Nvidia announced the HGX H200 Tensor Core GPU, which utilizes the Hopper architecture to accelerate AI applications. It’s a follow-up to the H100 GPU, which was released last year and was previously Nvidia’s most powerful AI GPU chip. If widely deployed, it could lead to far more powerful AI models—and faster response times for existing ones like ChatGPT—in the near future.

According to experts, lack of computing power (often called “compute”) has been a major bottleneck of AI progress this past year, hindering deployments of existing AI models and slowing the development of new ones. Shortages of powerful GPUs that accelerate AI models are largely to blame. One way to alleviate the compute bottleneck is to make more chips, but you can also make AI chips more powerful. That second approach may make the H200 an attractive product for cloud providers.

What’s the H200 good for? Despite the “G” in the name, data center GPUs like this typically aren’t for graphics. GPUs are ideal for AI applications because they perform vast numbers of parallel matrix multiplications, which are necessary for neural networks to function. They are essential in both the training portion of building an AI model and the “inference” portion, where people feed inputs into an AI model and it returns results.
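
A toy example helps show why. In the naive matrix multiply below, a minimal Python sketch rather than anything resembling a real GPU kernel, every output cell is an independent dot product, which is exactly the kind of work that can be spread across thousands of GPU cores at once:

```python
# Naive matrix multiply: each output cell C[i][j] depends only on row
# i of A and column j of B, so all cells can be computed in parallel.
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):          # on a GPU, each (i, j) pair below
        for j in range(m):      # would map to its own thread
            total = 0
            for t in range(k):
                total += A[i][t] * B[t][j]
            C[i][j] = total
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```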

For the first time, researchers have demonstrated that a large portion of cryptographic keys used to protect data in computer-to-server SSH traffic are vulnerable to complete compromise when naturally occurring computational errors occur while the connection is being established.

Underscoring the importance of their discovery, the researchers used their findings to calculate the private portion of almost 200 unique SSH keys they observed in public Internet scans taken over the past seven years. The researchers suspect keys used in IPsec connections could suffer the same fate. SSH is the cryptographic protocol underpinning secure shell connections, which allow computers to remotely access servers, usually in security-sensitive enterprise environments. IPsec is a protocol used by virtual private networks that route traffic through an encrypted tunnel.

The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined. That translates to roughly 1 billion signatures out of the 3.2 billion signatures examined. Of the roughly 1 billion RSA signatures, about one in a million exposed the private key of the host.
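
The textbook illustration of why a single bad signature is catastrophic is the classic RSA-CRT fault attack. The Python sketch below is a toy demonstration with tiny primes, not the researchers’ actual Internet-scale method: when a computational error corrupts one half of the CRT signing computation, a simple gcd against the public modulus recovers a secret prime factor.

```python
# Toy RSA-CRT fault attack (illustration only; real SSH RSA keys are
# 2048+ bits, but the algebra is identical).
from math import gcd

p, q = 61, 53                        # toy secret primes
N = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

m = 42                               # already-padded "message"

# Normal CRT signing: sign mod p and mod q separately, then recombine.
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)
q_inv = pow(q, -1, p)
sig = (sq + q * ((q_inv * (sp - sq)) % p)) % N
assert pow(sig, e, N) == m           # the signature verifies

# A computational error corrupts the mod-p half of the signature.
sp_bad = (sp + 1) % p
bad_sig = (sq + q * ((q_inv * (sp_bad - sq)) % p)) % N

# The faulty signature is still correct mod q but wrong mod p, so
# bad_sig^e - m is a multiple of q alone, and gcd leaks the factor.
leaked_q = gcd((pow(bad_sig, e, N) - m) % N, N)
print(leaked_q, N // leaked_q)       # 53 61 -- private key recovered
```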

Highly invasive malware targeting software developers is once again circulating in Trojanized code libraries, with the latest ones downloaded thousands of times in the last eight months, researchers said Wednesday.

Since January, eight separate developer tools have contained hidden payloads with various nefarious capabilities, security firm Checkmarx reported. The most recent one was released last month under the name “pyobfgood.” Like the seven packages that preceded it, pyobfgood posed as a legitimate obfuscation tool that developers could use to deter reverse engineering and tampering with their code. Once executed, it installed a payload, giving the attacker almost complete control of the developer’s machine. Capabilities include:

Exfiltrate detailed host information
Steal passwords from the Chrome web browser
Set up a keylogger
Download files from the victim’s system
Capture screenshots and record both screen and audio
Render the computer inoperative by ramping up CPU usage, inserting a batch script in the startup directory to shut down the PC, or forcing a BSOD error with a Python script
Encrypt files, potentially for ransom
Deactivate Windows Defender and Task Manager
Execute any command on the compromised host

In all, pyobfgood and the previous seven tools were installed 2,348 times. They targeted developers using the Python programming language. As obfuscators, the tools appealed to Python developers with a reason to keep their code secret because it contained hidden capabilities, trade secrets, or otherwise sensitive functions. The malicious payloads varied from tool to tool, but all were remarkable for their level of intrusiveness.

A critical vulnerability in Atlassian’s Confluence enterprise server app that allows attackers to execute malicious commands and reset servers is under active exploitation by threat actors in attacks that install ransomware, researchers said.

“Widespread exploitation of the CVE-2023-22518 authentication bypass vulnerability in Atlassian Confluence Server has begun, posing a risk of significant data loss,” Glenn Thorpe, senior director of security research and detection engineering at security firm GreyNoise, wrote on Mastodon on Sunday. “So far, the attacking IPs all include Ukraine in their target.”

He pointed to a page showing that between 12 am and 8 am on Sunday UTC (around 5 pm Saturday to 1 am Sunday Pacific Time), three different IP addresses began exploiting the critical vulnerability, which allows attackers to restore a database and execute malicious commands. The IPs have since stopped those attacks, but he said he suspected the exploits were continuing.

On Monday at the OpenAI DevDay event, company CEO Sam Altman announced a major update to its GPT-4 language model called GPT-4 Turbo, which can process a much larger amount of text than GPT-4 and features a knowledge cutoff of April 2023. He also introduced APIs for DALL-E 3, GPT-4 Vision, and text-to-speech—and launched an “Assistants API” that makes it easier for developers to build assistive AI apps.
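
As a quick illustration of what shipped for developers, here is a minimal sketch of calling the new GPT-4 Turbo model through OpenAI’s Python SDK. It assumes the openai package at version 1.0 or later (also released around DevDay) and an OPENAI_API_KEY environment variable; the model name is the preview identifier OpenAI announced for GPT-4 Turbo.

```python
# Minimal GPT-4 Turbo call (assumptions: openai>=1.0 installed and
# OPENAI_API_KEY set in the environment).
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview model name
    messages=[
        {"role": "user", "content": "In one sentence, what is GPT-4 Turbo?"},
    ],
)
print(response.choices[0].message.content)
```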

OpenAI hosted DevDay, its first-ever developer event, on November 6 in San Francisco. During the opening keynote, delivered in front of a small audience, Altman showcased the wider impact of OpenAI’s technology in the world, including how it helps people with tech accessibility. He also shared some stats: over 2 million developers are building apps using OpenAI’s APIs, over 92 percent of Fortune 500 companies are building on the platform, and ChatGPT has over 100 million weekly active users.

At one point, Microsoft CEO Satya Nadella made a surprise appearance on the stage, talking with Altman about the deepening partnership between Microsoft and OpenAI and sharing some general thoughts about the future of the technology, which he thinks will empower people.

“GPTs” will allow ChatGPT users to create custom AI assistants that serve different purposes. (credit: Getty Images)

On Monday, OpenAI announced “GPTs,” a new feature that allows ChatGPT users to create custom versions of its AI assistant that serve different roles or purposes. OpenAI will let users share GPT roles with others, and it plans to introduce a “GPT Store” later this month that will eventually share revenue with creators.

“Since launching ChatGPT, people have been asking for ways to customize ChatGPT to fit specific ways that they use it,” writes OpenAI in a release provided to Ars. “We launched Custom Instructions in July that let you set some preferences, but requests for more control kept coming.”

For example, in a screenshot of the GPTs interface provided by OpenAI, the upcoming GPT Store shows custom AI assistants called “Writing Coach,” “Sous Chef,” “Math Mentor,” and “Sticker Whiz” available for selection. The screenshot describes these GPTs as assistants designed, respectively, to give writing feedback, suggest recipes, help with homework, and turn your ideas into die-cut stickers.
