[Image: A movable robotic face covered with living human skin cells. (credit: Takeuchi et al.)]

In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.

The researchers note that using living skin tissue as a robot covering has benefits, as it’s flexible enough to convey emotions and can potentially repair itself. “As the role of robots continues to evolve, the materials used to cover social robots need to exhibit lifelike functions, such as self-healing,” wrote the researchers in the study.

Shoji Takeuchi, Michio Kawai, Minghao Nie, and Haruka Oda authored the study, titled “Perforation-type anchors inspired by skin ligament for robotic face covered with living skin,” which is due for July publication in Cell Reports Physical Science. We learned of the study from a report published earlier this week by New Scientist.

[Image: An illustration created by OpenAI. (credit: OpenAI)]

On Thursday, OpenAI researchers unveiled CriticGPT, a new AI model designed to identify mistakes in code generated by ChatGPT. It aims to enhance the process of making AI systems behave in ways humans want (called “alignment”) by assisting the human reviewers who, through Reinforcement Learning from Human Feedback (RLHF), make large language model (LLM) outputs more accurate.

As outlined in a new research paper called “LLM Critics Help Catch LLM Bugs,” OpenAI created CriticGPT to act as an AI assistant to the human trainers who review programming code generated by ChatGPT. CriticGPT, based on the GPT-4 family of LLMs, analyzes the code and points out potential errors, making it easier for humans to spot mistakes that might otherwise go unnoticed. The researchers trained CriticGPT on a dataset of code samples with intentionally inserted bugs, teaching it to recognize and flag various coding errors.

The researchers found that annotators preferred CriticGPT’s critiques over human critiques in 63 percent of cases involving naturally occurring LLM errors. They also found that human-machine teams using CriticGPT wrote more comprehensive critiques than humans alone, while reducing confabulation (hallucination) rates compared to AI-only critiques.

Developing an automated critic

The development of CriticGPT involved training the model on a large number of inputs containing deliberately inserted mistakes. Human trainers were asked to modify code written by ChatGPT, introducing errors and then providing example feedback as if they had discovered these bugs. This process allowed the model to learn how to identify and critique various types of coding errors.
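
To make that concrete, here is a rough sketch of what assembling one such training pair might look like. It is purely illustrative: OpenAI has not published its pipeline, and every name below is our own invention.

```python
from dataclasses import dataclass

@dataclass
class CritiqueExample:
    """One hypothetical training pair: tampered code plus the feedback a
    trainer wrote as if they had discovered the planted bug themselves."""
    tampered_code: str  # ChatGPT output with a bug inserted by a trainer
    critique: str       # the trainer's written critique flagging the bug

# The trainer deleted a factor of r from correct code, then wrote the
# critique that the model should learn to produce for this input.
example = CritiqueExample(
    tampered_code="def area(r):\n    return 3.14159 * r\n",
    critique="area() drops the second factor of r, so it returns a "
             "circumference-like value rather than an area.",
)
```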

[Image credit: Getty Images]

Mac malware that steals passwords, cryptocurrency wallets, and other sensitive data has been spotted circulating through Google ads, making it at least the second time in as many months the widely used ad platform has been abused to infect web surfers.

The latest ads, found by security firm Malwarebytes on Monday, promote Mac versions of Arc, an unconventional browser that became generally available for macOS last July. The listing promises users a “calmer, more personal” experience with less clutter and fewer distractions, a marketing message that mimics the one communicated by The Browser Company, the start-up maker of Arc.

When verified isn’t verified

According to Malwarebytes, clicking on the ads redirected web surfers to arc-download[.]com, a completely fake Arc browser page that looks nearly identical to the real one.

[Image: Al Michaels looks on prior to the game between the Minnesota Vikings and Philadelphia Eagles at Lincoln Financial Field on September 14, 2023, in Philadelphia, Pennsylvania. (credit: Getty Images)]

On Wednesday, NBC announced plans to use an AI-generated clone of famous sports commentator Al Michaels’ voice to narrate daily streaming video recaps of the 2024 Summer Olympics in Paris, which start on July 26. The AI-powered narration will feature in “Your Daily Olympic Recap on Peacock,” NBC’s streaming service. But this new, high-profile use of voice cloning worries critics, who say the technology may muscle out up-and-coming sports commentators by keeping old personas around forever.

NBC says it has created a “high-quality AI re-creation” of Michaels’ voice, trained on Michaels’ past NBC appearances to capture his distinctive delivery style.

The veteran broadcaster, revered in the sports commentator world for his iconic “Do you believe in miracles? Yes!” call during the 1980 Winter Olympics, has been covering sports on TV since 1971, including a high-profile run of play-by-play coverage of NFL football games for both ABC and NBC since the 1980s. NBC dropped him from NFL coverage in 2023, however, possibly due to his age.

A critical vulnerability recently discovered in a widely used piece of software is putting huge swaths of the Internet at risk of devastating hacks, and attackers have already begun actively trying to exploit it in real-world attacks, researchers warn.

The software, known as MOVEit and sold by Progress Software, allows enterprises to transfer and manage files using various protocols, including SFTP, SCP, and HTTP, in ways that comply with regulations mandated under PCI and HIPAA. At the time this post went live, Internet scans indicated it was installed in almost 1,800 networks around the world, with the largest number in the US. A separate scan performed Tuesday by security firm Censys found 2,700 such instances.

Causing mayhem with a null string

Last year, a critical MOVEit vulnerability led to the compromise of more than 2,300 organizations, including Shell, British Airways, the US Department of Energy, and Ontario’s government birth registry, BORN Ontario, the latter of which led to the compromise of information for 3.4 million people.

[Image: A screen capture from the partially AI-generated Toys “R” Us brand film created using Sora. (credit: Toys R Us)]

On Monday, Toys “R” Us announced that it had partnered with an ad agency called Native Foreign to create what it calls “the first-ever brand film using OpenAI’s new text-to-video tool, Sora.” OpenAI debuted Sora in February, but the video synthesis tool has not yet become available to the public. The brand film tells the story of Toys “R” Us founder Charles Lazarus using AI-generated video clips.

“We are thrilled to partner with Native Foreign to push the boundaries of Sora, a groundbreaking new technology from OpenAI that’s gaining global attention,” wrote Toys “R” Us on its website. “Sora can create up to one-minute-long videos featuring realistic scenes and multiple characters, all generated from text instruction. Imagine the excitement of creating a young Charles Lazarus, the founder of Toys “R” Us, and envisioning his dreams for our iconic brand and beloved mascot Geoffrey the Giraffe in the early 1930s.”

The company says the commercial, titled The Origin of Toys “R” Us, was co-produced by Toys “R” Us Studios President Kim Miller Olko, who served as executive producer, and Native Foreign’s Nik Kleverov, who directed. “Charles Lazarus was a visionary ahead of his time, and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” Miller Olko said in a statement.

[Image: Illustration of a brain inside of a light bulb. (credit: Getty Images)]

Researchers claim to have developed a new way to run AI language models more efficiently by eliminating matrix multiplication from the process, fundamentally redesigning the neural network operations that GPU chips currently accelerate. The findings, detailed in a recent preprint paper from researchers at the University of California Santa Cruz, UC Davis, LuxiTech, and Soochow University, could have deep implications for the environmental impact and operational costs of AI systems.

Matrix multiplication (often abbreviated as “MatMul”) is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. That ability briefly made Nvidia the most valuable company in the world last week; the company currently holds an estimated 98 percent market share for data center GPUs, which are commonly used to power AI systems like ChatGPT and Google Gemini.
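
To see why GPUs dominate this workload, note that a single dense layer is just a matrix-vector product whose output elements are sums of many independent multiplications. A minimal NumPy illustration (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096)).astype(np.float32)  # layer weights
x = rng.standard_normal(4096).astype(np.float32)          # input activations

# y = W @ x performs roughly 16.8 million multiply-accumulate operations;
# a GPU executes those multiplications in parallel, hence its advantage.
y = W @ x
```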

In the new paper, titled “Scalable MatMul-free Language Modeling,” the researchers describe creating a custom 2.7 billion parameter model without using MatMul that performs similarly to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a custom-programmed FPGA chip that uses about 13 watts of power (not counting a host GPU’s power draw). The implication, they write, is that the approach “paves the way for the development of more efficient and hardware-friendly architectures.”
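
The core trick is constraining weights to the ternary values {-1, 0, +1}, so each “multiplication” collapses into an addition, a subtraction, or a skip. A simplified sketch of the idea (our illustration, not the authors’ code, which also rebuilds other parts of the architecture):

```python
import numpy as np

def ternary_linear(x, W):
    """A matrix-vector product with weights restricted to {-1, 0, +1}:
    every output is built from additions and subtractions alone."""
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i, row in enumerate(W):
        out[i] = x[row == 1].sum() - x[row == -1].sum()  # zeros are skipped
    return out

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(8, 16)).astype(np.int8)  # ternary weights
x = rng.standard_normal(16).astype(np.float32)

# Matches the conventional MatMul result without multiplying anything.
assert np.allclose(ternary_linear(x, W), W.astype(np.float32) @ x, atol=1e-5)
```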

[Image credit: Getty Images]

WordPress plugins running on as many as 36,000 websites have been backdoored in a supply-chain attack with unknown origins, security researchers said on Monday.

So far, five plugins are known to be affected in the campaign, which was active as recently as Monday morning, researchers from security firm Wordfence reported. Over the past week, unknown threat actors have added malicious functions to updates available for the plugins on WordPress.org, the official site for the open source WordPress CMS software. When installed, the updates automatically create an attacker-controlled administrative account that provides full control over the compromised site. The updates also add content designed to goose search results.

Poisoning the well

“The injected malicious code is not very sophisticated or heavily obfuscated and contains comments throughout making it easy to follow,” the researchers wrote. “The earliest injection appears to date back to June 21st, 2024, and the threat actor was still actively making updates to plugins as recently as 5 hours ago.”
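
Given those details, unobfuscated code, admin-account creation, and a June 21 start date, a site owner could run crude triage along these lines. This is our own illustrative sketch: the plugin path and the string indicators are assumptions for demonstration, not indicators published by Wordfence.

```python
import os
import time

PLUGINS_DIR = "/var/www/html/wp-content/plugins"  # adjust to your install
CUTOFF = time.mktime((2024, 6, 21, 0, 0, 0, 0, 0, -1))  # earliest known injection
SUSPECT = (b"wp_insert_user", b"wp_create_user")  # assumed, not published IoCs

# Flag plugin files modified since the campaign began that also reference
# account creation; any hit needs manual review, not automatic deletion.
for root, _, files in os.walk(PLUGINS_DIR):
    for name in files:
        if not name.endswith(".php"):
            continue
        path = os.path.join(root, name)
        if os.path.getmtime(path) < CUTOFF:
            continue
        with open(path, "rb") as fh:
            body = fh.read()
        if any(marker in body for marker in SUSPECT):
            print("suspicious:", path)
```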

[Image: Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson’s music. (credit: Getty Images)]

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (e.g., “a dubstep song about Linus Torvalds”).

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies’ use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.

[Image credit: Anthropic / Benj Edwards]

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of “3.5” models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000-token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.
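
For developers, API access goes through Anthropic’s standard SDK. A minimal sketch using the anthropic Python package (the dated model identifier below follows Anthropic’s naming at launch; verify it against the current docs):

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()

# Send one user message to Claude 3.5 Sonnet via the Messages API.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed dated ID; check current docs
    max_tokens=512,
    messages=[{"role": "user", "content": "What does a 200,000-token context window allow?"}],
)
print(message.content[0].text)  # the first content block holds the text reply
```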

So far, people outside of Anthropic seem impressed. “This model is really, really good,” wrote independent AI researcher Simon Willison on X. “I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump).”

As we’ve written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).
