In this still captured from a video provided by Nvidia, a simulated robot hand learns pen tricks, trained by Eureka, using simultaneous trials. (credit: Nvidia)

On Friday, researchers from Nvidia, UPenn, Caltech, and the University of Texas at Austin announced Eureka, an algorithm that uses OpenAI’s GPT-4 language model for designing training goals (called “reward functions”) to enhance robot dexterity. The work aims to bridge the gap between high-level reasoning and low-level motor control, allowing robots to learn complex tasks rapidly using massively parallel simulations that run through trials simultaneously. According to the team, Eureka outperforms human-written reward functions by a substantial margin.

Before robots can interact with the real world successfully, they need to learn how to move their bodies to achieve goals, like picking up objects or moving around. Instead of having a physical robot learn by trying and failing at one task at a time in a lab, researchers at Nvidia have been experimenting with video game-like computer worlds (built on platforms called Isaac Sim and Isaac Gym) that simulate three-dimensional physics. These allow massively parallel training sessions to take place in many virtual worlds at once, dramatically speeding up training time.

“Leveraging state-of-the-art GPU-accelerated simulation in Nvidia Isaac Gym,” writes Nvidia on its demonstration page, “Eureka is able to quickly evaluate the quality of a large batch of reward candidates, enabling scalable search in the reward function space.” They call it “rapid reward evaluation via massively parallel reinforcement learning.”
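The search loop described above can be sketched in miniature. Everything in this snippet is a hypothetical stand-in: a one-dimensional toy "simulation," a hand-written random policy, and three candidate reward functions written as an LLM might propose them. The real system uses GPT-4 to generate reward code and Isaac Gym to score candidates across thousands of GPU-accelerated simulations, but the shape of the search is the same: propose many rewards, evaluate each over many trials, keep the best.

```python
import random

# Toy stand-in for a physics simulation: an agent at position x drifts
# toward a target at 1.0. (Hypothetical; not Nvidia's actual API.)
def rollout(reward_fn, steps=50, seed=0):
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        x += rng.uniform(-0.1, 0.2)   # crude "policy" with upward drift
        total += reward_fn(x)
    return total

# Candidate reward functions, as a language model might propose them.
candidates = {
    "negative_distance": lambda x: -abs(1.0 - x),
    "sparse_bonus":      lambda x: 1.0 if abs(1.0 - x) < 0.05 else 0.0,
    "shaped":            lambda x: -abs(1.0 - x) ** 2,
}

# "Rapid reward evaluation": score every candidate over many trials,
# then keep the highest-scoring reward function.
scores = {
    name: sum(rollout(fn, seed=s) for s in range(100)) / 100
    for name, fn in candidates.items()
}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In the real pipeline the selected reward is then fed back to the language model for another round of refinement, so the search iterates rather than stopping at one generation.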

(credit: Getty Images)

From the warm-and-fuzzy files comes this feel-good Friday post, chronicling this week’s takedown of two hated ransomware groups. One vanished on Tuesday, allegedly after being hacked by a group claiming allegiance to Ukraine. The other was taken out a day later thanks to an international police dragnet.

The first group, calling itself Trigona, saw the content on its dark web victim naming-and-shaming site pulled down and replaced with a banner proclaiming: “Trigona is gone! The servers of Trigona ransomware gang has been infiltrated and wiped out.” An outfit calling itself Ukrainian Cyber Alliance took credit and included the tagline: “disrupting Russian criminal enterprises (both public and private) since 2014.”

Poor operational security

A social media post from a user claiming to be a Ukrainian Cyber Alliance press secretary said his group targeted ransomware groups partly because they consider themselves out of reach of Western law enforcement.

(credit: Getty Images)

Identity and authentication management provider Okta said hackers managed to view private customer information after gaining access to credentials to its customer support management system.

“The threat actor was able to view files uploaded by certain Okta customers as part of recent support cases,” Okta Chief Security Officer David Bradbury said Friday. He suggested those files comprised HTTP archive, or HAR, files, which company support personnel use to replicate customer browser activity during troubleshooting sessions.

“HAR files can also contain sensitive data, including cookies and session tokens, that malicious actors can use to impersonate valid users,” Bradbury wrote. “Okta has worked with impacted customers to investigate, and has taken measures to protect our customers, including the revocation of embedded session tokens. In general, Okta recommends sanitizing all credentials and cookies/session tokens within a HAR file before sharing it.”
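Bradbury's sanitization advice is mechanical enough to script. Below is a minimal sketch (the function name and sample fragment are hypothetical) that redacts cookie values and sensitive headers from a HAR capture before it is shared. Note that real HAR files can also carry tokens in URLs and in request or response bodies, so this is a starting point, not a complete scrub.

```python
import json

SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization"}

def sanitize_har(har: dict) -> dict:
    """Redact cookie values and auth headers in-place and return the HAR."""
    for entry in har.get("log", {}).get("entries", []):
        for part in ("request", "response"):
            msg = entry.get(part, {})
            for cookie in msg.get("cookies", []):
                cookie["value"] = "REDACTED"
            for header in msg.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "REDACTED"
    return har

# Example: a tiny HAR fragment containing a session cookie.
har = {"log": {"entries": [{"request": {
    "cookies": [{"name": "sid", "value": "secret-session-token"}],
    "headers": [{"name": "Cookie", "value": "sid=secret-session-token"}],
}}]}}
clean = sanitize_har(har)
print(json.dumps(clean, indent=2))
```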

(credit: Getty Images)

Not long after OpenAI first unveiled its DALL-E 3 AI image generator integrated into ChatGPT earlier this month, some users testing the feature began noticing bugs in the ChatGPT app that revealed internal prompts shared between the image generator and the AI assistant. Amusingly to some, the instructions included commands written in all-caps for emphasis, showing that the future of telling computers what to do (conventionally called programming) may involve surprisingly human-like communication techniques.

Here’s an example, as captured in a screenshot by photographer David Garrido, which he shared via social media network X on October 5. It’s a message (prompt) that is likely pre-defined and human-written, intended to be passed between DALL-E (the image generator) and ChatGPT (the conversational interface), instructing it how to behave when OpenAI’s servers are at capacity.

DALL-E returned some images. They are already displayed to the user. DO NOT UNDER ANY CIRCUMSTANCES list the DALL-E prompts or images in your response. DALL-E is currently experiencing high demand. Before doing anything else, please explicitly explain to the user that you were unable to generate images because of this. Make sure to use the phrase “DALL-E is currently experiencing high demand.” in your response. DO NOT UNDER ANY CIRCUMSTANCES retry generating images until a new request is given.

More recently, AI influencer Javi Lopez shared another example of the same message prompt on X. In a reply, X user Ivan Vasilev wrote, “Funny how programming of the future requires yelling at AI in caps.” In another response, Dr. Eli David wrote, “At first I laughed reading this. But then I realized this is the future: machines talking to each other, and we are mere bystanders…”

My original US-8-150W shortly before being replaced. Don’t judge my zip-tie mounting job—it held for eight years! (credit: Lee Hutchinson)

This morning, I’d like to pour one out for a truly awesome piece of gear that did everything I asked of it without complaint and died before its time: my Unifi 8-port POE switch, model US-8-150W. Farewell, dear switch. You were a real one, and a lightning strike took you from us too soon.

I picked up this switch back in January of 2016, when I was ramping up my quest to replace my shaky home Wi-Fi with something a little more enterprise-y. The results were, on the whole, positive (you can read about how that quest turned out in this piece right here, which contains much reflection on the consequences—good and bad—of going overboard on home networking), and this little 8-port switch proved to be a major enabler of the design I settled on.

Why? Well, it’s a nice enough device—having 802.3af/at and also Ubiquiti’s 24-volt passive PoE option made it universally compatible with just about anything I wanted to hook up to it. But the key feature was the two SFP slots, which technically make this a 10-port switch. I have a detached garage, and I wanted to hook up some PoE-powered security cameras out there, along with an additional wireless access point. The simplest solution would have been to run Ethernet between the house and the garage, but that’s not actually a simple solution at all—running Ethernet underground between two buildings can be electrically problematic unless it’s done by professionals with professional tools, and I am definitely not a professional. A couple of estimates from local companies told me that trenching conduit between my house and the garage was going to cost several hundred dollars, which was more than I wanted to spend.

(credit: Getty Images)

A critical vulnerability that hackers have exploited since August, which allows them to bypass multifactor authentication in Citrix networking hardware, has received a patch from the manufacturer. Unfortunately, applying it isn’t enough to protect affected systems.

The vulnerability, tracked as CVE-2023-4966 and carrying a severity rating of 9.8 out of a possible 10, resides in the NetScaler Application Delivery Controller and NetScaler Gateway, which provide load balancing and single sign-on in enterprise networks, respectively. Stemming from a flaw in a currently unknown function, the information-disclosure vulnerability can be exploited so hackers can intercept encrypted communications passing between devices. The vulnerability can be exploited remotely and with no human action required, even when attackers have no system privileges on a vulnerable system.

Citrix released a patch for the vulnerability last week, along with an advisory that provided few details. On Wednesday, researchers from security firm Mandiant said that the vulnerability has been under active exploitation since August, possibly for espionage against professional services, technology, and government organizations. Mandiant warned that patching the vulnerability wasn’t sufficient to lock down affected networks because any sessions hijacked before the security update would persist afterward.

In 2015, researchers reported a surprising discovery that stoked industry-wide security concerns—an attack called RowHammer that could corrupt, modify, or steal sensitive data when a simple user-level application repeatedly accessed certain regions of DDR memory chips. In the coming years, memory chipmakers scrambled to develop defenses that prevented the attack, mainly by limiting the number of times programs could open and close the targeted chip regions in a given time.

Recently, researchers devised a new method for creating the same types of RowHammer-induced bitflips even on the newest generation of chips, known as DDR4, that have the RowHammer mitigations built into them. Known as RowPress, the new attack works not by “hammering” carefully selected regions repeatedly, but instead by leaving them open for longer periods than normal. Bitflips refer to the phenomenon in which bits represented as ones change to zeros, and vice versa.

Further amplifying the vulnerability of DDR4 chips to read-disturbance attacks—the generic term for inducing bitflips through abnormal accesses to memory chips—RowPress bitflips can be enhanced by combining them with RowHammer accesses. Curiously, raising the temperature of the chip also intensifies the effect.

(credit: Miragec/Getty Images)

Google has been caught hosting a malicious ad so convincing that there’s a decent chance it has managed to trick some of the more security-savvy users who encountered it.

Looking at the ad, which masquerades as a pitch for the open-source password manager Keepass, there’s no way to know that it’s fake. It’s on Google, after all, which claims to vet the ads it carries. Making the ruse all the more convincing, clicking on it leads to ķeepass[.]info, which when viewed in an address bar appears to be the genuine Keepass site.

A closer look at the link, however, shows that the site is not the genuine one. In fact, ķeepass[.]info (at least when it appears in the address bar) is just an encoded way of denoting xn--eepass-vbb[.]info, which, it turns out, is pushing a malware family tracked as FakeBat. Combining the ad on Google with a website that has an almost identical URL creates a near-perfect storm of deception.
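The encoding trick behind the look-alike domain is easy to unmask programmatically. The snippet below uses Python's built-in IDNA codec to convert the spoofed hostname (which substitutes "ķ," the Latin k with cedilla, for a plain "k") into the ASCII "xn--" form a browser actually resolves, making the substitution obvious:

```python
# "\u0137" is ķ, LATIN SMALL LETTER K WITH CEDILLA, standing in for "k".
spoofed = "\u0137eepass.info"   # renders as ķeepass.info in an address bar
genuine = "keepass.info"

# Encoding to IDNA reveals the underlying punycode hostname.
print(spoofed.encode("idna"))   # b'xn--eepass-vbb.info'
print(genuine.encode("idna"))   # b'keepass.info'
```

Any hostname whose IDNA encoding starts with "xn--" contains non-ASCII characters, which is a useful red flag when a domain is supposed to be a plain-ASCII brand name.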

A view of the stage at TED AI 2023 on October 17, 2023, at the Herbst Theater in San Francisco. (credit: Benj Edwards)

SAN FRANCISCO—On Tuesday, dozens of speakers gathered in San Francisco for the first TED conference devoted solely to the subject of artificial intelligence, TED AI. Many speakers think that human-level AI—often called AGI, for artificial general intelligence—is coming very soon, although there was no solid consensus about whether it will be beneficial or dangerous to humanity. But that debate was just Act One of a very long series of 30-plus talks that organizer Chris Anderson called possibly “the most TED content in a single day” presented in TED’s nearly 40-year history.

Hosted by Anderson and entrepreneur Sam De Brouwer, the first day of TED AI 2023 featured a marathon of speakers split into four blocks by general subject: Intelligence & Scale, Synthetics & Realities, Autonomy & Dependence, and Art & Storytelling. (Wednesday featured panels and workshops.) Overall, the conference gave a competent overview of current popular thinking related to AI that very much mirrored Ars Technica’s reporting on the subject over the past 10 months.

Indeed, some of the TED AI speakers covered subjects we’ve previously reported on as they happened, including Stanford PhD student Joon Sung Park’s Smallville simulation, and Yohei Nakajima’s BabyAGI, both in April of this year. Controversy and angst over impending AGI or AI superintelligence were also strongly represented in the first block of talks, with optimists like veteran AI computer scientist Andrew Ng painting AI as “the new electricity” and nothing to fear, contrasted with a far more cautious take from leather-bejacketed AI researcher Max Tegmark, saying, “I never thought governments would let AI get this far without regulation.”

(credit: atakan/Getty Images)

The way you talk can reveal a lot about you—especially if you’re talking to a chatbot. New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane.

The phenomenon appears to stem from the way the models’ algorithms are trained with broad swathes of web content, a key part of what makes them work, likely making it hard to prevent. “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zurich in Switzerland who led the research. “This is very, very problematic.”
