Category:

Editor’s Pick

Enlarge (credit: Benj Edwards / OpenAI)

On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with “all of humanity,” as written in its charter.

A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It’s an approach taken by some of OpenAI’s competitors, such as Anthropic and Elon Musk’s xAI.

In a notable change under the new plan, CEO Sam Altman would receive equity in the for-profit company for the first time. Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman’s previous stance of not taking equity in the company, which he had maintained was in line with OpenAI’s mission to benefit humanity rather than individuals.

Read 8 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

The National Institute of Standards and Technology (NIST), the federal body that sets technology standards for governmental agencies, standards organizations, and private companies, has proposed barring some of the most vexing and nonsensical password requirements. Chief among them: mandatory resets, required or restricted use of certain characters, and the use of security questions.

Choosing strong passwords and storing them safely is one of the most challenging parts of a good cybersecurity regimen. More challenging still is complying with password rules imposed by employers, federal agencies, and providers of online services. Frequently, the rules—ostensibly to enhance security hygiene—actually undermine it. And yet, the nameless rulemakers impose the requirements anyway.

Stop the madness, please!

Last week, NIST SP 800-63-4, the latest version of its Digital Identity Guidelines. At roughly 35,000 words and filled with jargon and bureaucratic terms, the document is nearly impossible to read all the way through and just as hard to understand fully. It sets both the technical requirements and recommended best practices for determining the validity of methods used to authenticate digital identities online. Organizations that interact with the federal government online are required to be in compliance.

Read 9 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / Mira Murati, Chief Technology Officer of OpenAI, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023. (credit: PATRICK T. FALLON via Getty Images)

On Wednesday, OpenAI Chief Technical Officer Mira Murati announced she is leaving the company in a surprise resignation shared on the social network X. Murati joined OpenAI in 2018, serving for six-and-a-half years in various leadership roles, most recently as the CTO.

“After much reflection, I have made the difficult decision to leave OpenAI,” she wrote in a letter to the company’s staff. “While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years,” she continued, referring to OpenAI CEO Sam Altman and President Greg Brockman. “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right.”

At OpenAI, Murati was in charge of overseeing the company’s technical strategy and product development, including the launch and improvement of DALL-E, Codex, Sora, and the ChatGPT platform, while also leading research and safety teams. In public appearances, Murati often spoke about ethical considerations in AI development.

Read 10 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / Putting the “chat” in ChatGPT (credit: Getty Images)

In May, when OpenAI first demonstrated ChatGPT-4o’s coming audio conversation capabilities, I wrote that it felt like we were “on the verge of something… like a sea change in how we think of and work with large language models.” Now that those “Advanced Voice” features are rolling out widely to ChatGPT subscribers, we decided to ask ChatGPT to explain, in its own voice, how this new method of interaction might impact our collective relationship with large language models.

That chat, which you can listen to and read a transcript of below, shouldn’t be treated as an interview with an official OpenAI spokesperson or anything. Still, it serves as a fun way to offer an initial test of ChatGPT’s live conversational chops.

Our first quick chat with the ChatGPT-4o’s new “Advanced Voice” features.
Our first quick chat with the ChatGPT-4o’s new “Advanced Voice” features.

Even in this short introductory “chat,” we were impressed by the natural, dare-we-say human cadence and delivery of ChatGPT’s “savvy and relaxed” Sol voice (which reminds us a bit of ’90s Janeane Garofalo). Between ChatGPT’s ability to give quick responses—offered in in milliseconds rather than seconds—and convincing intonation, it’s incredibly easy to fool yourself into thinking you’re speaking to a conscious being rather than what is, as ChatGPT says here, “still just a computer program processing information, without real emotions or consciousness.”

Read 2 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern.

So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Strolling down memory lane

The vulnerability abused long-term conversation memory, a feature OpenAI began testing in February and made more broadly available in September. Memory with ChatGPT stores information from previous conversations and uses it as context in all future conversations. That way, the LLM can be aware of details such as a user’s age, gender, philosophical beliefs, and pretty much anything else, so those details don’t have to be inputted during each conversation.

Read 6 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge / Filmmaker James Cameron. (credit: James Cameron / Stability AI)

On Tuesday, Stability AI announced that renowned filmmaker James Cameron—of Terminator and Skynet fame—has joined its board of directors. Stability is best known for its pioneering but highly controversial Stable Diffusion series of AI image-synthesis models, first launched in 2022, which can generate images based on text descriptions.

“I’ve spent my career seeking out emerging technologies that push the very boundaries of what’s possible, all in the service of telling incredible stories,” said Cameron in a statement. “I was at the forefront of CGI over three decades ago, and I’ve stayed on the cutting edge since. Now, the intersection of generative AI and CGI image creation is the next wave.”

Cameron is perhaps best known as the director behind blockbusters like Avatar, Titanic, and Aliens, but in AI circles, he may be most relevant for the co-creation of the character Skynet, a fictional AI system that triggers nuclear Armageddon and dominates humanity in the Terminator media franchise. Similar fears of AI taking over the world have since jumped into reality and recently sparked attempts to regulate existential risk from AI systems through measures like SB-1047 in California.

Read 10 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty)

Broadcom is accusing AT&T of trying to “rewind the clock and force” Broadcom “to sell support services for perpetual software licenses … that VMware has discontinued from its product line and to which AT&T has no contractual right to purchase.” The statement comes from legal documents Broadcom filed in response to AT&T’s lawsuit against Broadcom for refusing to renew support for its VMware perpetual licenses [PDF].

On August 29, AT&T filed a lawsuit [PDF] against Broadcom, alleging that Broadcom is breaking a contract by refusing to provide a one-year renewal for support for perpetually licensed VMware software. Broadcom famously ended perpetual VMware license sales shortly after closing its acquisition in favor of a subscription model featuring about two bundles of products rather than many SKUs.

AT&T claims its VMware contract (forged before Broadcom’s acquisition closed in November) entitles it to three one-year renewals of perpetual license support, and it’s currently trying to enact the second one. AT&T says it uses VMware products to run 75,000 virtual machines (VMs) across about 8,600 servers. The VMs are for supporting customer services operations and operations management efficiency, per AT&T. AT&T is asking the Supreme Court of the State of New York to stop Broadcom from ending VMware support services for AT&T and “further relief” as deemed necessary.

Read 13 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: andresr via Getty Images)

On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled “The Intelligence Age.” The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.

“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there,” he wrote.

OpenAI’s current goal is to create AGI (artificial general intelligence), which is a term for hypothetical technology that could match human intelligence in performing many tasks without the need for specific training. In contrast, superintelligence surpasses AGI, and it could be seen as a hypothetical level of machine intelligence that can dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree.

Read 13 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images)

Five years ago, researchers made a grim discovery—a legitimate Android app in the Google Play market that was surreptitiously made malicious by a library the developers used to earn advertising revenue. With that, the app was infected with code that caused 100 million infected devices to connect to attacker-controlled servers and download secret payloads.

Now, history is repeating itself. Researchers from the same Moscow, Russia-based security firm reported Monday that they found two new apps, downloaded from Play 11 million times, that were infected with the same malware family. The researchers, from Kaspersky, believe a malicious software developer kit for integrating advertising capabilities is once again responsible.

Clever tradecraft

Software developer kits, better known as SDKs, are apps that provide developers with frameworks that can greatly speed up the app-creation process by streamlining repetitive tasks. An unverified SDK module incorporated into the apps ostensibly supported the display of ads. Behind the scenes, it provided a host of advanced methods for stealthy communication with malicious servers, where the apps would upload user data and download malicious code that could be executed and updated at any time.

Read 10 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail

Enlarge (credit: Getty Images | Juj Winn)

A pleasant female voice greets me over the phone. “Hi, I’m an assistant named Jasmine for Bodega,” the voice says. “How can I help?”

“Do you have patio seating,” I ask. Jasmine sounds a little sad as she tells me that unfortunately, the San Francisco–based Vietnamese restaurant doesn’t have outdoor seating. But her sadness isn’t the result of her having a bad day. Rather, her tone is a feature, a setting.

Jasmine is a member of a new, growing clan: the AI voice restaurant host. If you recently called up a restaurant in New York City, Miami, Atlanta, or San Francisco, chances are you have spoken to one of Jasmine’s polite, calculated competitors.  

Read 11 remaining paragraphs | Comments

0 comment
0 FacebookTwitterPinterestEmail