The Google Gemini logo. (credit: Google)

On Wednesday, Google announced Gemini, a multimodal AI model family it hopes will rival OpenAI’s GPT-4, which powers the paid version of ChatGPT. Google claims that the largest version of Gemini exceeds “current state-of-the-art results on 30 of the 32 widely used academic benchmarks used in large language model (LLM) research and development.” It’s a follow-up to PaLM 2, an earlier AI model that Google hoped would match GPT-4 in capability.

A specially tuned English version of its mid-level Gemini model is available now in over 170 countries as part of the Google Bard chatbot—although not in the EU or the UK, likely due to regulatory issues.

Like GPT-4, Gemini can handle multiple types (or “modes”) of input, making it multimodal. That means it can process text, code, images, and even audio. The goal is to make a type of artificial intelligence that can accurately solve problems, give advice, and answer questions in a variety of fields—from the mundane to the scientific. Google says this will power a new era in computing, and it hopes to tightly integrate the technology into its products.

Read 13 remaining paragraphs.


Hundreds of Windows and Linux computer models from virtually all hardware makers are vulnerable to a new attack that executes malicious firmware early in the boot-up sequence, a feat that allows infections that are nearly impossible to detect or remove using current defense mechanisms.

The attack—dubbed LogoFAIL by the researchers who devised it—is notable for the relative ease of carrying it out, the breadth of both consumer- and enterprise-grade models that are susceptible, and the high level of control it gains over them. In many cases, LogoFAIL can be remotely executed in post-exploit situations using techniques that can’t be spotted by traditional endpoint security products. And because exploits run during the earliest stages of the boot process, they are able to bypass a host of defenses, including the industry-wide UEFI Secure Boot, Intel’s Boot Guard, and similar protections from other companies that are devised to prevent so-called bootkit infections.

Game over for platform security

LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year’s worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware.
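The vulnerabilities lie in the image parsers UEFI firmware uses to render vendor logos during boot. As a rough illustration of this bug class—a hypothetical toy format and hypothetical function names, not Binarly’s actual findings—consider a parser that trusts attacker-controlled header fields: firmware that copies pixel data based on an unvalidated size field can be steered into an out-of-bounds write. A sketch of the validation such parsers historically skipped:

```python
import struct

# Toy logo format: "BM" magic, 4-byte declared size, 2-byte width,
# 2-byte height (10-byte header), then 1 byte per pixel.
HEADER = struct.Struct("<2sIHH")

def parse_logo(data: bytes) -> dict:
    """Parse a toy logo image, rejecting inconsistent header fields."""
    if len(data) < HEADER.size:
        raise ValueError("truncated header")
    magic, declared_size, width, height = HEADER.unpack_from(data)
    if magic != b"BM":
        raise ValueError("bad magic")
    # The LogoFAIL class of bug amounts to skipping checks like these:
    # a firmware parser that copies `width * height` bytes into a fixed
    # buffer while trusting attacker-supplied fields can be made to
    # write past the buffer with a crafted logo.
    if declared_size != len(data):
        raise ValueError("declared size does not match payload")
    if width * height != len(data) - HEADER.size:
        raise ValueError("dimensions inconsistent with payload")
    return {"width": width, "height": height,
            "pixels": data[HEADER.size:]}
```

In firmware the parser runs in C with fixed buffers and no memory protection to speak of, which is why a missing bounds check there yields arbitrary code execution rather than a crash.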

Read 26 remaining paragraphs.

Sam Altman, president of Y Combinator and co-chairman of OpenAI, seen here in July 2016. (credit: Drew Angerer / Getty Images News)

When Sam Altman was suddenly removed as CEO of OpenAI—before being reinstated days later—the company’s board publicly justified the move by saying Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” In the days since, there has been some reporting on potential reasons for the attempted board coup, but not much in the way of follow-up on what specific information Altman was allegedly less than “candid” about.

Now, in an in-depth piece for The New Yorker, writer Charles Duhigg—who was embedded inside OpenAI for months on a separate story—suggests that some board members found Altman “manipulative and conniving” and took particular issue with the way Altman allegedly tried to manipulate the board into firing fellow board member Helen Toner.

Board “manipulation” or “ham-fisted” maneuvering?

Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman’s negative attention by co-writing a paper on different ways AI companies can “signal” their commitment to safety through “costly” words and actions. In the paper, Toner contrasts OpenAI’s public launch of ChatGPT last year with Anthropic’s “deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype.”

Read 6 remaining paragraphs.


In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor.

In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons. AI-enabled spying, he argues, could take that kind of economically inspired monitoring to a completely new level:

“Spying and surveillance are different but related things,” Schneier writes. “If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.”

Read 10 remaining paragraphs.


On Tuesday, IBM and Meta announced the AI Alliance, an international coalition of over 50 organizations including AMD, Intel, NASA, CERN, and Harvard University that aims to advance “open innovation and open science in AI.” In other words, the goal is to collectively promote alternatives to closed AI systems currently in use by market leaders such as OpenAI and Google with ChatGPT and Duet, respectively.

In the AI Alliance news release, OpenAI isn’t mentioned by name—and OpenAI is not part of the alliance, nor is Google. But over the past year, clear battle lines have been drawn between companies like OpenAI that keep AI model weights (neural network files) and data about how the models are created to themselves and companies like Meta, which provide AI model weights for others to run on their own hardware and allow others to build derivative models based on their research.

“Open and transparent innovation is essential to empower a broad spectrum of AI researchers, builders, and adopters with the information and tools needed to harness these advancements in ways that prioritize safety, diversity, economic opportunity and benefits to all,” writes the alliance.

Read 5 remaining paragraphs.

An artist’s impression of a human and a robot talking. (credit: Getty Images | Benj Edwards)

In a preprint research paper titled “Does GPT-4 Pass the Turing Test?”, two researchers from UC San Diego pitted OpenAI’s GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could trick participants into thinking it was human with the greatest success. But along the way, the study, which has not been peer-reviewed, found that human participants correctly identified other humans in only 63 percent of the interactions—and that a 1960s computer program surpassed the AI model that powers the free version of ChatGPT.

Even with limitations and caveats, which we’ll cover below, the paper presents a thought-provoking comparison between AI model approaches and raises further questions about using the Turing test to evaluate AI model performance.

British mathematician and computer scientist Alan Turing first conceived the Turing test as “The Imitation Game” in 1950. Since then, it has become a famous but controversial benchmark for determining a machine’s ability to imitate human conversation. In modern versions of the test, a human judge typically talks to either another human or a chatbot without knowing which is which. If the judge cannot reliably tell the chatbot from the human a certain percentage of the time, the chatbot is said to have passed the test. The threshold for passing the test is subjective, so there has never been a broad consensus on what would constitute a passing success rate.
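The scoring behind such a test reduces to simple success rates: how often judges correctly identify their interlocutor, and how often a given bot is judged human relative to whatever threshold is chosen. A minimal sketch with hypothetical round data (not the study’s actual records):

```python
def judge_accuracy(rounds: list[tuple[bool, bool]]) -> float:
    """Fraction of rounds the judge identified correctly.

    Each round is (was_actually_human, judged_as_human).
    """
    correct = sum(actual == judged for actual, judged in rounds)
    return correct / len(rounds)

def bot_passes(judged_human_rate: float, threshold: float = 0.5) -> bool:
    """A bot 'passes' if judges label it human at or above the threshold.

    As noted above, the threshold itself is subjective, so the same
    judged-human rate can 'pass' under one criterion and fail another.
    """
    return judged_human_rate >= threshold
```

With this framing, the study’s headline numbers are just such rates: judges correctly identifying fellow humans only 63 percent of the time means `judge_accuracy` on human-vs-human rounds came out at 0.63, and whether any bot “passed” depends entirely on the threshold plugged into the second function.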

Read 13 remaining paragraphs.


Broadcom announced back in May 2022 that it would buy VMware for $61 billion and take on an additional $8 billion of the company’s debt, and on November 22, 2023, Broadcom said that it had completed the acquisition. It looks like Broadcom’s first big move is going to be layoffs: according to WARN notices filed with multiple states (catalogued by Channel Futures), Broadcom will be laying off at least 2,837 employees across multiple states, including 1,267 at its Palo Alto campus in California.

As Channel Futures notes, the actual number of layoffs could be higher, since not all layoffs require WARN notices. We’ve contacted Broadcom for more information about the total number of layoffs and the kinds of positions that are being affected and will update if we receive a response. VMware has around 38,300 employees worldwide.

The WARN notices list the reason for the layoffs as “economic,” but provide no further explanation or justification.

Read 4 remaining paragraphs.

An artist’s interpretation of what ChatGPT might look like if embodied in the form of a robot toy blowing out a birthday candle. (credit: Aurich Lawson | Getty Images)

One year ago today, on November 30, 2022, OpenAI released ChatGPT. Rarely has a single tech product created so much global impact in just one year.

Imagine a computer that can talk to you. Nothing new, right? Those have been around since the 1960s. But ChatGPT, the application that first brought large language models (LLMs) to a wide audience, felt different. It could compose poetry, seemingly understand the context of your questions and your conversation, and help you solve problems. Within a few months, it became the fastest-growing consumer application of all time. And it created a frenzy.

During these 365 days, ChatGPT has broadened the public perception of AI, captured imaginations, attracted critics, and stoked existential angst. It emboldened and reoriented Microsoft, made Google dance, spurred fears of AGI taking over the world, captivated world leaders, prompted attempts at government regulation, helped add words to dictionaries, inspired conferences and copycats, led to a crisis for educators, hyper-charged automated defamation, embarrassed lawyers by hallucinating, prompted lawsuits over training data, and much more.

Read 12 remaining paragraphs.


On Wednesday, OpenAI announced that Sam Altman has officially returned to the ChatGPT-maker as CEO—accompanied by Mira Murati as CTO and Greg Brockman as president—resuming their roles from before the shocking firing of Altman that threw the company into turmoil two weeks ago. Altman says the company did not lose a single employee or customer throughout the crisis.

“I have never been more excited about the future. I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry,” wrote Altman in an official OpenAI news release. “I feel so, so good about our probability of success for achieving our mission.”

In the statement, Altman formalized plans that have been underway since last week: ex-Salesforce co-CEO Bret Taylor and economist Larry Summers have officially begun their tenure on the “new initial” OpenAI board of directors. Quora CEO Adam D’Angelo is keeping his previous seat on the board. Also on Wednesday, previous board members Tasha McCauley and Helen Toner officially resigned. In addition, a representative from Microsoft (a key OpenAI investor) will have a non-voting observer role on the board of directors.

Read 8 remaining paragraphs.


In late 2020, Huawei was fighting for its survival as a mobile phone maker.

A few months earlier, the Trump administration had hit the Chinese company with crippling sanctions, cutting it off from global semiconductor supply chains.

The sanctions prevented anyone without a permit from making the chips Huawei designed, and the company was struggling to procure new chips to launch more advanced handsets.

Read 65 remaining paragraphs.