On Sunday, OpenAI CEO Sam Altman offered two eye-catching predictions about the near-future of artificial intelligence. In a post titled “Reflections” on his personal blog, Altman wrote, “We are now confident we know how to build AGI as we have traditionally understood it.” He added, “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

Both statements are notable coming from Altman, who has led OpenAI during the rise of mainstream generative AI products such as ChatGPT. AI agents are the latest marketing trend in the industry, letting AI models take action on a user’s behalf. However, critics of the company and Altman immediately took aim at the statements on social media.

“We are now confident that we can spin bullshit at unprecedented levels, and get away with it,” wrote frequent OpenAI critic Gary Marcus in response to Altman’s post. “So we now aspire to aim beyond that, to hype in purest sense of that word. We love our products, but we are here for the glorious next rounds of funding. With infinite funding, we can control the universe.”

As many of us celebrated the year-end holidays, a small group of researchers worked overtime tracking a startling discovery: At least 33 browser extensions hosted in Google’s Chrome Web Store, some for as long as 18 months, were surreptitiously siphoning sensitive data from roughly 2.6 million devices.

The compromises came to light with the discovery by data loss prevention service Cyberhaven that a Chrome extension used by 400,000 of its customers had been updated with code that stole their sensitive data.

‘Twas the night before Christmas

The malicious extension, version 24.10.4, was available for 31 hours, from December 25 at 1:32 AM UTC to December 26 at 2:50 AM UTC. Chrome browsers actively running the Cyberhaven extension during that window would automatically download and install the malicious code. Cyberhaven responded by issuing version 24.10.5 and, a few days later, 24.10.6.

It’s that time again, when families and friends gather and implore the more technically inclined among them to troubleshoot problems they’re having behind the device screens all around them. One of the most vexing and most common problems is logging into accounts in a way that’s both secure and reliable.

Using the same password everywhere is easy, but in an age of mass data breaches and precision-orchestrated phishing attacks, it’s also highly inadvisable. Then again, creating hundreds of unique passwords, storing them securely, and keeping them out of the hands of phishers and database hackers is hard enough for experts, let alone Uncle Charlie, who got his first smartphone only a few years ago. No wonder this problem never goes away.

Passkeys—the much-talked-about alternative to passwords that has been widely available for almost two years—were supposed to fix all that. When I wrote about passkeys two years ago, I was a big believer. I remain convinced that passkeys mount the steepest hurdle yet for phishers, SIM swappers, database plunderers, and other adversaries trying to hijack accounts. How and why is that?

2024: The year AI drove everyone crazy

It’s been a wild year in tech thanks to the intersection between humans and artificial intelligence. 2024 brought a parade of AI oddities, mishaps, and wacky moments that inspired odd behavior from both machines and man. From AI-generated rat genitals to search engines telling people to eat rocks, this year proved that AI has been having a weird impact on the world.

Why the weirdness? If we had to guess, it may be due to the novelty of it all. Generative AI and applications built upon Transformer-based AI models are still so new that people are throwing everything at the wall to see what sticks. People have been struggling to grasp both the implications and potential applications of the new technology. Riding along with the hype, different types of AI that may end up being ill-advised, such as automated military targeting systems, have also been introduced.

It’s worth mentioning that aside from the crazy news, we also saw plenty of notable AI advances in 2024. For example, Claude 3.5 Sonnet, launched in June, held off the competition as a top model for most of the year, while OpenAI’s o1 used runtime compute to expand GPT-4o’s capabilities with simulated reasoning. Advanced Voice Mode and NotebookLM also emerged as novel applications of AI tech, and the year saw the rise of more capable music synthesis models and better AI video generators, including several from China.

Health care company Ascension lost sensitive data for nearly 5.6 million individuals in a cyberattack that was attributed to a notorious ransomware gang, according to documents filed with the attorney general of Maine.

Ascension owns 140 hospitals and scores of assisted living facilities. In May, the organization was hit with an attack that caused mass disruptions as staff were forced to fall back on manual processes, leading to errors, delayed or lost lab results, and ambulances being diverted to other hospitals. Ascension managed to restore most services by mid-June. At the time, the company said the attackers had stolen protected health information and personally identifiable information for an undisclosed number of people.

Investigation concluded

A filing Ascension made earlier in December revealed that nearly 5.6 million people were affected by the breach. Data stolen depended on the particular person but included individuals’ names and medical information (e.g., medical record numbers, dates of service, types of lab tests, or procedure codes), payment information (e.g., credit card information or bank account numbers), insurance information (e.g., Medicaid/Medicare ID, policy number, or insurance claim), government identification (e.g., Social Security numbers, tax identification numbers, driver’s license numbers, or passport numbers), and other personal information (such as date of birth or address).

Over the past 12 business days, OpenAI has announced a new product or demoed an AI feature every weekday, calling the PR event “12 days of OpenAI.” We’ve covered some of the major announcements, but a day-by-day recap may be useful for anyone seeking a comprehensive look at each day’s developments.

The timing and rapid pace of these announcements—particularly in light of Google’s competing releases—illustrates the intensifying competition in AI development. What might normally have been spread across months was compressed into just 12 business days, giving users and developers a lot to process as they head into 2025.

Humorously, we asked ChatGPT what it thought about the whole series of announcements, and it was skeptical that the event even took place. “The rapid-fire announcements over 12 days seem plausible,” wrote ChatGPT-4o, “but might strain credibility without a clearer explanation of how OpenAI managed such an intense release schedule, especially given the complexity of the features.”

On Friday, during Day 12 of its “12 days of OpenAI,” OpenAI CEO Sam Altman announced the company’s latest AI “reasoning” models, o3 and o3-mini, which build upon the o1 models launched earlier this year. The company is not releasing them yet but will make them available for public safety testing and research access today.

The models use what OpenAI calls “private chain of thought,” where the model pauses to examine its internal dialog and plan ahead before responding, which you might call “simulated reasoning” (SR)—a form of AI that goes beyond basic large language models (LLMs).
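OpenAI hasn’t published how the private chain of thought is actually implemented, but the general “think first, answer second” pattern can be sketched in a few lines. In the rough Python sketch below, generate() is a hypothetical placeholder for any text-completion call, not OpenAI’s API, and the prompt wording is our own illustration:

```python
# Hypothetical sketch of a "think first, answer second" loop.
# generate() stands in for any text-completion call; it is NOT OpenAI's API.

def generate(prompt: str) -> str:
    """Stand-in for a language-model completion call."""
    raise NotImplementedError("plug in a real model call here")

def answer_with_private_reasoning(question: str) -> str:
    # Step 1: produce a private scratchpad of step-by-step reasoning.
    scratchpad = generate(
        "Think step by step about how to solve the following, but do not "
        f"give the final answer yet:\n{question}"
    )
    # Step 2: produce the user-facing answer, conditioned on that reasoning.
    final_answer = generate(
        f"Question: {question}\n"
        f"Private reasoning (never shown to the user): {scratchpad}\n"
        "Now state only the final answer."
    )
    return final_answer  # the scratchpad stays internal
```

The key idea is simply that the intermediate reasoning never reaches the user; only the final answer does.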

The company named the model family “o3” instead of “o2” to avoid potential trademark conflicts with British telecom provider O2, according to The Information. During Friday’s livestream, Altman acknowledged his company’s naming foibles, saying, “In the grand tradition of OpenAI being really, truly bad at names, it’ll be called o3.”

Over the past month, we’ve seen a rapid cadence of notable AI-related announcements and releases from both Google and OpenAI, and it’s been making the AI community’s head spin. It has also poured fuel on the fire of the OpenAI-Google rivalry, an accelerating game of one-upmanship taking place unusually close to the Christmas holiday.

“How are people surviving with the firehose of AI updates that are coming out,” wrote one user on X last Friday, which is still a hotbed of AI-related conversation. “in the last <24 hours we got gemini flash 2.0 and chatGPT with screenshare, deep research, pika 2, sora, chatGPT projects, anthropic clio, wtf it never ends.”

Rumors travel quickly in the AI world, and people in the AI industry had been expecting OpenAI to ship some major products in December. Once OpenAI announced “12 days of OpenAI” earlier this month, Google jumped into gear and seemingly decided to try to one-up its rival on several counts. So far, the strategy appears to be working, but it’s coming at the cost of the rest of the world being able to absorb the implications of the new releases.

It’s been a really busy month for Google as it apparently endeavors to outshine OpenAI with a blitz of AI releases. On Thursday, Google dropped its latest party trick: Gemini 2.0 Flash Thinking Experimental, a new AI model that uses runtime “reasoning” techniques similar to OpenAI’s o1 to achieve “deeper thinking” on problems fed into it.

The experimental model builds on Google’s newly released Gemini 2.0 Flash and runs on its AI Studio platform, but early tests conducted by TechCrunch reporter Kyle Wiggers reveal accuracy issues with some basic tasks, such as incorrectly stating that the word “strawberry” contains two R’s.

These so-called reasoning models differ from standard AI models by incorporating self-checking feedback loops, similar to techniques we first saw in early 2023 with hobbyist projects like “Baby AGI.” The process requires more computing time, often adding extra seconds or minutes to response times. Companies have turned to reasoning models as traditional training-time scaling methods have shown diminishing returns.
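As a loose illustration of that self-checking idea (not a description of how o1 or Gemini 2.0 Flash Thinking actually work), here is a minimal generate-critique-revise loop; generate() is again a hypothetical stand-in for a model call:

```python
# Hypothetical sketch of a self-checking feedback loop, in the spirit of
# hobbyist projects like "Baby AGI". generate() is a placeholder, not a real API.

def generate(prompt: str) -> str:
    """Stand-in for a language-model completion call."""
    raise NotImplementedError("plug in a real model call here")

def answer_with_self_checking(question: str, max_rounds: int = 3) -> str:
    draft = generate(f"Answer this question:\n{question}")
    for _ in range(max_rounds):
        # Ask the model to critique its own draft answer.
        critique = generate(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any mistakes in the draft, or reply OK if it is correct."
        )
        if critique.strip().upper() == "OK":
            break
        # Revise using the critique before trying again.
        draft = generate(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nWrite a corrected answer."
        )
    return draft
```

Each extra pass through the loop is additional inference compute, which is why responses from these models take longer to arrive.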

Another company has publicly cut ties with Broadcom’s VMware. This time, it’s Ingram Micro, one of the world’s biggest IT distributors. The announcement comes as Broadcom eyes services as a key part of maintaining VMware business in 2025. But even as some customers are reducing reliance on VMware, its trillion-dollar owner is laughing all the way to the bank.

IT distributor severs VMware ties

Ingram is reducing its Broadcom-related business to “limited engagement with VMware in select regions,” a spokesperson told The Register this week.

“We were unable to reach an agreement with Broadcom that would help our customers deliver the best technology outcomes now and in the future while providing an appropriate shareholder return,” the spokesperson said.
