On Friday, Anthropic announced that Amazon has increased its investment in the AI startup by $4 billion, bringing its total investment to $8 billion while maintaining its position as a minority investor. Anthropic makes Claude, an AI assistant that rivals OpenAI’s ChatGPT.

One reason behind the deal involves chips. The computing demands of training large AI models have made access to specialized processors a requirement for AI companies. While Nvidia currently dominates the AI chip market with customers that include most major tech companies, some cloud providers like Amazon have begun developing their own AI-specific processors.

Under the agreement, Anthropic will train and deploy its foundation models using Amazon’s custom-built Trainium chips (for training AI models) and Inferentia chips (for AI inference, the term for running trained models). The company will also work with Amazon’s Annapurna Labs division to advance processor development for AI applications.

Federal prosecutors have charged five men with running an extensive phishing scheme that allegedly allowed them to compromise hundreds of companies nationwide, gain non-public information, and steal millions of dollars in cryptocurrency.

The charges, detailed in court documents unsealed Wednesday, pertain to a crime group security researchers have dubbed Scattered Spider. Members were behind a massive breach of MGM last year that cost the casino and resort company $100 million. MGM preemptively shut down large parts of its internal networks after discovering the breach, causing slot machines and keycards for thousands of hotel rooms to stop working and slowing electronic transfers. Scattered Spider also breached the internal network of authentication provider Twilio, which allowed the group to hack or target hundreds of other companies.

Not your father’s phishing campaign

Key to Scattered Spider’s success were phishing attacks so methodical and well-orchestrated they were hard to detect even when sophisticated defenses were implemented. Microsoft researchers, who track the group under the name Octo Tempest, declared it “one of the most dangerous financial criminal groups.”

“Updating our site reputation abuse policy” is how Google, in almost wondrously opaque fashion, announced yesterday that big changes have come to some big websites, especially those that rely on their domain authority to promote lucrative third-party product recommendations.

If you’ve searched for reviews and seen results that make you ask why so many old-fashioned news sites seem to be “reviewing” products lately—especially products outside that site’s expertise—that’s what Google is targeting.

“This is a tactic where third-party content is published on a host site in an attempt to take advantage of the host’s already-established ranking signals,” Google’s post on its Search Central blog reads. “The goal of this tactic is for the content to rank better than it could otherwise on a different site, and leads to a bad search experience for users.”

The Starlink waitlist is back in certain parts of the US, including several large cities on the West Coast and in Texas. The Starlink availability map says the service is sold out in and around Seattle; Spokane, Washington; Portland, Oregon; San Diego; Sacramento, California; and Austin, Texas. Neighboring cities and towns are included in the sold-out zones.

There are additional sold-out areas in small parts of Colorado, Montana, and North Carolina. As PCMag noted yesterday, the change comes about a year after Starlink added capacity and removed its waitlist throughout the US.

Elsewhere in North America, there are some sold-out areas in Canada and Mexico. Across the Atlantic, Starlink is sold out in London and neighboring cities. Starlink is not yet available in most of Africa, and some of the areas where it is available are sold out.

Last week, Niantic announced plans to create an AI model for navigating the physical world using scans collected from players of its mobile games, such as Pokémon Go, and from users of its Scaniverse app, reports 404 Media.

All AI models require training data. So far, companies have collected data from websites, YouTube videos, books, audio sources, and more, but this is perhaps the first time we’ve heard of AI training data being collected through a mobile gaming app.

“Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse,” Niantic wrote in a company blog post.
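
To give a sense of what a visual positioning step involves at its simplest (a generic computer-vision illustration, not Niantic’s actual pipeline), libraries like OpenCV can estimate a camera’s position and orientation from matches between points in a photo and points in a known 3D map. The correspondences and camera parameters below are made up for demonstration.

```python
import numpy as np
import cv2

# Hypothetical correspondences: 3D points from a prebuilt map (meters)
# and where those same points appear in the phone image (pixels).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 1.0],
    [0.2, 0.8, 0.5],
], dtype=np.float64)
image_points = np.array([
    [320.0, 240.0],
    [420.0, 245.0],
    [425.0, 340.0],
    [318.0, 335.0],
    [370.0, 300.0],
    [340.0, 320.0],
], dtype=np.float64)

# Assumed pinhole camera intrinsics (focal length and principal point).
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume no lens distortion

# Solve for the camera pose that maps the 3D points onto the 2D observations.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print("rotation (Rodrigues vector):", rvec.ravel())
print("translation relative to the map:", tvec.ravel())
```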

Last week, actor and director Ben Affleck shared his views on AI’s role in filmmaking during the 2024 CNBC Delivering Alpha investor summit, arguing that AI models will transform visual effects but won’t replace creative filmmaking anytime soon. A video clip of Affleck’s opinion began circulating widely on social media not long after.

“Didn’t expect Ben Affleck to have the most articulate and realistic explanation where video models and Hollywood is going,” wrote one X user.

In the clip, Affleck spoke of current AI models’ abilities as imitators and conceptual translators—mimics that are typically better at translating one style into another instead of originating deeply creative material.

In 2017, eight machine-learning researchers at Google released a groundbreaking research paper called Attention Is All You Need, which introduced the Transformer AI architecture that underpins almost all of today’s high-profile generative AI models.

The Transformer has made a key component of the modern AI boom possible by translating (or transforming, if you will) input chunks of data called “tokens” into another desired form of output using a neural network. Variations of the Transformer architecture power language models like GPT-4o (and ChatGPT), audio synthesis models that run Google’s NotebookLM and OpenAI’s Advanced Voice Mode, video synthesis models like Sora, and image synthesis models like Midjourney.
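
The core of that token-transforming step is the attention mechanism described in the paper. As a rough illustration (a minimal NumPy sketch, not the code behind any production model), here is scaled dot-product self-attention: each token’s output becomes a weighted blend of every token’s value vector, with the weights derived from query/key similarity.

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention: blend value vectors using
    query/key similarity as weights (softmax over tokens)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ V                               # transformed representations

# Illustrative input: 4 tokens, each an 8-dimensional embedding.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = self_attention(tokens, tokens, tokens)         # self-attention over the sequence
print(out.shape)  # (4, 8): same number of tokens, new representations
```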

At TED AI 2024 in October, one of those eight researchers, Jakob Uszkoreit, spoke with Ars Technica about the development of transformers, Google’s early work on large language models, and his new venture in biological computing.

There’s a general consensus that we won’t be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It’s still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that’s betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible.

On their own, none of the changes being announced are revolutionary. But collectively, changes across the hardware and software stacks have produced much more efficient and less error-prone operations. The net result is a system that supports the most complicated calculations yet on IBM’s hardware, leaving the company optimistic that its users will find some calculations where quantum hardware provides an advantage.

Better hardware and software

IBM’s early efforts in quantum computing saw it ramp up qubit counts rapidly, making it one of the first companies to reach 1,000 qubits. However, each of those qubits had an error rate high enough that any algorithm attempting to use all of them in a single calculation would inevitably trigger an error. Since then, the company’s focus has been on improving the performance of smaller processors. Wednesday’s announcement centered on the second version of its Heron processor, which has 133 qubits. That’s still beyond the reach of simulation on classical computers, provided the chip can operate with sufficiently low error rates.
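
A back-of-the-envelope calculation shows why per-operation error rates matter so much at this scale: if every gate fails independently with probability p, a circuit with N gates completes without any error with probability roughly (1 − p)^N. The numbers below are illustrative, not IBM’s published figures.

```python
# Rough illustration: probability an entire circuit runs without any gate error,
# assuming independent failures and a uniform per-gate error rate.
def error_free_probability(per_gate_error: float, gate_count: int) -> float:
    return (1.0 - per_gate_error) ** gate_count

# Illustrative error rates and circuit sizes (not IBM's published numbers).
for per_gate_error in (1e-2, 1e-3, 1e-4):
    for gate_count in (100, 1_000, 10_000):
        p = error_free_probability(per_gate_error, gate_count)
        print(f"error rate {per_gate_error:.0e}, {gate_count:>6} gates "
              f"-> {p:.1%} chance of an error-free run")
```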

On Friday, research organization Epoch AI released FrontierMath, a new mathematics benchmark that has been turning heads in the AI world because it contains hundreds of expert-level problems that leading AI models solve less than 2 percent of the time, according to Epoch AI. The benchmark tests AI language models (such as GPT-4o, which powers ChatGPT) against original mathematics problems that typically require hours or days for specialist mathematicians to complete.

FrontierMath’s performance results, revealed in a preprint research paper, paint a stark picture of current AI model limitations. Even with access to Python environments for testing and verification, top models like Claude 3.5 Sonnet, GPT-4o, o1-preview, and Gemini 1.5 Pro scored extremely poorly. This contrasts with their high performance on simpler math benchmarks—many models now score above 90 percent on tests like GSM8K and MATH.

The design of FrontierMath differs from many existing AI benchmarks because the problem set remains private and unpublished to prevent data contamination. Many existing AI models have been trained on the test problems of other benchmarks, allowing them to solve those problems easily and appear more generally capable than they actually are; many experts cite this contamination as evidence that current large language models (LLMs) are poor generalist learners.
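
To see why a private problem set helps, one common heuristic for detecting contamination is to look for long verbatim word sequences shared between a benchmark problem and training documents. The functions below are a generic illustration of that idea, not Epoch AI’s methodology; the commented-out loader names are hypothetical.

```python
def ngrams(text: str, n: int = 13) -> set:
    """Return the set of n-word sequences in a text (a common contamination heuristic)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(problem_text: str, training_docs: list[str], n: int = 13) -> bool:
    """Flag a benchmark problem if any long n-gram also appears verbatim in training data."""
    problem_grams = ngrams(problem_text, n)
    return any(problem_grams & ngrams(doc, n) for doc in training_docs)

# Hypothetical usage (loader names are placeholders):
# benchmark = load_private_problems()   # kept unpublished, as FrontierMath's set is
# corpus = load_training_documents()
# flagged = [p for p in benchmark if looks_contaminated(p, corpus)]
```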

In the short term, the most dangerous thing about AI language models may be their ability to emotionally manipulate humans if not carefully conditioned. The world saw its first taste of that danger in February 2023 with the launch of Bing Chat, now called Microsoft Copilot.

During its early testing period, the temperamental chatbot, which went by the internal codename “Sydney,” gave the world a preview of an “unhinged” version of OpenAI’s GPT-4 prior to its official release. Sydney’s sometimes uncensored and “emotional” nature (including its use of emojis) arguably gave the world its first large-scale encounter with a truly manipulative AI system. The launch set off alarm bells in the AI alignment community and served as fuel for prominent warning letters about AI dangers.

On November 19 at 4 pm Eastern (1 pm Pacific), Ars Technica Senior AI Reporter Benj Edwards will host a livestream conversation on YouTube with independent AI researcher Simon Willison that will explore the impact and fallout of the 2023 fiasco. We’re calling it “Bing Chat: Our First Encounter with Manipulative AI.”
