
Researchers have unearthed never-before-seen wiper malware tied to the Kremlin and to an operation two years ago that took out more than 10,000 satellite modems, located mainly in Ukraine, on the eve of Russia’s invasion of its neighbor.

AcidPour, as researchers from security firm SentinelOne have named the new malware, has stark similarities to AcidRain, a wiper discovered in March 2022 that Viasat has confirmed was used in the attack on its modems earlier that month. Wipers are malicious applications designed to destroy stored data or render devices inoperable. Viasat said AcidRain was installed on more than 10,000 Eutelsat KA-SAT modems used by the broadband provider seven days before the wiper’s discovery in March 2022. AcidRain was installed on the devices after attackers gained access to the company’s private network.

SentinelOne, which also discovered AcidRain, said at the time that the earlier wiper had enough technical overlaps with malware the US government attributed to the Russian government in 2018 to make it likely that AcidRain and the 2018 malware, known as VPNFilter, were closely linked to the same team of developers. In turn, SentinelOne’s report Thursday, which notes the similarities between AcidRain and AcidPour, provides evidence that AcidPour was also created by developers working on behalf of the Kremlin.


The United Nations building in New York. (credit: Getty Images)

On Thursday, the United Nations General Assembly unanimously consented to adopt what some call the first global resolution on AI, reports Reuters. The resolution aims to foster the protection of personal data, enhance privacy policies, ensure close monitoring of AI for potential risks, and uphold human rights. It emerged from a proposal by the United States and received backing from China and 121 other countries.

Being a nonbinding agreement and thus effectively toothless, the resolution seems broadly popular in the AI industry. On X, Microsoft Vice Chair and President Brad Smith wrote, “We fully support the @UN’s adoption of the comprehensive AI resolution. The consensus reached today marks a critical step towards establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.”

The resolution, titled “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development,” resulted from three months of negotiation, and the stakeholders involved seem pleased at the level of international cooperation. “We’re sailing in choppy waters with the fast-changing technology, which means that it’s more important than ever to steer by the light of our values,” one senior US administration official told Reuters, highlighting the significance of this “first-ever truly global consensus document on AI.”


A photo of Vernor Vinge in 2006. (credit: Raul654)

On Wednesday, author David Brin announced that Vernor Vinge, sci-fi author, former professor, and father of the technological singularity concept, died from Parkinson’s disease at age 79 on March 20, 2024, in La Jolla, California. The announcement came in a Facebook tribute where Brin wrote about Vinge’s deep love for science and writing.

“A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters, and the implications of science,” wrote Brin in his post.

As a sci-fi author, Vinge won Hugo Awards for his novels A Fire Upon the Deep (1993), A Deepness in the Sky (2000), and Rainbows End (2007). He also won Hugos for the novellas Fast Times at Fairmont High (2002) and The Cookie Monster (2004). As Mike Glyer’s File 770 blog notes, Vinge’s novella True Names (1981) is frequently cited as the first in-depth look at the concept of “cyberspace.”


Apple’s M1 chip. (credit: Daniel Acker/Bloomberg via Getty Images)

A newly discovered vulnerability baked into Apple’s M-series of chips allows attackers to extract secret keys from Macs when they perform widely used cryptographic operations, academic researchers have revealed in a paper published Thursday.

The flaw—a side channel allowing end-to-end key extractions when Apple chips run implementations of widely used cryptographic protocols—can’t be patched directly because it stems from the microarchitectural design of the silicon itself. Instead, it can only be mitigated by building defenses into third-party cryptographic software that could drastically degrade M-series performance when executing cryptographic operations, particularly on the earlier M1 and M2 generations. The vulnerability can be exploited when the targeted cryptographic operation and the malicious application with normal user system privileges run on the same CPU cluster.

Beware of hardware optimizations

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.
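
To make the mechanism concrete, below is a minimal, hypothetical C sketch of why a DMP undermines classic constant-time coding. It is not the researchers’ proof of concept and uses no Apple-specific interfaces; the variable names and values are invented purely for illustration.

```c
/*
 * Toy conceptual sketch only: this is NOT the researchers' attack code and
 * uses no Apple-specific interfaces. It illustrates the class of leak
 * described above: a data memory-dependent prefetcher (DMP) inspects the
 * values a program loads or stores, and if a value looks like a pointer it
 * may prefetch the memory that value "points" to, altering cache state even
 * though the program never dereferences it.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical stand-in for one word of secret key material. */
    uint64_t secret = 0x2a;

    /* A cryptographic routine may store intermediate values that mix in the
     * secret. Constant-time discipline guarantees the addresses it touches
     * are secret-independent, but says nothing about whether the stored
     * values happen to resemble valid pointers. */
    uint64_t intermediate = 0x0000000100000000ULL ^ (secret << 12);

    /* On a DMP-equipped core, if `intermediate` looks like a pointer, the
     * prefetcher may pull that address into the cache. A co-located process
     * on the same CPU cluster could then observe the cache state (for
     * example, via timing) and infer bits of the secret. */
    printf("secret-dependent intermediate value: 0x%016llx\n",
           (unsigned long long)intermediate);
    return 0;
}
```

The sketch captures the design tension the researchers describe: constant-time code keeps memory addresses independent of secrets, but a DMP also reacts to the values that code stores, which is why the mitigations mentioned above live in third-party cryptographic software and can carry a performance cost.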



When OpenAI launched its GPT-4 AI model a year ago, it created a wave of immense hype and existential panic with its ability to imitate human communication and composition. Since then, the biggest question in AI has remained the same: When is GPT-5 coming out? During interviews and media appearances around the world, OpenAI CEO Sam Altman frequently gets asked this question, and he usually gives a coy or evasive answer, sometimes coupled with promises of amazing things to come.

According to a new report from Business Insider, OpenAI is expected to release GPT-5, an improved version of the AI language model that powers ChatGPT, sometime in mid-2024—and likely during the summer. Two anonymous sources familiar with the company have revealed that some enterprise customers have recently received demos of GPT-5 and related enhancements to ChatGPT.

One CEO who recently saw a version of GPT-5 described it as “really good” and “materially better,” with OpenAI tailoring the demonstration to use cases and data unique to his company. The CEO also hinted at other unreleased capabilities of the model, such as the ability to launch AI agents being developed by OpenAI to perform tasks automatically.


An illustration of a humanoid robot created by Nvidia. (credit: Nvidia)

In sci-fi films, the rise of humanlike artificial intelligence often comes hand in hand with a physical platform, such as an android or robot. While the most advanced AI language models so far seem mostly like disembodied voices echoing from an anonymous data center, they might not remain that way for long. Companies such as Google, Figure, Microsoft, Tesla, and Boston Dynamics are working toward giving AI models a body. This is called “embodiment,” and AI chipmaker Nvidia wants to accelerate the process.

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Nvidia CEO Jensen Huang in a statement. Huang spent a portion of Nvidia’s annual GTC conference keynote on Monday going over Nvidia’s robotics efforts. “The next generation of robotics will likely be humanoid robotics,” Huang said. “We now have the necessary technology to imagine generalized human robotics.”

To that end, Nvidia announced Project GR00T, a general-purpose foundation model for humanoid robots. As a type of AI model itself, Nvidia hopes GR00T (which stands for “Generalist Robot 00 Technology” but sounds a lot like the famous Marvel character) will serve as an AI mind for robots, enabling them to learn skills and solve various tasks on the fly. In a tweet, Nvidia researcher Linxi “Jim” Fan called the project “our moonshot to solve embodied AGI in the physical world.”


A pit stop during the Bahrain Formula One Grand Prix in early March evokes how the team’s manager was feeling when looking at the Excel sheet that managed the car’s build components. (credit: ALI HAIDER/POOL/AFP via Getty Images)

There’s a new boss at a storied 47-year-old Formula 1 team, and he’s eager to shake things up. He’s been saying that the team is far behind its competition in technology and coordination. And Excel is a big part of it.

In early 2023, Williams team principal James Vowles and chief technical officer Pat Fry began reworking the F1 team’s systems for designing and building its car. It would be painful, but the pain would keep the team from falling even further behind. As they started figuring out new processes and systems, they encountered what they considered a core issue: Microsoft Excel.

The Williams car build workbook, with roughly 20,000 individual parts, was “a joke,” Vowles recently told The Race. “Impossible to navigate and impossible to update.” This colossal Excel file lacked information on how much each of those parts cost and the time it took to produce them, along with whether the parts were already on order. Prioritizing one car section over another, from manufacture through inspection, was impossible, Vowles suggested.


Aerial view of a sewage treatment plant. (credit: Getty Images)

The Biden administration on Tuesday warned the nation’s governors that drinking water and wastewater utilities in their states are facing “disabling cyberattacks” by hostile foreign nations that are targeting mission-critical plant operations.

“Disabling cyberattacks are striking water and wastewater systems throughout the United States,” Jake Sullivan, assistant to the President for National Security Affairs, and Michael S. Regan, administrator of the Environmental Protection Agency, wrote in a letter. “These attacks have the potential to disrupt the critical lifeline of clean and safe drinking water, as well as impose significant costs on affected communities.”

The letter cited two recent hacking threats water utilities have faced from groups backed by hostile foreign countries. One incident occurred when hackers backed by the government of Iran disabled operations gear in water facilities that still relied on a publicly known default administrator password. The letter didn’t identify the facility by name, but details included in a linked advisory tied the hack to one that struck the Municipal Water Authority of Aliquippa in western Pennsylvania last November. In that case, the hackers compromised a programmable logic controller made by Unitronics and made the device screen display an anti-Israeli message. Utility officials responded by temporarily shutting down a pump that provided drinking water to local townships.


The GB200 “superchip” covered with a fanciful blue explosion. (credit: Nvidia / Benj Edwards)

On Monday, Nvidia unveiled the Blackwell B200 tensor core chip—the company’s most powerful single-chip GPU, with 208 billion transistors—which Nvidia claims can reduce AI inference operating costs (such as running ChatGPT) and energy consumption by up to 25 times compared to the H100. The company also unveiled the GB200, a “superchip” that combines two B200 chips and a Grace CPU for even more performance.

The news came as part of Nvidia’s annual GTC conference, which is taking place this week at the San Jose Convention Center. Nvidia CEO Jensen Huang delivered the keynote Monday afternoon. “We need bigger GPUs,” Huang said during his keynote. The Blackwell platform will allow the training of trillion-parameter AI models that will make today’s generative AI models look rudimentary in comparison, he said. For reference, OpenAI’s GPT-3, launched in 2020, included 175 billion parameters. Parameter count is a rough indicator of AI model complexity.

Nvidia named the Blackwell architecture after David Harold Blackwell, a mathematician who specialized in game theory and statistics and was the first Black scholar inducted into the National Academy of Sciences. The platform introduces six technologies for accelerated computing, including a second-generation Transformer Engine, fifth-generation NVLink, a RAS Engine, secure AI capabilities, and a decompression engine for accelerated database queries.



On Monday, Bloomberg reported that Apple is in talks to license Google’s Gemini model to power AI features like Siri in a future iPhone software update coming later in 2024, according to people familiar with the situation. Apple has also reportedly conducted similar talks with ChatGPT maker OpenAI.

The potential integration of Google Gemini into iOS 18 could bring a range of new cloud-based (off-device) AI-powered features to Apple’s smartphone, including image creation or essay writing based on simple prompts. However, the terms and branding of the agreement have not yet been finalized, and the implementation details remain unclear. The companies are unlikely to announce any deal until Apple’s annual Worldwide Developers Conference in June.

Gemini could also bring new capabilities to Apple’s widely criticized voice assistant, Siri, which trails newer AI assistants powered by large language models (LLMs) in understanding and responding to complex questions. Rumors of Apple’s own internal frustration with Siri—and potential remedies—have been kicking around for some time. In January, 9to5Mac revealed that Apple had been conducting tests with a beta version of iOS 17.4 that used OpenAI’s ChatGPT API to power Siri.
