
(Image credit: Rob Engelaar | Getty Images)

Law enforcement agencies including the FBI and the UK’s National Crime Agency have dealt a crippling blow to LockBit, one of the world’s most prolific cybercrime gangs, whose victims include Royal Mail and Boeing.

The 11 international agencies behind “Operation Cronos” said on Tuesday that the ransomware group—many of whose members are based in Russia—had been “locked out” of its own systems. Several of the group’s key members have been arrested, indicted, or identified and its core technology seized, including hacking tools and its “dark web” homepage.

Graeme Biggar, NCA director-general, said law enforcement officers had “successfully infiltrated and fundamentally disrupted LockBit.”


(Image credit: Reddit)

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site’s content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, which is reported to be worth $60 million a year, earlier in 2024 to potential investors of an anticipated IPO, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without expressly seeking rightsholders’ permission, some tech firms have more recently begun licensing content used to train AI models similar to GPT-4 (which powers the paid version of ChatGPT). In December, for example, OpenAI signed an agreement with German publisher Axel Springer (owner of Politico and Business Insider) for access to its articles. OpenAI has previously struck deals with other organizations, including the Associated Press, and is reportedly in licensing talks with CNN, Fox, and Time, among others.


A photo of Galactic Compass running on an iPhone. (Image credit: Matt Webb / Getty Images)

On Thursday, designer Matt Webb unveiled a new iPhone app called Galactic Compass, which always points to the center of the Milky Way galaxy—no matter where Earth is positioned on our journey through the stars. The app is free and available now on the App Store.

While using Galactic Compass, you set your iPhone on a level surface, and a big green arrow on the screen points the way to the Galactic Center, which is the rotational core of the spiral galaxy all of us live in. In that center is a supermassive black hole known as Sagittarius A*, a celestial body from which no matter or light can escape. (So, in a way, the app is telling us what we should avoid.)

Truthfully, the location of the galactic core at any given moment isn’t exactly useful, practical knowledge—at least for people who aren’t James Tiberius Kirk in Star Trek V. But it may inspire a sense of awe about our place in the cosmos.
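Webb hasn’t published the app’s internals, but the underlying astronomy is simple enough to sketch: Sagittarius A* sits at an essentially fixed point on the celestial sphere, so pointing at it is a matter of converting its right ascension and declination into a local direction for the observer’s position and time. Below is a rough Python sketch under that assumption; the function name and the simplifications are our own illustration, not the app’s code.

```python
import math
import time

# J2000 coordinates of Sagittarius A* (the Galactic Center), in degrees.
SGR_A_RA = 266.41683   # right ascension
SGR_A_DEC = -29.00781  # declination

def galactic_center_altaz(lat_deg, lon_deg, unix_time=None):
    """Rough altitude/azimuth of the Galactic Center for an observer.

    lat_deg/lon_deg: observer position (longitude positive east).
    Returns (altitude_deg, azimuth_deg); azimuth is measured from north.
    Ignores refraction and precession, so it is only approximate.
    """
    if unix_time is None:
        unix_time = time.time()
    jd = unix_time / 86400.0 + 2440587.5            # Julian date
    # Greenwich mean sidereal time in degrees (simplified standard formula).
    gmst = (280.46061837 + 360.98564736629 * (jd - 2451545.0)) % 360.0
    lst = (gmst + lon_deg) % 360.0                  # local sidereal time
    ha = math.radians((lst - SGR_A_RA) % 360.0)     # hour angle of Sgr A*
    lat = math.radians(lat_deg)
    dec = math.radians(SGR_A_DEC)

    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.atan2(math.sin(ha),
                    math.cos(ha) * math.sin(lat)
                    - math.tan(dec) * math.cos(lat))
    # atan2 result is south-based; convert to the usual north-based azimuth.
    return math.degrees(alt), (math.degrees(az) + 180.0) % 360.0
```

Lay the phone flat, rotate until it faces the returned azimuth, and you’re aimed at the galactic core (possibly through the floor, when the altitude is negative).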


Snapshots from three videos generated using OpenAI’s Sora.

On Thursday, OpenAI announced Sora, a text-to-video AI model that can generate 60-second photorealistic HD videos from written descriptions. While it’s only a research preview that we have not tested, it reportedly creates synthetic video (though not audio yet) with greater fidelity and consistency than any other text-to-video model currently available. It’s also freaking people out.

“It was nice knowing you all. Please tell your grandchildren about my videos and the lengths we went to to actually record them,” wrote Wall Street Journal tech reporter Joanna Stern on X.

“This could be the ‘holy shit’ moment of AI,” wrote Tom Warren of The Verge.


(Image credit: Getty Images)

More than 1,000 Ubiquiti routers in homes and small businesses were infected with malware used by Russian-backed agents to coordinate them into a botnet for crime and spy operations, according to the Justice Department.

That malware, which worked as a botnet for the Russian hacking group Fancy Bear, was removed in January 2024 under a secret court order as part of “Operation Dying Ember,” according to the FBI’s director. It affected routers running Ubiquiti’s EdgeOS, but only those whose owners had not changed the default administrative password. Access to the routers allowed the hacking group to “conceal and otherwise enable a variety of crimes,” the DOJ claims, including spearphishing and credential-harvesting campaigns in the US and abroad.

Unlike previous attacks by Fancy Bear—which the DOJ ties to GRU Military Unit 26165, also known as APT 28, Sofacy Group, and Sednit, among other monikers—the Ubiquiti intrusion relied on known malware, Moobot. After “non-GRU cybercriminals” infected the devices, GRU agents installed “bespoke scripts and files” to connect to and repurpose them, according to the DOJ.


All shall tremble before your fully functional forward and reverse lookups! (Image credit: Aurich Lawson | Getty Images)

Here’s a short summary of the next 7,000-ish words for folks who hate the thing recipe sites do where the authors babble about their personal lives for pages and pages before getting to the cooking: This article is about how to install bind and dhcpd and tie them together into a functional dynamic DNS setup for your LAN so that DHCP clients self-register with DNS, and you always have working forward and reverse DNS lookups. This article is intended to be part one of a two-part series, and in part two, we’ll combine our bind DNS instance with an ACME-enabled LAN certificate authority and set up LetsEncrypt-style auto-renewing certificates for LAN services.
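In skeleton form, the setup the article builds looks something like the following illustrative fragment (zone names and the key are placeholders, not the article’s exact configuration): dhcpd authenticates to bind with a shared TSIG key, and bind’s zones accept updates signed by that key.

```
# /etc/bind/named.conf.local -- illustrative fragment only
key "ddns-key" {
    algorithm hmac-sha256;
    secret "PASTE-OUTPUT-OF-tsig-keygen-HERE";
};

zone "lan.example" {
    type master;
    file "/var/lib/bind/db.lan.example";
    allow-update { key "ddns-key"; };     # dhcpd registers clients here
};

zone "1.168.192.in-addr.arpa" {
    type master;
    file "/var/lib/bind/db.192.168.1";
    allow-update { key "ddns-key"; };     # reverse (PTR) records
};

# /etc/dhcp/dhcpd.conf -- matching fragment
ddns-update-style standard;
ddns-updates on;

key "ddns-key" {
    algorithm hmac-sha256;
    secret "SAME-SECRET-AS-ABOVE";
}

zone lan.example. {
    primary 127.0.0.1;
    key "ddns-key";
}

zone 1.168.192.in-addr.arpa. {
    primary 127.0.0.1;
    key "ddns-key";
}
```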

If that sounds like a fun couple of weekend projects, you’re in the right place! If you want to fast-forward to where we start installing stuff, skip down a couple of subheds to the tutorial-y bits. Now, excuse me while I babble about my personal life.

My name is Lee, and I have a problem

(Hi, Lee.)


(Image credit: Getty)

Broadcom has made a lot of changes to VMware since closing its acquisition of the company in November. On Wednesday, VMware admitted that these changes are worrying customers. With customers mulling alternatives and partners complaining, VMware is trying to do damage control and convince people that change is good.

Not surprisingly, the plea comes from a VMware marketing executive: Prashanth Shenoy, VP of product and technical marketing for the Cloud, Infrastructure, Platforms, and Solutions group at VMware. In Wednesday’s announcement, Shenoy admitted that VMware “has been all about change” since being swooped up for $61 billion. This has resulted in “many questions and concerns” as customers “evaluate how to maximize value from” VMware products.

Among these changes is the end of perpetual license sales in favor of a subscription-based business model. VMware has long relied on perpetual licensing; just a year ago, the company called it its “most renowned” model.


The Gemini 1.5 logo, released by Google. (Image credit: Google)

One week after its last major AI announcement, Google appears to have upstaged itself. Last Thursday, Google launched Gemini Ultra 1.0, which supposedly represented the best AI language model Google could muster—available as part of the renamed “Gemini” AI assistant (formerly Bard). Today, Google announced Gemini Pro 1.5, which it says “achieves comparable quality to 1.0 Ultra, while using less compute.”

Congratulations, Google, you’ve done it. You’ve undercut your own premier AI product. While Ultra 1.0 is possibly still better than Pro 1.5 (what even are we saying here), Ultra was presented as a key selling point of the “Gemini Advanced” tier of Google’s Google One subscription service. And now it’s looking a lot less advanced than it did seven days ago. All this is on top of the confusing name-shuffling Google has been doing recently. (Just to be clear—although it’s not really clarifying at all—the free version of Bard/Gemini currently uses the Pro 1.0 model. Got it?)

Google claims that Gemini 1.5 represents a new generation of LLMs that “delivers a breakthrough in long-context understanding,” and that it can process up to 1 million tokens, “achieving the longest context window of any large-scale foundation model yet.” Tokens are fragments of a word. The first part of the claim about “understanding” is contentious and subjective, but the second part is probably correct. OpenAI’s GPT-4 Turbo can reportedly handle 128,000 tokens in some circumstances, and 1 million is quite a bit more—about 700,000 words. A larger context window allows for processing longer documents and having longer conversations. (The Gemini 1.0 model family handles 32,000 tokens max.)
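The context-window comparison is simple arithmetic. A quick sketch using the article’s implied ratio of roughly 0.7 words per English token (actual ratios vary by tokenizer, language, and text):

```python
# Rough words-per-token ratio implied by "1 million tokens is about
# 700,000 words"; real tokenizers vary.
WORDS_PER_TOKEN = 0.7

context_windows = {
    "Gemini 1.0": 32_000,
    "GPT-4 Turbo": 128_000,
    "Gemini 1.5 Pro": 1_000_000,
}

for model, tokens in context_windows.items():
    words = round(tokens * WORDS_PER_TOKEN)
    print(f"{model}: {tokens:,} tokens ~ {words:,} words")
```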


(Image credit: Getty Images)

A core developer of Nginx, currently the world’s most popular web server, has quit the project, stating that he no longer sees it as “a free and open source project… for the public good.” His fork, freenginx, is “going to be run by developers, and not corporate entities,” writes Maxim Dounin, and will be “free from arbitrary corporate actions.”

Dounin is one of the earliest and still most active coders on the open source Nginx project and one of the first employees of Nginx, Inc., a company created in 2011 to commercially support the steadily growing web server. Nginx is now used on roughly one-third of the world’s web servers, ahead of Apache.

A tricky history of creation and ownership

Nginx Inc. was acquired by Seattle-based networking firm F5 in 2019. Later that year, two of Nginx’s leaders, Maxim Konovalov and Igor Sysoev, were detained and interrogated in their homes by armed Russian state agents. Sysoev’s former employer, Internet firm Rambler, claimed that it owned the rights to Nginx’s source code because it was developed during Sysoev’s tenure at Rambler (where Dounin also worked). While neither the criminal case nor the ownership claims appear to have gone anywhere, the implications of a Russian company’s intrusion into a popular open source piece of the web’s infrastructure caused some alarm.


(Image credit: Nvidia)

On Tuesday, Nvidia released Chat With RTX, a free personalized AI chatbot similar to ChatGPT that can run locally on a PC with an Nvidia RTX graphics card. It uses Mistral or Llama open-weights LLMs and can search through local files and answer questions about them.

Chat With RTX works on Windows PCs equipped with Nvidia GeForce RTX 30- or 40-series GPUs with at least 8GB of VRAM. It uses a combination of retrieval-augmented generation (RAG), Nvidia TensorRT-LLM software, and RTX acceleration to enable generative AI capabilities directly on users’ devices. This setup allows for conversations with the AI model using local files as a dataset.

“Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers,” writes Nvidia in a promotional blog post.

Using Chat With RTX, users can talk about various subjects or ask the AI model to summarize or analyze data, similar to how one might interact with ChatGPT. In particular, the Mistral 7B model has built-in conditioning to avoid certain sensitive topics (like sex and violence, of course), but users could presumably somehow plug in an uncensored AI model and discuss forbidden topics without the paternalism inherent in the censored models.

Also, the application supports a variety of file formats, including .TXT, .PDF, .DOCX, and .XML. Users can direct the tool to browse specific folders, which Chat With RTX then scans to answer queries quickly. It even allows for the incorporation of information from YouTube videos and playlists, offering a way to include external content in its database of knowledge (in the form of embeddings) without requiring an Internet connection to process queries.
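Nvidia hasn’t published the pipeline’s internals in detail, but the retrieval step of any RAG system follows the same basic shape: scan a folder for supported files, rank them against the user’s question, and paste the winners into the prompt sent to the LLM. Below is a deliberately naive, stdlib-only Python sketch of that shape; it is our own illustration, not Nvidia’s code, and crude word overlap stands in for the embedding similarity a real system would use.

```python
import os
from collections import Counter

# Extensions this toy handles; the real tool also parses .pdf and .docx,
# which require third-party parsers.
SUPPORTED = {".txt", ".xml"}

def load_corpus(folder):
    """Walk a folder and read every supported file into memory."""
    docs = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            if os.path.splitext(name)[1].lower() in SUPPORTED:
                path = os.path.join(root, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    docs[path] = f.read()
    return docs

def rank(docs, question, top_k=3):
    """Score documents by word overlap with the question (a crude
    stand-in for embedding similarity)."""
    q = Counter(question.lower().split())
    scored = []
    for path, text in docs.items():
        words = Counter(text.lower().split())
        score = sum(min(q[w], words[w]) for w in q)
        scored.append((score, path))
    scored.sort(reverse=True)
    return [path for score, path in scored[:top_k] if score > 0]

def build_prompt(docs, question):
    """Assemble the retrieved passages plus the question into one prompt."""
    context = "\n---\n".join(docs[p] for p in rank(docs, question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

A production system would replace `rank` with vector embeddings and chunk long documents, but the flow — retrieve locally, then generate — is the same, which is why no Internet connection is needed at query time.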

Rough around the edges

We downloaded and ran Chat With RTX to test it out. The download file is huge, at around 35 gigabytes, owing to the Mistral and Llama LLM weights files being included in the distribution. (“Weights” are the actual neural network files containing the values that represent data learned during the AI training process.) When installing, Chat With RTX downloads even more files, and it executes in a console window using Python with an interface that pops up in a web browser window.

Several times during our tests on an RTX 3060 with 12GB of VRAM, Chat With RTX crashed. Like many open source LLM interfaces, Chat With RTX is a mess of layered dependencies, relying on Python, CUDA, TensorRT, and others. Nvidia hasn’t cracked the code for making the installation sleek and non-brittle. It’s a rough-around-the-edges solution that feels very much like an Nvidia skin over other local LLM interfaces (such as GPT4All). Even so, it’s notable that this capability is officially coming directly from Nvidia.

On the bright side (a massive bright side), local processing emphasizes user privacy, since sensitive data never needs to be transmitted to a cloud-based service (as it does with ChatGPT). Mistral 7B feels slightly less capable than GPT-3.5 (the model behind the free version of ChatGPT), which is still remarkable for a local LLM running on a consumer GPU. It’s not a true ChatGPT replacement yet, and it can’t touch GPT-4 Turbo or Google Gemini Pro/Ultra in processing capability.

Nvidia GPU owners can download Chat With RTX for free on the Nvidia website.
