
It seems like AI large language models (LLMs) are everywhere these days due to the rise of ChatGPT. Now, a software developer named Ishan Anand has managed to cram a precursor to ChatGPT called GPT-2—originally released in 2019 after some trepidation from OpenAI—into a working Microsoft Excel spreadsheet. It’s freely available and is designed to educate people about how LLMs work.

“By using a spreadsheet anyone (even non-developers) can explore and play directly with how a ‘real’ transformer works under the hood with minimal abstractions to get in the way,” writes Anand on the official website for the sheet, which he calls “Spreadsheets-are-all-you-need.” It’s a nod to the 2017 research paper “Attention is All You Need” that first described the Transformer architecture that has been foundational to how LLMs work.

Anand packed GPT-2 into an XLSB Microsoft Excel binary file, and it requires the latest version of Excel to run (it won’t work in the web version of Excel). It’s completely local and makes no API calls to cloud AI services.
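
For a sense of what the spreadsheet reimplements cell by cell, here is a minimal sketch of running GPT-2 locally in Python with the Hugging Face transformers library (this is not part of Anand’s project; the "gpt2" checkpoint and generation settings are illustrative):

    # Minimal local GPT-2 inference with Hugging Face transformers (illustrative only;
    # Anand's spreadsheet reproduces this forward pass with Excel formulas instead).
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # smallest GPT-2 checkpoint
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Spreadsheets are all you need"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Greedy decoding of a short continuation, entirely on the local machine.
    output = model.generate(input_ids, max_new_tokens=20, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))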


Broadcom CEO and President Hock Tan has acknowledged the discomfort VMware customers and partners have experienced after the sweeping changes that Broadcom has instituted since it acquired the virtualization company 114 days ago.

In a blog post Thursday, Tan noted that Broadcom spent 18 months evaluating and buying VMware. He said that while there’s still a lot of work to do, the company has made “substantial progress.”

That so-called progress, though, has worried some of Broadcom’s customers and partners.


Previously, on “Weekend Projects for Homelab Admins With Control Issues,” we created our own dynamically updating DNS and DHCP setup with bind and dhcpd. We laughed. We cried. We hurled. Bonds were forged, never to be broken. And I hope we all took a little something special away from the journey—namely, a dynamically updating DNS and DHCP setup. Which we’re now going to put to use!

If you’re joining us fresh and want to follow this tutorial without having gone through the previous part, howdy! Some parts may be harder to complete without a local instance of bind (or another authoritative resolver compatible with nsupdate). We’ll talk more about this when we get there, but just know that if you want to pause and go do part one first, you may have an easier time following along.

The quick version: A LetsEncrypt of our own

This article will walk through the process of installing step-ca, a standalone certificate authority-in-a-box. We’ll then configure step-ca with an ACME provisioner—that’s Automatic Certificate Management Environment, the technology that underpins LetsEncrypt and facilitates the automatic provisioning, renewal, and revocation of SSL/TLS certificates.
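
As a rough illustration of what that looks like once step-ca is running, an ACME client begins by fetching the CA’s directory document to discover its endpoints. A minimal sketch in Python (the hostname, provisioner name, and root certificate path below are assumptions for illustration and will differ per deployment):

    # Sketch: fetch the ACME directory from a private step-ca instance to confirm
    # the ACME provisioner is answering. The URL and root CA path are assumed
    # values; substitute the ones from your own step-ca deployment.
    import json
    import requests

    CA_ROOT = "/etc/step/certs/root_ca.crt"                            # assumed path
    DIRECTORY_URL = "https://ca.example.internal/acme/acme/directory"  # assumed URL

    resp = requests.get(DIRECTORY_URL, verify=CA_ROOT, timeout=5)
    resp.raise_for_status()

    # ACME clients use these endpoints (newNonce, newAccount, newOrder, ...) to
    # drive issuance and renewal, the same way they would against Let's Encrypt.
    print(json.dumps(resp.json(), indent=2))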


A dual Canadian-Russian national has been sentenced to four years in prison for his role in infecting more than 1,000 victims with the LockBit ransomware and then extorting them for tens of millions of dollars.

Mikhail Vasiliev, a 33-year-old who most recently lived in Ontario, Canada, was arrested in November 2022 and charged with conspiring to infect protected computers with ransomware and sending ransom demands to victims. Last month, he pleaded guilty to eight counts of cyber extortion, mischief, and weapons charges.

During an October 2022 raid on Vasiliev’s Bradford, Ontario home, Canadian law enforcement agents found Vasiliev working on a laptop that displayed a login screen for the LockBit control panel, which members used to carry out attacks. The investigators also found a seed phrase for a bitcoin wallet address linked to a different wallet that had received a payment from a victim infected and extorted by LockBit.


AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
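
The underlying signal is straightforward: when a response is streamed token by token, the size of each encrypted packet tracks the length of the token it carries. A toy sketch of that first step in Python (the packet sizes and the fixed per-record overhead are made-up values for illustration; the actual attack then feeds the recovered length sequence to specially trained language models to reconstruct the words):

    # Toy illustration of the token-length side channel: estimate the sequence of
    # token lengths from the ciphertext sizes of a streamed response.
    # OVERHEAD is a hypothetical fixed number of bytes of framing added per record;
    # real values depend on the transport and cipher in use.
    OVERHEAD = 21

    def token_lengths(packet_sizes, overhead=OVERHEAD):
        """Map each streamed packet's payload size to an estimated token length."""
        return [size - overhead for size in packet_sizes if size > overhead]

    # Hypothetical capture in which each packet carries exactly one response token.
    captured = [24, 26, 23, 28, 22]
    print(token_lengths(captured))   # [3, 5, 2, 7, 1] -- the length "fingerprint"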

Token privacy

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, wrote in an email. “This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”


Researchers have unearthed Linux malware that circulated in the wild for at least two years before being identified as a credential stealer that’s installed by the exploitation of recently patched vulnerabilities.

The newly identified malware is a Linux variant of NerbianRAT, a remote access Trojan first described in 2022 by researchers at security firm Proofpoint. Last Friday, Checkpoint Research revealed that the Linux version has existed since at least the same year, when it was uploaded to the VirusTotal malware identification site. Checkpoint went on to conclude that Magnet Goblin—the name the security firm uses to track the financially motivated threat actor using the malware—has installed it by exploiting “1-days,” which are recently patched vulnerabilities. Attackers in this scenario reverse engineer security updates, or copy associated proof-of-concept exploits, for use against devices that have yet to install the patches.

Checkpoint also identified MiniNerbian, a smaller version of NerbianRAT for Linux that’s used to backdoor servers running the Magento ecommerce platform, primarily so they can act as command-and-control servers that NerbianRAT-infected devices connect to. Researchers elsewhere have reported encountering servers that appear to have been compromised with MiniNerbian, but Checkpoint Research appears to have been the first to identify the underlying binary.


A burglar with a flashlight and papers in a business office—exactly like scraping files from Discord. (credit: Getty Images)

On Wednesday, Midjourney banned all employees of image synthesis rival Stability AI from its service indefinitely after it detected “botnet-like” activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney’s official Discord channel.

Prompts are the written instructions (like “a cat in a car holding a can of beer”) used by generative AI models such as Midjourney and Stability AI’s Stable Diffusion 3 (SD3) to synthesize images. Having prompt and image pairs could potentially help with training or fine-tuning a rival AI image generator model.

Bot activity around midnight on March 2 caused a 24-hour outage for the commercial image generator service. Midjourney linked several paid accounts to a Stability AI data team employee trying to “grab prompt and image pairs.” Midjourney then decided to ban all Stability AI employees from the service indefinitely. It also announced a new policy: “aggressive automation or taking down the service results in banning all employees of the responsible company.”


OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco. (credit: Getty Images)

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company’s board of directors and released the results of an independent review of the events surrounding CEO Sam Altman’s surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.

The newly appointed board members are Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former EVP and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart. These additions notably bring three women to the board after OpenAI met criticism last year about the composition of its restructured board.

The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman’s abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. “WilmerHale… found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”


When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course. (credit: Getty Images)

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today’s AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel at matrix multiplication because they can perform many calculations at once. They break large matrix problems into smaller blocks and solve those blocks concurrently.
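
A rough sketch of that divide-and-conquer idea in Python with NumPy (the block size and matrix dimensions are arbitrary; real GPU kernels tile far more aggressively and execute the blocks in parallel):

    # Sketch of blocked (tiled) matrix multiplication: the full product is assembled
    # from many small block products, the same decomposition GPUs use to run pieces
    # of the work concurrently. Sizes here are arbitrary and for illustration only.
    import numpy as np

    def blocked_matmul(A, B, block=64):
        n, k = A.shape
        k2, m = B.shape
        assert k == k2, "inner dimensions must match"
        C = np.zeros((n, m))
        for i in range(0, n, block):
            for j in range(0, m, block):
                for p in range(0, k, block):
                    C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    assert np.allclose(blocked_matmul(A, B), A @ B)   # matches the direct product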


Microsoft said that Kremlin-backed hackers who breached its corporate network in January have expanded their access since then in follow-on attacks that are targeting customers and have compromised the company’s source code and internal systems.

The intrusion, which the software company disclosed in January, was carried out by Midnight Blizzard, the name used to track a hacking group widely attributed to the Federal Security Service, a Russian intelligence agency. Microsoft said at the time that Midnight Blizzard gained access to senior executives’ email accounts for months after first exploiting a weak password in a test device connected to the company’s network. Microsoft went on to say it had no indication any of its source code or production systems had been compromised.

Secrets sent in email

In an update published Friday, Microsoft said it has uncovered evidence that Midnight Blizzard has used the information it gained initially to further push into its network and compromise both source code and internal systems. The hacking group—which is tracked under multiple other names including APT29, Cozy Bear, CozyDuke, The Dukes, Dark Halo, and Nobelium—has been using the proprietary information in follow-on attacks, not only against Microsoft but also its customers.
