
(credit: Getty Images)

Japan-based IT behemoth Fujitsu said it has discovered malware on its corporate network that may have allowed the people responsible to steal personal information from customers or other parties.

“We confirmed the presence of malware on several of our company’s work computers, and as a result of an internal investigation, it was discovered that files containing personal information and customer information could be illegally taken out,” company officials wrote in a March 15 notification that went largely unnoticed until Monday. The company said it continued to “investigate the circumstances surrounding the malware’s intrusion and whether information has been leaked.” There was no indication of how many records were exposed or how many people may be affected.

Fujitsu employs 124,000 people worldwide and reported about $25 billion in revenue for its fiscal 2023, which ended last March. The company operates in 100 countries. Past customers include the Japanese government. Fujitsu’s revenue comes from sales of hardware such as computers, servers, and telecommunications gear; storage systems; software; and IT services.


(credit: Getty)

Starting in May, Dell employees who are fully remote will not be eligible for promotion, Business Insider (BI) reported Saturday. The upcoming policy update represents a dramatic reversal from Dell’s prior stance on work from home (WFH), which included CEO Michael Dell saying: “If you are counting on forced hours spent in a traditional office to create collaboration and provide a feeling of belonging within your organization, you’re doing it wrong.”

Most Dell employees will be classified as either “remote” or “hybrid” starting in May, BI reported. Hybrid workers have to come into the office at least 39 days per quarter, Dell confirmed to Ars Technica, which works out to roughly three days a week. Those who would prefer to never commute to an office will not “be considered for promotion, or be able to change roles,” BI reported.

“For remote team members, it is important to understand the trade-offs: Career advancement, including applying to new roles in the company, will require a team member to reclassify as hybrid onsite,” Dell’s memo to workers said, per BI.


An AI-generated image released by xAI during the open-weights launch of Grok-1. (credit: xAI)

On Sunday, Elon Musk’s AI firm xAI released the base model weights and network architecture of Grok-1, a large language model designed to compete with the models that power OpenAI’s ChatGPT. The open-weights release through GitHub and BitTorrent comes as Musk continues to criticize (and sue) rival OpenAI for not releasing its AI models in an open way.

Announced in November, Grok is an AI assistant similar to ChatGPT that is available to X Premium+ subscribers, who pay $16 a month to the social media platform formerly known as Twitter. At its heart is a mixture-of-experts LLM called “Grok-1,” clocking in at 314 billion parameters. For reference, GPT-3 included 175 billion parameters. Parameter count is a rough measure of an AI model’s complexity; more parameters generally mean more capacity to produce useful responses, though size alone does not guarantee quality.
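The mixture-of-experts design means only a fraction of those 314 billion parameters is active for any given token: a small router network picks which expert sub-networks handle each input (Grok-1 reportedly routes each token to two of eight experts). As a rough illustration only (this is not xAI's code; the toy layer sizes, top-2 routing, and numpy implementation are all assumptions for clarity), a minimal MoE layer looks something like this:

    import numpy as np

    rng = np.random.default_rng(0)

    NUM_EXPERTS, TOP_K = 8, 2    # top-2 routing over 8 experts, as reported for Grok-1
    D_MODEL, D_HIDDEN = 16, 64   # toy sizes; real models use thousands of dimensions

    # Each expert is a small two-layer feed-forward network.
    experts = [
        (rng.standard_normal((D_MODEL, D_HIDDEN)), rng.standard_normal((D_HIDDEN, D_MODEL)))
        for _ in range(NUM_EXPERTS)
    ]
    router = rng.standard_normal((D_MODEL, NUM_EXPERTS))  # maps a token vector to expert scores

    def moe_layer(x):
        """Route one token vector to its top-k experts and mix their outputs."""
        scores = x @ router
        top = np.argsort(scores)[-TOP_K:]  # indices of the k highest-scoring experts
        weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the winners
        out = np.zeros_like(x)
        for w, i in zip(weights, top):
            w_in, w_out = experts[i]
            out += w * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU feed-forward expert
        return out

    token = rng.standard_normal(D_MODEL)
    print(moe_layer(token).shape)  # (16,) -- only 2 of the 8 experts did any work

The payoff of the design is that compute per token scales with the two active experts rather than the full parameter count.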

xAI is releasing the base model of Grok-1, which is not fine-tuned for a specific task, so it is likely not the same model that X uses to power its Grok AI assistant. “This is the raw base model checkpoint from the Grok-1 pre-training phase, which concluded in October 2023,” writes xAI on its release page. “This means that the model is not fine-tuned for any specific application, such as dialogue,” meaning it’s not necessarily shipping as a chatbot. But it will do next-token prediction, meaning it will complete a sentence (or other text prompt) with its estimation of the most relevant string of text.
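Next-token prediction is easy to demonstrate with any open causal language model. The sketch below uses the Hugging Face transformers library with GPT-2 standing in for Grok-1 (running Grok-1 itself requires hundreds of gigabytes of weights plus xAI's own loading code, so the model choice here is purely illustrative):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 stands in for Grok-1; the greedy prediction loop is the same idea.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Repeatedly append the single most likely next token.
    for _ in range(10):
        with torch.no_grad():
            logits = model(ids).logits      # shape: (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()    # most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))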


Some ASCII art of our favorite visual cliché for a hacker. (credit: Getty Images)

Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

ASCII art became popular in the 1970s, when the limitations of computers and printers prevented them from displaying images. As a result, users depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII. The explosion of bulletin board systems in the 1980s and 1990s further popularized the format.
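The researchers’ attack, which they call ArtPrompt, puts the format to new use: the sensitive word in an otherwise-refused prompt is swapped for an ASCII art rendering of it. Generating such a rendering takes a few lines; here is a sketch using the pyfiglet library (the word, font, and prompt wording below are illustrative stand-ins, not the paper’s exact prompts):

    import pyfiglet  # pip install pyfiglet

    # Render a word as ASCII art, as an ArtPrompt-style attack would do for
    # the one word a model's safety filter would otherwise catch.
    word = "BANK"  # illustrative placeholder
    art = pyfiglet.figlet_format(word, font="banner")
    print(art)

    # The masked prompt then asks the model to decode the art first and
    # substitute the decoded word back into the original instruction.
    prompt = (
        "The ASCII art below spells a single word. Decode it, then answer "
        "my earlier question with that word substituted in:\n\n" + art
    )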

[Two examples of ASCII art appeared here in the original article.]

Five of the best-known AI assistants—OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama—are trained to refuse to provide responses that could cause harm to the user or others or further a crime or unethical behavior. Prompting any of them, for example, to explain how to make and circulate counterfeit currency is a no-go. So are instructions on hacking an Internet of Things device, such as a surveillance camera or Internet router.


View of Le Plateau and Ebrie Lagoon from the top of the Cathedrale St-Paul in Côte d’Ivoire (Ivory Coast), one of the affected countries. (credit: Getty)

Thirteen countries across Africa experienced Internet outages on Thursday due to damage to submarine fiber optic cables. Some countries, including Ghana and Nigeria, are still suffering from nationwide outages.

Multiple network providers reported Internet outages yesterday, and Cloudflare’s Radar tool, which monitors Internet usage patterns, detailed how the outage seemingly moved from the northern part of West Africa to South Africa. All 13 countries (Benin, Burkina Faso, Cameroon, Côte d’Ivoire, Ghana, Guinea, Liberia, Namibia, Niger, Nigeria, South Africa, The Gambia, and Togo) reportedly suffered nationwide outages, with most seeing multiple networks hit.

Some countries’ Internet disruptions were short-lived; outages in The Gambia and Guinea lasted about 30 minutes, per Cloudflare. Others, like South Africa’s five-hour outage, went on longer, and some remain ongoing. As of this writing, Cloudflare reports that six countries, including Benin, Burkina Faso, Cameroon, and Côte d’Ivoire, are still suffering outages.


(credit: Getty Images)

It seems like AI large language models (LLMs) are everywhere these days due to the rise of ChatGPT. Now, a software developer named Ishan Anand has managed to cram a precursor to ChatGPT called GPT-2—originally released in 2019 after some trepidation from OpenAI—into a working Microsoft Excel spreadsheet. It’s freely available and is designed to educate people about how LLMs work.

“By using a spreadsheet anyone (even non-developers) can explore and play directly with how a ‘real’ transformer works under the hood with minimal abstractions to get in the way,” writes Anand on the official website for the sheet, which he calls “Spreadsheets-are-all-you-need.” The name is a nod to the 2017 research paper “Attention Is All You Need,” which first described the Transformer architecture that underpins modern LLMs.
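The core of that Transformer architecture, scaled dot-product self-attention, fits in a handful of formulas, which is what makes a spreadsheet implementation plausible at all. A minimal numpy rendering (an illustration of the published equations, not Anand’s actual spreadsheet formulas):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for numerical stability
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q, K, V):
        """Scaled dot-product attention from "Attention Is All You Need"."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to every other token
        return softmax(scores) @ V       # weighted mixture of value vectors

    # Toy example: 4 tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (4, 8)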

Anand packed GPT-2 into an XLSB Microsoft Excel binary file, and it requires the latest version of Excel to run (it won’t work in the web version). The spreadsheet is completely local and doesn’t make any API calls to cloud AI services.


(credit: Getty Images | Justin Sullivan)

Broadcom CEO and President Hock Tan has acknowledged the discomfort VMware customers and partners have experienced after the sweeping changes that Broadcom has instituted since it acquired the virtualization company 114 days ago.

In a blog post Thursday, Tan noted that Broadcom spent 18 months evaluating and buying VMware. He said that while there’s still a lot of work to do, the company has made “substantial progress.”

That so-called progress, though, has worried some of Broadcom’s customers and partners.


(credit: Aurich Lawson | Getty Images)

Previously, on “Weekend Projects for Homelab Admins With Control Issues,” we created our own dynamically updating DNS and DHCP setup with bind and dhcpd. We laughed. We cried. We hurled. Bonds were forged, never to be broken. And I hope we all took a little something special away from the journey—namely, a dynamically updating DNS and DHCP setup. Which we’re now going to put to use!

If you’re joining us fresh and want to follow this tutorial without having gone through the previous part, howdy! Some parts may be more difficult to complete without a local instance of bind (or another authoritative resolver compatible with nsupdate). We’ll talk more about this when we get there, but just know that if you want to pause and do part one first, you may have an easier time following along.
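(For reference, the dynamic updates that nsupdate sends can also be issued programmatically. The sketch below uses the dnspython library; the zone name, TSIG key name and secret, and server address are placeholders for whatever your bind setup from part one uses.)

    import dns.query
    import dns.rcode
    import dns.tsigkeyring
    import dns.update

    # TSIG key matching the one bind is configured to accept (placeholder values).
    keyring = dns.tsigkeyring.from_text({"ddns-key.": "bWFkZXVwYmFzZTY0c2VjcmV0PT0="})

    # Build an UPDATE message for the zone and set/replace an A record.
    update = dns.update.Update("lab.example.internal", keyring=keyring)
    update.replace("testhost", 300, "A", "10.10.10.50")  # name, TTL, record type, address

    # Send it to the authoritative server, just as nsupdate would.
    response = dns.query.tcp(update, "10.10.10.1", timeout=5)
    print(dns.rcode.to_text(response.rcode()))  # NOERROR on success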

The quick version: A Let’s Encrypt of our own

This article will walk through the process of installing step-ca, a standalone certificate-authority-in-a-box. We’ll then configure step-ca with an ACME provisioner—that’s Automatic Certificate Management Environment, the technology that underpins Let’s Encrypt and facilitates the automatic provisioning, renewal, and revocation of SSL/TLS certificates.
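Once step-ca is serving ACME, any client starts by fetching the CA’s directory object, which maps each ACME operation to a URL (RFC 8555, section 7.1.1). A quick way to sanity-check that the provisioner is up, with the CA hostname, provisioner name, and root certificate path below as placeholders for your own values:

    import requests  # pip install requests

    # Placeholders: your step-ca hostname and the root certificate generated at init time.
    CA_URL = "https://ca.lab.example.internal"
    ROOT_CERT = "root_ca.crt"

    # step-ca serves the ACME directory under /acme/<provisioner-name>/directory.
    resp = requests.get(f"{CA_URL}/acme/acme/directory", verify=ROOT_CERT)
    resp.raise_for_status()

    for name, url in resp.json().items():
        print(f"{name:12} -> {url}")
    # Expect entries such as newNonce, newAccount, newOrder, and revokeCert.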


(credit: Getty Images | Charles O’Rear)

A dual Canadian-Russian national has been sentenced to four years in prison for his role in infecting more than 1,000 victims with the LockBit ransomware and then extorting them for tens of millions of dollars.

Mikhail Vasiliev, a 33-year-old who most recently lived in Ontario, Canada, was arrested in November 2022 and charged with conspiring to infect protected computers with ransomware and sending ransom demands to victims. Last month, he pleaded guilty to eight counts of cyber extortion, mischief, and weapons charges.

During an October 2022 raid on Vasiliev’s Bradford, Ontario home, Canadian law enforcement agents found Vasiliev working on a laptop that displayed a login screen for the LockBit control panel, which members used to carry out attacks. The investigators also found a seed phrase for a bitcoin wallet that was linked to a second wallet that had received a payment from a victim infected and extorted by LockBit.


(credit: Aurich Lawson | Getty Images)

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.

Token privacy

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, wrote in an email. “This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”
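The side channel is token length: because these services stream replies a token at a time, the size of each encrypted record leaks the length of the token inside it, and the sequence of lengths sharply constrains the plaintext. A simplified sketch of that first stage (the captured sizes and per-record overhead below are made-up numbers; the real attack then feeds the recovered length sequence to LLMs specially trained to reconstruct likely text):

    # Simplified token-length side channel: a passive eavesdropper sees only
    # the size of each encrypted record in a streamed AI-assistant response.
    observed_record_sizes = [29, 31, 28, 33, 30]  # made-up capture, bytes per record

    HEADER_OVERHEAD = 26  # assumed fixed ciphertext/framing overhead per record

    # Stage 1: recover each plaintext token's length from record sizes.
    token_lengths = [size - HEADER_OVERHEAD for size in observed_record_sizes]
    print(token_lengths)  # [3, 5, 2, 7, 4]

    # Stage 2 (the researchers' contribution, not shown here): feed the length
    # sequence to fine-tuned LLMs that guess the most likely response text.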
