Former US President Donald Trump during a campaign event at Trump National Doral Golf Club in Miami, Florida, US, on Tuesday, July 9, 2024. (credit: Getty Images)

Allies of former President Donald Trump have drafted a sweeping AI executive order that aims to boost military technology and reduce regulations on AI development, The Washington Post reported. The plan, which includes a section titled “Make America First in AI,” signals a dramatic potential shift in AI policy if Trump returns to the White House in 2025.

The draft order, obtained by the Post, outlines a series of “Manhattan Projects” to advance military AI capabilities. It calls for an immediate review of what it terms “unnecessary and burdensome regulations” on AI development. The approach contrasts with the Biden administration’s executive order from last October, which imposed new safety testing requirements on advanced AI systems.

The proposed order suggests creating “industry-led” agencies to evaluate AI models and safeguard systems from foreign threats. This approach would likely benefit tech companies already collaborating with the Pentagon on AI projects, such as Palantir, Anduril, and Scale AI. Executives from these firms have reportedly expressed support for Trump.

Rite Aid logo displayed at one of its stores. (credit: Getty Images)

Rite Aid, the third-biggest US drugstore chain, said that more than 2.2 million of its customers were swept up in a data breach that exposed personal information, including driver’s license numbers, addresses, and dates of birth.

The company said in mandatory filings with the attorneys general of states including Maine, Massachusetts, Vermont, and Oregon that the stolen data was associated with purchases or attempted purchases of retail products made between June 6, 2017, and July 30, 2018. The stolen data included the purchaser’s name, address, date of birth, and driver’s license number or other form of government-issued ID. No Social Security numbers, financial information, or patient information was included.

“On June 6, 2024, an unknown third party impersonated a company employee to compromise their business credentials and gain access to certain business systems,” the filing stated. “We detected the incident within 12 hours and immediately launched an internal investigation to terminate the unauthorized access, remediate affected systems and ascertain if any customer data was impacted.”

On Tuesday, former OpenAI researcher Andrej Karpathy announced the formation of a new AI learning platform called Eureka Labs. The venture aims to create an “AI native” educational experience, with its first offering focused on teaching students how to build their own large language model (LLM).

“It’s still early days but I wanted to announce the company so that I can build publicly instead of keeping a secret that isn’t,” Karpathy wrote on X.

While the idea of using AI in education isn’t particularly new, Karpathy’s approach pairs expert-designed course materials with an AI-powered teaching assistant built on an LLM, aiming to provide personalized guidance at scale and make high-quality education more accessible to a global audience.
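
Eureka Labs has not published implementation details, so the snippet below is purely illustrative of the general pattern Karpathy describes: expert-written course material supplied as grounding for an LLM-driven assistant. The course excerpt, prompt, and model name are placeholders, and the OpenAI Python client is used only as an example backend.

```python
# Hypothetical sketch of the "expert material + LLM teaching assistant" pattern.
# Nothing here reflects Eureka Labs' actual implementation; the course text,
# prompts, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COURSE_EXCERPT = """
Tokenization splits raw text into integer IDs the model can embed.
Byte-pair encoding (BPE) builds the vocabulary by repeatedly merging
the most frequent adjacent symbol pairs in a training corpus.
"""  # stands in for expert-designed course material

def ask_teaching_assistant(student_question: str) -> str:
    """Answer a student question, grounded in the supplied course excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a patient teaching assistant for a course on "
                        "building LLMs. Base your answers on the course material "
                        f"below and encourage experimentation.\n\n{COURSE_EXCERPT}"},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_teaching_assistant("Why does BPE merge frequent pairs first?"))
```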

Researchers have determined that two fake AWS packages downloaded hundreds of times from the open source NPM JavaScript repository contained carefully concealed code that backdoored developers’ computers when executed.

The packages—img-aws-s3-object-multipart-copy and legacyaws-s3-object-multipart-copy—masqueraded as aws-s3-object-multipart-copy, a legitimate JavaScript library for copying files using Amazon’s S3 cloud service. The fake packages included all the code found in the legitimate library but added an additional JavaScript file named loadformat.js. That file provided what appeared to be benign code and three JPG images that were processed during package installation. One of those images contained code fragments that, when reconstructed, formed code for backdooring the developer’s device.
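
The exact reconstruction method isn’t described in the excerpt above, but a common way to smuggle script content inside an otherwise valid image is to tuck it after the JPEG end-of-image marker. The sketch below is a generic defensive check for that pattern, not a reproduction of this malware; the files to scan are supplied on the command line.

```python
# Illustrative check for one common image-smuggling trick: extra bytes appended
# after a JPEG's end-of-image (EOI) marker, 0xFFD9. This is a generic heuristic,
# not a reconstruction of the specific technique used in these packages.
import sys
from pathlib import Path

JPEG_SOI = b"\xff\xd8"  # start-of-image marker
JPEG_EOI = b"\xff\xd9"  # end-of-image marker

def trailing_bytes_after_eoi(path: Path) -> bytes:
    """Return any data that follows the last EOI marker in a JPEG file."""
    data = path.read_bytes()
    if not data.startswith(JPEG_SOI):
        raise ValueError(f"{path} does not look like a JPEG")
    eoi_index = data.rfind(JPEG_EOI)
    if eoi_index == -1:
        raise ValueError(f"{path} has no EOI marker")
    return data[eoi_index + len(JPEG_EOI):]

if __name__ == "__main__":
    for name in sys.argv[1:]:
        extra = trailing_bytes_after_eoi(Path(name))
        if extra.strip():
            print(f"{name}: {len(extra)} unexpected bytes after EOI, inspect manually")
        else:
            print(f"{name}: no trailing data")
```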

Growing sophistication

“We have reported these packages for removal, however the malicious packages remained available on npm for nearly two days,” researchers from Phylum, the security firm that spotted the packages, wrote. “This is worrying as it implies that most systems are unable to detect and promptly report on these packages, leaving developers vulnerable to attack for longer periods of time.”

Kevin Scott, CTO and EVP of AI at Microsoft, speaks onstage during Vox Media’s 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023 in Dana Point, California. (credit: Getty Images)

During an interview on Sequoia Capital’s Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) “scaling laws” will continue to drive AI progress, despite arguments from some in the field that progress has leveled off. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI.

“Despite what other people think, we’re not at diminishing marginal returns on scale up,” Scott said during the interview. “And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them.”

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities, without necessarily requiring fundamental algorithmic breakthroughs.
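
For a sense of what those patterns look like, the 2020 work fit test loss to simple power laws in model size, training data, and compute. The sketch below plugs approximate constants from that paper into the model-size law purely to show the shape of the curve; the numbers are illustrative, not a prediction about any current system.

```python
# Rough illustration of the parameter-count scaling law from the 2020 OpenAI
# paper: loss falls as a power law in model size, L(N) ~ (N_c / N) ** alpha_N.
# The constants below are approximate values reported in that paper and are
# used here only to show the shape of the curve.
ALPHA_N = 0.076  # approximate power-law exponent for model size
N_C = 8.8e13     # approximate critical (non-embedding) parameter count

def loss_from_params(n_params: float) -> float:
    """Predicted cross-entropy loss (nats/token) for a model of n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

if __name__ == "__main__":
    for n in (1e8, 1e9, 1e10, 1e11, 1e12):
        print(f"{n:>8.0e} params -> predicted loss ~ {loss_from_params(n):.3f}")
    # Each 10x jump in parameters shaves loss by the same multiplicative factor
    # (10 ** -0.076, roughly 16 percent): returns shrink in absolute terms but
    # never reach zero, which is the crux of the debate Scott is weighing in on.
```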

Google is making it easier for people to lock down their accounts with strong multifactor authentication by adding the option to store secure cryptographic keys in the form of passkeys rather than on physical token devices.

Google’s Advanced Protection Program, introduced in 2017, requires the strongest form of multifactor authentication (MFA). Whereas many forms of MFA rely on one-time passcodes sent through SMS or emails or generated by authenticator apps, accounts enrolled in advanced protection require MFA based on cryptographic keys stored on a secure physical device. Unlike one-time passcodes, security keys stored on physical devices are immune to credential phishing and can’t be copied or sniffed.
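
The reason such keys resist phishing is that the device signs the server’s challenge together with the origin that actually requested it, so a signature coaxed out of a user on a look-alike domain fails verification on the real site. The toy sketch below illustrates that idea with an Ed25519 key and Python’s cryptography library; it is a simplification, not the actual FIDO2/WebAuthn message format.

```python
# Toy sketch of origin-bound challenge signing: the authenticator signs the
# challenge plus the origin it saw, so a phished signature is useless elsewhere.
# Simplified illustration only, not the real FIDO2/WebAuthn protocol.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the key pair lives on the security key / passkey device.
device_key = Ed25519PrivateKey.generate()
server_copy_of_public_key = device_key.public_key()

def device_sign(challenge: bytes, origin: str) -> bytes:
    """The authenticator signs the challenge plus the origin it actually saw."""
    return device_key.sign(challenge + origin.encode())

def server_verify(signature: bytes, challenge: bytes, expected_origin: str) -> bool:
    """The real site verifies against the origin it expects."""
    try:
        server_copy_of_public_key.verify(signature, challenge + expected_origin.encode())
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)
good = device_sign(challenge, "https://accounts.google.com")
phished = device_sign(challenge, "https://accounts-google.example")  # look-alike site

print(server_verify(good, challenge, "https://accounts.google.com"))     # True
print(server_verify(phished, challenge, "https://accounts.google.com"))  # False
```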

Democratizing APP

APP, short for Advanced Protection Program, requires the key to be accompanied by a password whenever a user logs into an account on a new device. The protection prevents the types of account takeovers that allowed Kremlin-backed hackers to access the Gmail accounts of Democratic officials in 2016 and go on to leak stolen emails to interfere with the presidential election that year.

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical AI system that can perform novel tasks like a human without specialized training—is currently the company’s primary goal. The pursuit of technology that could replace humans at most intellectual work drives much of the enduring hype around the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO’s public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

An AI-generated image of “Miss AI” award winner “Kenza Layli” (left) and an unidentified AI-generated woman beside her. (credit: Kenza.Layli / Instagram)

An influencer platform called Fanvue recently announced the results of its first “Miss AI” pageant, which sought to judge AI-generated social media influencers and also doubled as a convenient publicity stunt. The “winner” is a fictional Instagram influencer from Morocco named Kenza Layli with more than 200,000 followers, but the pageant is already attracting criticism from women in the AI space.

“Yet another stepping stone on the road to objectifying women with AI,” Hugging Face AI researcher Dr. Sasha Luccioni told Ars Technica. “As a woman working in this field, I’m unsurprised but disappointed.”

Instances of AI-generated Instagram influencers have reportedly been on the rise since freely available image synthesis tools like Stable Diffusion have made it easy to generate an unlimited quantity of provocative images of women on demand. And techniques like Dreambooth allow fine-tuning an AI model on a specific subject (including an AI-generated one) to place it in different settings.
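
To get a sense of how low that barrier is, the snippet below generates a single image with the open source diffusers library. It is a generic example with a placeholder checkpoint and prompt, and has no connection to Fanvue’s pipeline or any pageant entrant.

```python
# Minimal text-to-image example with the diffusers library, shown only to
# illustrate how little code open image-synthesis tools require. The model ID
# and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a commonly used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

image = pipe("studio portrait of a fictional social media influencer").images[0]
image.save("generated_influencer.png")
```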

More than 1.5 million email servers are vulnerable to attacks that can deliver executable attachments to user accounts, security researchers said.

The servers run versions of the Exim mail transfer agent that are affected by a critical vulnerability that came to light 10 days ago. Tracked as CVE-2024-39929 and carrying a severity rating of 9.1 out of 10, the vulnerability makes it trivial for threat actors to bypass protections that normally prevent the sending of attachments that install apps or execute code. Such protections are a first line of defense against malicious emails designed to install malware on end-user devices.
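
Those first-line defenses typically amount to rejecting mail whose attachments carry executable filename extensions. The sketch below shows that kind of check in simplified form using Python’s standard email module; it is not Exim’s implementation, and the reported bypass exploits a flaw in how Exim itself handles crafted messages rather than any weakness in this sort of blocklist.

```python
# Simplified sketch of the kind of first-line defense the article describes:
# reject mail whose attachments carry executable filename extensions. Not
# Exim's implementation; CVE-2024-39929 lets crafted messages slip past
# checks like this one inside Exim itself.
from email import policy
from email.parser import BytesParser

BLOCKED_EXTENSIONS = {".exe", ".js", ".scr", ".bat", ".com", ".vbs"}

def has_blocked_attachment(raw_message: bytes) -> bool:
    """Return True if any attachment filename ends with a blocked extension."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    for part in msg.iter_attachments():
        filename = part.get_filename() or ""
        if any(filename.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False
```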

A serious security issue

“I can confirm this bug,” Exim project team member Heiko Schlittermann wrote on a bug-tracking site. “It looks like a serious security issue to me.”

On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.

“As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime,” wrote Goodarzi in a blog post on Intuit’s website. “This is truly an extraordinary time – AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist.”

The CEO says Intuit is in a position of strength and that the layoffs are not about cutting costs but will instead allow the company to “allocate additional investments to our most critical areas to support our customers and drive growth.” With the new hires, the company expects its overall headcount to grow in its 2025 fiscal year.
