
Kevin Scott, CTO and EVP of AI at Microsoft, speaks onstage during Vox Media’s 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023 in Dana Point, California. (credit: Getty Images)

During an interview with Sequoia Capital’s Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) “scaling laws” will continue to drive AI progress, despite skepticism in some corners of the field that progress has leveled off. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI.

“Despite what other people think, we’re not at diminishing marginal returns on scale up,” Scott said during the interview. “And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them.”

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities, without necessarily requiring fundamental algorithmic breakthroughs.
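The 2020 OpenAI work referenced above expressed these patterns as power laws. As a rough sketch of what “improves predictably” means, the snippet below evaluates a power-law loss curve in parameter count; the constants are illustrative approximations of the kind of fits reported in that research, not authoritative values:

```python
# Sketch of a power-law scaling curve of the form explored in
# "Scaling Laws for Neural Language Models" (Kaplan et al., 2020).
# The constants n_c and alpha below are illustrative stand-ins,
# not the paper's exact fitted values.
def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted cross-entropy loss as a function of parameter count N:
    L(N) = (N_c / N) ** alpha -- loss falls smoothly as N grows."""
    return (n_c / n_params) ** alpha

# Each 10x increase in model size lowers predicted loss by a smaller
# absolute amount -- the smooth, predictable improvement the laws describe.
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {scaling_loss(n):.3f}")
```

The shape of the curve, not the particular constants, is the point: on a log-log plot the relation is a straight line, which is why researchers can extrapolate expected gains from larger training runs.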


(credit: Getty Images)

Google is making it easier for people to lock down their accounts with strong multifactor authentication by adding the option to store secure cryptographic keys in the form of passkeys rather than on physical token devices.

Google’s Advanced Protection Program, introduced in 2017, requires the strongest form of multifactor authentication (MFA). Whereas many forms of MFA rely on one-time passcodes sent through SMS or emails or generated by authenticator apps, accounts enrolled in advanced protection require MFA based on cryptographic keys stored on a secure physical device. Unlike one-time passcodes, security keys stored on physical devices are immune to credential phishing and can’t be copied or sniffed.
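The phishing immunity described above comes from the fact that a security key signs a server-issued challenge bound to the site’s origin, so a response captured on a look-alike site is useless on the real one. The sketch below illustrates that property; a real key uses asymmetric FIDO2/WebAuthn signatures, but this self-contained stand-in uses a standard-library HMAC, and all names are illustrative:

```python
# Minimal sketch of why origin-bound challenge signing resists phishing.
# A real security key signs with an asymmetric key pair (FIDO2/WebAuthn);
# this stand-in uses an HMAC so the example runs with only the stdlib.
import hashlib
import hmac
import os

device_secret = os.urandom(32)  # never leaves the "device"

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """The device signs the server's challenge *together with* the origin
    it is authenticating to, so the response is bound to that origin."""
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

challenge = os.urandom(16)  # freshly issued by the genuine site
genuine = sign_assertion(challenge, "https://accounts.google.com")
phished = sign_assertion(challenge, "https://accounts.g00gle.com")  # look-alike

# The look-alike origin produces a different assertion, so a credential
# harvested there fails verification at the real site -- unlike a
# one-time passcode, which is valid wherever the victim types it.
print(genuine != phished)
```

This origin binding is also why the keys “can’t be copied or sniffed” in any useful way: nothing that crosses the network is replayable against a different origin.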

Democratizing APP

APP, short for Advanced Protection Program, requires the key to be accompanied by a password whenever a user logs into an account on a new device. The protection prevents the types of account takeovers that allowed Kremlin-backed hackers to access the Gmail accounts of Democratic officials in 2016 and go on to leak stolen emails to interfere with the presidential election that year.


(credit: Getty Images)

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical AI system that can perform novel tasks like a human without specialized training—is currently the company’s primary goal. The pursuit of technology that could replace humans at most intellectual work drives much of the enduring hype around the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO’s public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.


An AI-generated image of “Miss AI” award winner “Kenza Layli” (left) and an unidentified AI-generated woman beside her. (credit: Kenza.Layli / Instagram)

An influencer platform called Fanvue recently announced the results of its first “Miss AI” pageant, which sought to judge AI-generated social media influencers and also doubled as a convenient publicity stunt. The “winner” is a fictional Instagram influencer from Morocco named Kenza Layli with more than 200,000 followers, but the pageant is already attracting criticism from women in the AI space.

“Yet another stepping stone on the road to objectifying women with AI,” Hugging Face AI researcher Dr. Sasha Luccioni told Ars Technica. “As a woman working in this field, I’m unsurprised but disappointed.”

Instances of AI-generated Instagram influencers have reportedly been on the rise since freely available image synthesis tools like Stable Diffusion have made it easy to generate an unlimited quantity of provocative images of women on demand. And techniques like Dreambooth allow fine-tuning an AI model on a specific subject (including an AI-generated one) to place it in different settings.



More than 1.5 million email servers are vulnerable to attacks that can deliver executable attachments to user accounts, security researchers said.

The servers run versions of the Exim mail transfer agent affected by a critical vulnerability that came to light 10 days ago. Tracked as CVE-2024-39929 and carrying a severity rating of 9.1 out of 10, the flaw makes it trivial for threat actors to bypass protections that normally prevent the delivery of attachments that install apps or execute code. Such protections are a first line of defense against malicious emails designed to install malware on end-user devices.

A serious security issue

“I can confirm this bug,” Exim project team member Heiko Schlittermann wrote on a bug-tracking site. “It looks like a serious security issue to me.”


(credit: Getty Images)

On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.

“As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime,” wrote Goodarzi in a blog post on Intuit’s website. “This is truly an extraordinary time – AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist.”

The CEO says Intuit is in a position of strength and that the layoffs are not about cost-cutting, but will instead allow the company to “allocate additional investments to our most critical areas to support our customers and drive growth.” With the new hires, the company expects its overall headcount to grow in its 2025 fiscal year.


(credit: Getty Images)

Threat actors carried out zero-day attacks that targeted Windows users with malware for more than a year before Microsoft fixed the vulnerability that made them possible, researchers said Tuesday.

The vulnerability, present in both Windows 10 and 11, causes devices to open Internet Explorer, the legacy browser that Microsoft decommissioned in 2022 after its aging code base made it increasingly susceptible to exploits. Since then, Microsoft has made it difficult, if not impossible, to open the browser, first introduced in the mid-1990s, through normal means.

Tricks old and new

Malicious code that exploits the vulnerability dates back to at least January 2023 and was circulating as recently as May this year, according to the researchers who discovered the vulnerability and reported it to Microsoft. The company fixed the vulnerability, tracked as CVE-2024-38112, on Tuesday as part of its monthly patch release program. The vulnerability, which resided in the MSHTML engine of Windows, carried a severity rating of 7.0 out of 10.


(credit: Akos Stiller/Bloomberg via Getty Images)

AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.

California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.

“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.


(credit: Benj Edwards / OpenAI / Microsoft)

Microsoft has withdrawn from its non-voting observer role on OpenAI’s board, while Apple has opted not to take a similar position, Axios and the Financial Times report. The ChatGPT maker plans to update its business partners and investors through regular meetings instead of board representation. The development comes as regulators in the EU and US increase their scrutiny of Big Tech’s investments in AI startups.

Axios reports that on Tuesday, Microsoft’s deputy general counsel, Keith Dolliver, sent a letter to OpenAI stating that the tech giant’s board role was “no longer necessary” given the “significant progress” made by the newly formed board. Microsoft had accepted a non-voting position on OpenAI’s board in November following the ouster and reinstatement of OpenAI CEO Sam Altman.

Last week, Bloomberg reported that Apple’s Phil Schiller, who leads the App Store and Apple Events, might join OpenAI’s board in an observer role as part of an AI deal. However, the Financial Times now reports that Apple will not take up such a position, citing a person with direct knowledge of the matter. Apple did not immediately respond to our request for comment.


Ukrainian President Volodymyr Zelensky speaks to the media at the 2024 Ukraine Recovery Conference on June 11, 2024 in Berlin. (credit: Sean Gallup/Getty Images)

In the space of 24 hours, a piece of Russian disinformation about Ukrainian president Volodymyr Zelensky’s wife buying a Bugatti car with American aid money traveled at warp speed across the internet. Though it originated from an unknown French website, it quickly became a trending topic on X and the top result on Google.

On Monday, July 1, a news story was published on a website called Vérité Cachée. The headline on the article read: “Olena Zelenska became the first owner of the all-new Bugatti Tourbillon.” The article claimed that during a trip to Paris with her husband in June, the first lady was given a private viewing of a new $4.8 million supercar from Bugatti and immediately placed an order. It also included a video of a man who claimed to work at the dealership.

But the video, like the website itself, was completely fake.
