One of the world’s most active ransomware groups has adopted an unusual—if not unprecedented—tactic to pressure one of its victims to pay up: reporting the victim to the US Securities and Exchange Commission.

The pressure tactic came to light in a post published on Wednesday on the dark web site run by AlphV, a ransomware crime syndicate that’s been in operation for two years. After first claiming to have breached the network of the publicly traded digital lending company MeridianLink, AlphV officials posted a screenshot of a complaint it said it filed with the SEC through the agency’s website. Under a recently adopted rule that goes into effect next month, publicly traded companies must file an SEC disclosure within four business days of learning of a security incident that had a “material” impact on their business.

“We want to bring to your attention a concerning issue regarding MeridianLink’s compliance with the recently adopted cybersecurity incident disclosure rules,” AlphV officials wrote in the complaint. “It has come to our attention that MeridianLink, in light of a significant breach compromising customer data and operational information, has failed to file the requisite disclosure under item 1.05 of form 8-K within the stipulated four business days, as mandated by the new SEC rules.”


A shot of tldraw’s “Make it Real” in action, provided by Ashe on X: “Ok… @tldraw is super fun. I iterated through ~10 builds today and it cost me $0.90 using GPT4. The pong game is playable as described.” (credit: Ashe Oro)

On Wednesday, a collaborative whiteboard app maker called “tldraw” made waves online by releasing a prototype of a feature called “Make it Real” that lets users draw an image of software and bring it to life using AI. The feature uses OpenAI’s GPT-4V API to visually interpret a vector drawing and turn it into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even create simple implementations of games like Breakout.
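
Under the hood, the round trip is conceptually straightforward: send an image of the canvas to a vision-capable GPT-4 model and ask it to return a self-contained web page. The sketch below is a rough, hypothetical approximation of that kind of request using OpenAI’s Python client, not tldraw’s actual implementation; the prompt wording, file name, and model choice are assumptions.

```python
# Hypothetical sketch of a drawing-to-code request (not tldraw's real code).
# Assumes OPENAI_API_KEY is set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

with open("canvas_screenshot.png", "rb") as f:  # illustrative file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # the GPT-4V model available at the time
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this wireframe into a single self-contained HTML "
                         "file using Tailwind CSS and vanilla JavaScript."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
    max_tokens=4096,
)

print(response.choices[0].message.content)  # the generated HTML/JS
```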

“I think I need to go lie down,” posted designer Kevin Cannon at the start of a viral X thread that featured the creation of functioning sliders that rotate objects on screen, an interface for changing object colors, and a working game of tic-tac-toe. Soon, others followed with demonstrations of drawing a clone of Breakout, creating a working dial clock that ticks, drawing the snake game, making a Pong game, interpreting a visual state chart, and much more.

Users can experiment with a live demo of Make it Real online. However, running it requires providing an API key from OpenAI, which is a security risk. If others intercept your API key, they could use it to rack up a very large bill in your name (OpenAI charges by the amount of data moving into and out of its API). Those technically inclined can run the code locally, but it will still require OpenAI API access.


Screen capture from a demo video of an AI-generated unauthorized David Attenborough voice narrating a developer’s video feed. (credit: Charlie Holtz)

On Wednesday, Replicate developer Charlie Holtz combined GPT-4 Vision (commonly called GPT-4V) and ElevenLabs voice cloning technology to create an unauthorized AI version of the famous naturalist David Attenborough narrating Holtz’s every move on camera. As of Thursday afternoon, the X post describing the stunt had garnered over 21,000 likes.
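
Conceptually, the loop behind a stunt like this is easy to approximate: grab a webcam frame, ask GPT-4V to describe it in the style of a nature documentary, then hand the text to a cloned voice for synthesis. The Python sketch below is a loose, hypothetical reconstruction, not Holtz’s actual code; the prompt, the cloned-voice ID, the frame source, and the timing are all assumptions.

```python
# Hypothetical reconstruction of the narrate-the-webcam loop (not Holtz's code).
# Assumes OPENAI_API_KEY and ELEVENLABS_API_KEY are set, and that another
# process periodically saves a webcam frame to webcam_frame.jpg.
import base64
import os
import time

import requests
from openai import OpenAI

client = OpenAI()
VOICE_ID = "your-cloned-voice-id"  # placeholder for a cloned narrator voice

def describe(image_b64: str) -> str:
    """Ask GPT-4V for a short, documentary-style description of the frame."""
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Narrate what this person is doing in the style of a "
                         "nature documentary. Two sentences, wry and vivid."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        max_tokens=150,
    )
    return resp.choices[0].message.content

def speak(text: str) -> bytes:
    """Send the text to ElevenLabs' text-to-speech endpoint; returns MP3 bytes."""
    r = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": text},
        timeout=60,
    )
    r.raise_for_status()
    return r.content

while True:
    with open("webcam_frame.jpg", "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("utf-8")
    with open("narration.mp3", "wb") as out:
        out.write(speak(describe(frame_b64)))
    time.sleep(10)  # re-narrate every few seconds
```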

“Here we have a remarkable specimen of Homo sapiens distinguished by his silver circular spectacles and a mane of tousled curly locks,” the false Attenborough says in the demo as Holtz looks on with a grin. “He’s wearing what appears to be a blue fabric covering, which can only be assumed to be part of his mating display.”

“Look closely at the subtle arch of his eyebrow,” it continues, as if narrating a BBC wildlife documentary. “It’s as if he’s in the midst of an intricate ritual of curiosity or skepticism. The backdrop suggests a sheltered habitat, possibly a communal feeding area or watering hole.”


Users in the extended European Economic Area will soon be able to avoid most of the things that feel so exhausting about Windows 11. (credit: Andrew Cunningham)

Using Windows these days means putting up with many, many pitches to use and purchase other Microsoft products. Some are subtle, like the built-in Edge browser suggesting you use its “recommended settings” after each major update. Some are not so subtle, like Microsoft testing a “quiz” that made some users explain why they were trying to quit the OneDrive app.

Those living in the European Economic Area (EEA)—which includes the EU and adds Iceland, Liechtenstein, and Norway—will soon get the volume turned down on their Windows 11 systems. To meet the demands of the European Commission’s Digital Markets Act—slated to be enforced in March 2024—Microsoft must make its apps easier to uninstall, its default settings easier to change, and its attempts at steering people toward its services easier to avoid.

Microsoft writes in a blog post that many of these changes will be available in a preview update of Windows 11 (version 23H2) this month. Windows 10 will get similar changes “at a later date.” A couple of the changes affect all Windows 10 and 11 users.


If you have a bunch of Windows systems, Microsoft now has an app for that. It’s called “Windows App.” Microsoft just has a certain way with naming things. (credit: Microsoft)

It feels strange to say it, but it’s true: There is an app called, simply, “Windows App.” It’s available for early testing on Mac, iOS and iPad, the web, Windows, and eventually Android, and it’s made by Microsoft. The fact that it exists, with such a strong and simple name, says something larger than the rather plain, early-stage app it is today.

“Windows App,” as named by Microsoft in a rare bit of minimalism, is essentially a convenient remote desktop connection to a Windows OS on a physical system, an Azure virtual desktop, a Dev Box, or elsewhere. There are some other tricks you can pull off, too, like using your local device’s webcam, speakers, and printer connections with your remote Windows system. But it’s easy to read a “Windows app” for multiple platforms, including web browsers, as the next step in Microsoft’s slow march toward making a virtual Windows OS convenient for everybody, whether on a business or personal account.

At the moment, you need a work or school Microsoft account to use most of the features beyond a traditional remote desktop connection. The Windows instance you’re connecting to must also be running a Pro edition, as Home editions can’t host a remote desktop session. There are, of course, many other ways to connect to a remote PC from nearly any device, such as RealVNC.


A composite of three DALL-E 3 AI art generations: an oil painting of Hercules fighting a shark, a photo of the queen of the universe, and a marketing photo of “Marshmallow Menace” cereal. (credit: DALL-E 3 / Benj Edwards)

In October, OpenAI launched its newest AI image generator—DALL-E 3—into wide release for ChatGPT subscribers. DALL-E can pull off media generation tasks that would have seemed absurd just two years ago—and although it can inspire delight with its unexpectedly detailed creations, it also brings trepidation for some. Science fiction forecast tech like this long ago, but seeing machines upend the creative order feels different when it’s actually happening before our eyes.
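
For those who want to poke at DALL-E 3 programmatically rather than through ChatGPT, OpenAI also exposes the model through its images API. Below is a minimal sketch, assuming an OPENAI_API_KEY in the environment and borrowing a prompt from the caption above.

```python
# Minimal sketch of a DALL-E 3 request through OpenAI's images API.
# Assumes OPENAI_API_KEY is set in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="An oil painting of Hercules fighting a shark",
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)

print(result.data[0].url)  # temporary URL of the generated image
```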

“It’s impossible to dismiss the power of AI when it comes to image generation,” says Aurich Lawson, Ars Technica’s creative director. “With the rapid increase in visual acuity and ability to get a usable result, there’s no question it’s beyond being a gimmick or toy and is a legit tool.”

With the advent of AI image synthesis, it’s looking increasingly like the future of media creation for many will come through the aid of creative machines that can replicate any artistic style, format, or medium. Media reality is becoming completely fluid and malleable. But how is AI image synthesis getting more capable so rapidly—and what might that mean for artists ahead?



Despite more than a decade of reminding, prodding, and downright nagging, a surprising number of developers still can’t bring themselves to keep their code free of credentials that provide the keys to their kingdoms to anyone who takes the time to look for them.

The lapse stems from immature coding practices in which developers embed cryptographic keys, security tokens, passwords, and other forms of credentials directly into the source code they write. The credentials make it easy for the underlying program to access databases or cloud services necessary for it to work as intended. I published one such PSA in 2013 after discovering that simple searches turned up dozens of accounts that appeared to expose credentials securing computer-to-server SSH accounts. One of the credentials appeared to grant access to an account on Chromium.org, the repository that stores the source code for Google’s open source browser.
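
The anti-pattern, and the most basic fix, look something like the sketch below. It is purely illustrative: the hard-coded value is the placeholder example key from AWS’s own documentation, and the names are made up.

```python
# Illustrative only: the hard-coded value is AWS's documentation example key,
# not a real credential, and the names are invented for this sketch.

# BAD: a secret embedded directly in source code. Anyone who can read the
# repository (or a public copy of it) now holds the key.
AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# BETTER: keep the secret out of the code entirely and load it at runtime
# from the environment (or a dedicated secrets manager).
import os

aws_secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]  # raises KeyError if unset
```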

In 2015, Uber learned the hard way just how damaging the practice can be. One or more developers for the ride service had embedded a unique security key into code and then shared that code on a public GitHub page. Hackers then copied the key and used it to access an internal Uber database and, from there, steal sensitive data belonging to 50,000 Uber drivers.


A photo of the Microsoft Azure Maia 100 chip that has been altered with splashes of color by the author to look as if AI itself were bursting forth from its silicon substrate. (credit: Microsoft | Benj Edwards)

On Wednesday at the Microsoft Ignite conference, Microsoft announced two custom chips designed for accelerating in-house AI workloads through its Azure cloud computing service: the Microsoft Azure Maia 100 AI Accelerator and the Microsoft Azure Cobalt 100 CPU.

Microsoft designed Maia specifically to run large language models like GPT-3.5 Turbo and GPT-4, which underpin its Azure OpenAI services and Microsoft Copilot (formerly Bing Chat). Maia packs 105 billion transistors and is manufactured on a 5-nm TSMC process. Meanwhile, Cobalt is a 128-core ARM-based CPU designed for conventional computing tasks like powering Microsoft Teams. Microsoft has no plans to sell either chip, reserving both for internal use.

As we’ve previously seen, Microsoft wants to be “the Copilot company,” and it will need a lot of computing power to meet that goal. According to Reuters, Microsoft and other tech firms have struggled with the high cost of delivering AI services, which can be 10 times more expensive than conventional services like search engines.


The Microsoft Copilot logo. (credit: Microsoft)

On Wednesday, Microsoft announced that Bing Chat—its famously once-unhinged AI chatbot—has been officially renamed “Microsoft Copilot.” The company also announced it will support OpenAI’s recently released GPTs, which are custom roles for its ChatGPT AI assistant.

The rebranding move consolidates Bing Chat into Microsoft’s somewhat confusing “Copilot” AI assistant naming scheme, which has a lineage that began with GitHub Copilot in 2021. In March this year, Microsoft announced Dynamics 365 Copilot, Copilot in Windows, Microsoft Security Copilot, and Microsoft 365 Copilot. Now Bing Chat is just “Microsoft Copilot”—its sixth copilot so far. Pretty soon, Microsoft will need a Branding Copilot to keep them all straight.

Regarding the naming scheme, Microsoft customer Amit Malik took to X and wrote, “I love Microsoft, but this whole copilot thing is becoming more confusing than it should be. Microsoft Copilot, Windows Copilot, M365 Copilot, then all the m365 apps, D365 copilot and so on. AI was supposed to simplify, not otherwise.” Note that Malik wrote that in September—nearly two months before the recent announcement.



On Tuesday, YouTube announced it will soon implement stricter measures on realistic AI-generated content hosted by the service. “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” the company wrote in a statement. The changes will roll out over the coming months and into next year.

The move by YouTube comes as part of a series of efforts by the platform to address challenges posed by generative AI in content creation, including deepfakes, voice cloning, and disinformation. When creators upload content, YouTube will provide new options to indicate if the content includes realistic AI-generated or AI-altered material. “For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do,” YouTube writes.

In the detailed announcement, Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube, explained that the policy update aims to maintain a positive ecosystem in the face of generative AI. “We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube,” they write. “We have long-standing policies that prohibit technically manipulated content that misleads viewers … However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.”
