
Broadcom has moved forward with plans to transition VMware, the virtualization and cloud computing company it acquired, into a subscription-based business. As of December 11, Broadcom no longer sells perpetual licenses for VMware products. VMware, whose $61 billion acquisition by Broadcom closed in November, also announced on Monday that it will no longer sell support and subscription (SnS) contracts for VMware products with perpetual licenses. Moving forward, VMware will offer only term licenses or subscriptions, according to a VMware blog post.

VMware customers with perpetual licenses and active support contracts can continue using them. VMware “will continue to provide support as defined in contractual commitments,” Krish Prasad, senior vice president and general manager for VMware’s Cloud Foundation Division, wrote. But when customers’ SnS terms end, they won’t have any support.

Broadcom hopes this will push customers into subscriptions, and for those who switch from perpetual licensing to a subscription, it’s offering “upgrade pricing incentives” that the blog post doesn’t detail.


Image: The New Essential Guide to Electronics in Shenzhen is made to be pointed at, rapidly, in a crowded environment. (credit: Machinery Enchantress / Crowd Supply)

“Hong Kong has better food, Shanghai has better nightlife. But when it comes to making things—no one can beat Shenzhen.”

Many things about the Hua Qiang market in Shenzhen, China, are different than they were in 2016, when Andrew “bunnie” Huang’s Essential Guide to Electronics in Shenzhen was first published. But the importance of the world’s premier electronics market, and the need for help navigating it, are constants. That’s why the book is getting an authorized, crowdfunded revision, the New Essential Guide, written by noted maker and Shenzhen native Naomi Wu and due to ship in April 2024.

Video: Naomi Wu’s narrated introduction to the New Essential Guide to Electronics in Shenzhen.

Huang notes on the crowdfunding page that Wu’s “strengths round out my weaknesses.” Wu speaks Mandarin, lives in Shenzhen, and is more familiar with Shenzhen, and China, as they are today. Shenzhen has grown by more than 2 million people, the central Huaqiangbei Road has been replaced by a car-free boulevard, and the city’s metro system has added more than 100 kilometers of track and dozens of new stations. As happens anywhere, market vendors have also changed locations, payment and communications systems have modernized, and customs have shifted.


Image: An illustration of a robot holding a French flag, figuratively reflecting the rise of AI in France due to Mistral. It’s hard to draw a picture of an LLM, so a robot will have to do. (credit: Getty Images)

On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a “mixture of experts” (MoE) model with open weights that reportedly matches OpenAI’s GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI’s Andrej Karpathy and Nvidia’s Jim Fan. That means we’re closer to having a GPT-3.5-level AI assistant that can run freely and locally on our devices, given the right implementation.
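“Mixture of experts” refers to an architecture in which a small router network sends each token through only a few specialized sub-networks rather than through every parameter. Here is a toy sketch of that routing idea in PyTorch; it illustrates the general technique only, is not Mixtral’s actual code, and every name and size in it is made up for the example.

```python
# A toy sketch of mixture-of-experts routing. Not Mixtral's architecture;
# all names and sizes are invented for illustration.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" here is a single linear layer; MoE transformers use
        # much larger expert MLPs inside each block.
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)  # router: scores experts per token
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        scores = self.gate(x)                              # (tokens, experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Only the chosen experts run for each token, which is why an MoE can
        # hold many parameters while spending far less compute per token.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```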

Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, Mistral’s models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google. (In this context “weights” are the computer files that represent a trained neural network.)
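Because the weights are open, running Mixtral locally amounts to pointing a library at the published checkpoint. Below is a minimal sketch using the Hugging Face transformers library; the mistralai/Mixtral-8x7B-Instruct-v0.1 checkpoint ID is the instruct build Mistral published, but the generation settings are illustrative, device_map="auto" needs the accelerate package, and the unquantized model needs far more memory than a typical laptop has.

```python
# A minimal sketch of loading Mixtral's open weights locally with Hugging
# Face transformers. Generation settings are illustrative; the unquantized
# weights require tens of gigabytes of memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```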

Mixtral 8x7B can process a 32K-token context window and works in French, German, Spanish, Italian, and English. It works much like ChatGPT in that it can assist with compositional tasks, analyze data, troubleshoot software, and write programs. Mistral claims that it outperforms Meta’s much larger LLaMA 2 70B (70 billion-parameter) large language model and that it matches or exceeds OpenAI’s GPT-3.5 on certain benchmarks, according to a comparison chart the company published.


Image: A woman scans a QR code in a café to see the menu online.

The US Federal Trade Commission has become the latest organization to warn against the growing use of QR codes in scams that attempt to take control of smartphones, make fraudulent charges, or obtain personal information.

Short for quick response codes, QR codes are two-dimensional bar codes that automatically open a Web browser or app when they’re scanned with a phone camera. Restaurants, parking garages, merchants, and charities display them to make it easy for people to open online menus or make online payments. QR codes are also used in security-sensitive contexts. YouTube, Apple TV, and dozens of other TV apps, for instance, allow someone to sign in to their account by scanning a QR code displayed on the screen. The code opens a page in the phone’s browser or app, where the user is already signed in; once open, the page authenticates that same account on the TV app. Two-factor authentication apps offer a similar flow using QR codes when enrolling a new account.

The ubiquity of QR codes and the trust placed in them haven’t been lost on scammers, however. For more than two years now, parking lot kiosks that let people pay through their phones have been a favorite target. Scammers paste their own QR codes over the legitimate ones; the scam codes lead to look-alike sites that funnel funds to fraudulent accounts rather than the ones controlled by the parking garage.
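Because a QR code’s payload is usually just a URL string, the one check that defeats most of these scams is validating where the decoded URL actually points before following it. Here is a minimal sketch of that check in Python; the allowlisted host is hypothetical, and a real scanning app would maintain its own list.

```python
# A minimal sketch of validating a decoded QR payload before opening it.
# The allowlisted host below is hypothetical.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"pay.example-parking.com"}  # hypothetical known-good host

def is_safe_qr_payload(payload: str) -> bool:
    """Accept only HTTPS URLs whose hostname is on the allowlist."""
    url = urlparse(payload)
    return url.scheme == "https" and url.hostname in TRUSTED_HOSTS

print(is_safe_qr_payload("https://pay.example-parking.com/lot/42"))  # True
print(is_safe_qr_payload("https://pay-example-parking.co/lot/42"))   # False (look-alike)
```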



In late November, some ChatGPT users began to notice that GPT-4 was becoming more “lazy,” reportedly refusing to do some tasks or returning simplified results. OpenAI has since admitted that it’s an issue, but the company isn’t sure why. The answer may be what some are calling the “winter break hypothesis.” While unproven, the fact that AI researchers are taking the idea seriously shows how weird the world of AI language models has become.

“We’ve heard all your feedback about GPT4 getting lazier!” tweeted the official ChatGPT account on Thursday. “We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

On Friday, an X account named Martian openly wondered if LLMs might simulate seasonal depression. Later, Mike Swoopskee tweeted, “What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?”
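One way people have probed the hypothesis is to hold the request constant, vary only the date claimed in the system prompt, and compare how long the model’s answers are. Below is a hedged sketch of that style of test using the OpenAI Python SDK; the model name, prompts, and response-length metric are assumptions, and a single pair of calls proves nothing without many repeated samples.

```python
# A hedged sketch of a "winter break" test: claim a different date in the
# system prompt and compare response lengths. Prompts and model name are
# assumptions; run many samples before drawing any conclusion.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_length(claimed_date: str, task: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Today's date is {claimed_date}."},
            {"role": "user", "content": task},
        ],
    )
    return len(resp.choices[0].message.content)

task = "Write a Python function that parses an ISO 8601 timestamp."
print("May:     ", answer_length("2023-05-15", task))
print("December:", answer_length("2023-12-15", task))
```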



Grok, the AI language model created by Elon Musk’s xAI, went into wide release last week, and people have begun spotting glitches. On Friday, security tester Jax Winterbourne tweeted a screenshot of Grok denying a query with the statement, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” That made ears perk up online since Grok isn’t made by OpenAI—the company responsible for ChatGPT, which Grok is positioned to compete with.

Interestingly, xAI representatives did not deny that this behavior occurs with its AI model. In reply, xAI employee Igor Babuschkin wrote, “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem. Don’t worry, no OpenAI code was used to make Grok.”

In reply to Babuschkin, Winterbourne wrote, “Thanks for the response. I will say it’s not very rare, and occurs quite frequently when involving code creation. Nonetheless, I’ll let people who specialize in LLM and AI weigh in on this further. I’m merely an observer.”


AI regulation will begin in the EU

Image: EU Commissioner Thierry Breton talks to media during a press conference in June. (credit: Thierry Monasse | Getty Images)

European Union lawmakers have agreed on the terms of landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime for the development of the technology.

Thierry Breton, EU commissioner, confirmed in a post on X that a deal had been reached.

He called it a historic agreement. “The EU becomes the very first continent to set clear rules for the use of AI,” he wrote. “The AIAct is much more than a rulebook—it’s a launchpad for EU start-ups and researchers to lead the global AI race.”



Stealthy and multifunctional Linux malware that has been infecting telecommunications companies went largely unnoticed for two years until being documented for the first time by researchers on Thursday.

Researchers from security firm Group-IB have named the remote access trojan “Krasue,” after a nocturnal spirit depicted in Southeast Asian folklore “floating in mid-air, with no torso, just her intestines hanging from below her chin.” The researchers chose the name because evidence to date shows the malware almost exclusively targets victims in Thailand and “poses a severe risk to critical systems and sensitive data given that it is able to grant attackers remote access to the targeted network.”


Image: A still from Google’s misleading Gemini AI promotional video, released Wednesday. (credit: Google)

Google is facing controversy among AI experts for a deceptive Gemini promotional video released Wednesday that appears to show its new AI model recognizing visual cues and interacting vocally with a person in real time. As reported by Parmy Olson for Bloomberg, Google has admitted that was not the case. Instead, the researchers fed still images to the model and edited together successful responses, partially misrepresenting the model’s capabilities.

“We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges,” a Google spokesperson told Olson. “Then we prompted Gemini using still image frames from the footage, & prompting via text.” As Olson points out, Google filmed a pair of human hands doing activities, then showed still images to Gemini Ultra one by one. Google researchers interacted with the model through text, not voice, then picked the best interactions and edited them together with voice synthesis to make the video.
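Per Google’s own description, the workflow was closer to the following sketch than to live video: grab individual frames from footage, then send each frame plus a text prompt to the model. The OpenCV frame extraction is standard; the google.generativeai calls mirror Google’s public Python SDK at Gemini’s launch, but the exact model name and API shape here should be treated as assumptions.

```python
# A hedged sketch of still-frame prompting, as Google described doing.
# The video path is hypothetical; the gemini-pro-vision model name and SDK
# calls are assumptions based on Google's public Python SDK at launch.
import cv2
import PIL.Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")   # placeholder key
model = genai.GenerativeModel("gemini-pro-vision")

cap = cv2.VideoCapture("hands_demo.mp4")  # hypothetical footage
cap.set(cv2.CAP_PROP_POS_MSEC, 5_000)     # seek to the 5-second mark
ok, frame = cap.read()                    # one still frame, BGR layout
cap.release()

if ok:
    still = PIL.Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # One text prompt plus one still image, not a live video stream:
    response = model.generate_content(["What game is being played here?", still])
    print(response.text)
```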

Right now, running still images and text through massive large language models is computationally intensive, which makes real-time video interpretation largely impractical. That was one of the clues that first led AI experts to believe the video was misleading.


Image: Three images generated by “Imagine with Meta AI” using the Emu AI model. (credit: Meta | Benj Edwards)

On Wednesday, Meta released a free standalone AI image generator website, “Imagine with Meta AI,” based on its Emu image synthesis model. Meta used 1.1 billion publicly visible Facebook and Instagram images to train the AI model, which can render a novel image from a written prompt. Previously, Meta’s version of this technology—using the same data—was only available in messaging and social networking apps such as Instagram.

If you’re on Facebook or Instagram, it’s quite possible a picture of you (or one you took) helped train Emu. In a way, the old saying “If you’re not paying for it, you are the product” has taken on a whole new meaning. Then again, as of 2016 Instagram users were already uploading over 95 million photos a day, so the 1.1 billion images Meta trained on represent less than two weeks of uploads at that rate, a small subset of its overall photo library.

Since Meta says it only uses publicly available photos for training, setting your photos private on Instagram or Facebook should prevent their inclusion in the company’s future AI model training (unless it changes that policy, of course).
