One of the oldest maxims in hacking is that once an attacker has physical access to a device, it’s game over for its security. The basis is sound. It doesn’t matter how locked down a phone, computer, or other machine is; if someone intent on hacking it gains the ability to physically manipulate it, the chances of success are all but guaranteed.

In the age of cloud computing, this widely accepted principle is no longer universally true. Some of the world’s most sensitive information—health records, financial account information, sealed legal documents, and the like—now often resides on servers that receive day-to-day maintenance from unknown administrators working in cloud centers thousands of miles from the companies responsible for safeguarding it.

Bad (RAM) to the bone

In response, chipmakers have begun baking protections into their silicon to provide assurances that even if a server has been physically tampered with or infected with malware, sensitive data funneled through virtual machines can’t be accessed without an encryption key that’s known only to the VM administrator. Under this scenario, admins inside the cloud provider, law enforcement agencies with a court warrant, and hackers who manage to compromise the server are out of luck.

On Monday, Reddit announced it would test an AI-powered search feature called “Reddit Answers” that uses an AI model to create summaries from existing Reddit posts to respond to user questions, reports Reuters.

The feature generates responses by searching through Reddit’s vast collection of community discussions and comments. When users ask questions, Reddit Answers provides summaries of relevant conversations and includes links to related communities and posts.

The move potentially puts Reddit in competition with traditional search engines like Google and newer AI search tools like those from OpenAI and Perplexity. But while other companies pull information from across the Internet, Reddit Answers focuses only on content within Reddit’s platform.

On Monday, OpenAI released Sora Turbo, a new version of its text-to-video generation model, making it available to ChatGPT Plus and Pro subscribers through a dedicated website. The model generates videos up to 20 seconds long at resolutions reaching 1080p from a text or image prompt.

OpenAI announced that Sora would be available immediately to ChatGPT Plus and Pro subscribers in the US and many other parts of the world, though not yet in Europe. As of early Monday afternoon, however, even existing Plus subscribers trying to use the tool were being presented with a message that “sign ups are temporarily unavailable” thanks to “heavy traffic.”

Out of an abundance of caution, OpenAI is limiting Sora’s ability to generate videos of people for the time being. At launch, uploads involving human subjects face restrictions while OpenAI refines its deepfake prevention systems. The platform also blocks content involving CSAM and sexual deepfakes. OpenAI says it maintains an active monitoring system and conducted testing to identify potential misuse scenarios before release.

On Tuesday, the US Federal Bureau of Investigation advised Americans to share a secret word or phrase with their family members to protect against AI-powered voice-cloning scams, as criminals increasingly use voice synthesis to impersonate loved ones in crisis.

“Create a secret word or phrase with your family to verify their identity,” wrote the FBI in an official public service announcement (I-120324-PSA).

For example, you could tell your parents, children, or spouse to ask for a word or phrase to verify your identity if something seems suspicious, such as “The sparrow flies at midnight,” “Greg is the king of burritos,” or simply “flibbertigibbet.” (As fun as these sound, your actual phrase should be kept secret and not copied from these examples.)

Broadcom will no longer deal directly with VMware’s 2,000 biggest customers. Instead, it will handle only VMware’s 500 biggest customers directly, giving channel partners the opportunity to participate in deals and provide additional value for VMware customers. The reversal is being viewed as an effort by Broadcom to discourage migrations from VMware, but there’s skepticism about how much impact it will truly have.

Various customers have lamented the changes that followed Broadcom’s acquisition of VMware about a year ago. Controversial moves have included ending perpetual license sales, bundling VMware products into a smaller number of SKUs, and ending VMware’s channel partner program. These changes have led some firms to consider reducing their business with VMware.

This week, for example, UK-headquartered cloud operator Beeks Group said that a 1,000 percent increase in VMware costs led it to move most of its 20,000-plus virtual machines to OpenNebula. And numerous customers that Ars Technica has spoken with over the last year are seriously researching or planning total or partial VMware migrations.

On Thursday, during a live demo as part of its “12 days of OpenAI” event, OpenAI announced a new tier of ChatGPT with higher usage limits for $200 a month, along with the full version of “o1,” the so-called reasoning model the company debuted in preview form in September.

Unlike o1-preview, o1 can now process images as well as text (similar to GPT-4o), and it is reportedly much faster than o1-preview. In a demo question about a Roman emperor, o1 answered in 14 seconds, while o1-preview took 33 seconds. According to OpenAI, o1 makes major mistakes 34% less often than o1-preview, while “thinking” 50% faster. The model will also reportedly become even faster once OpenAI finishes transitioning its GPUs to the new model.

12 Days of OpenAI: Day 1 video

Whether the new ChatGPT Pro subscription will be worth the $200 a month fee isn’t yet fully clear, but the company did specify that users will have access to an even more capable version of o1 called “o1 Pro Mode” that will do even deeper reasoning searches and provide “more thinking power for more difficult problems” before answering.

As the AI industry grows in size and influence, the companies involved have begun making stark choices about where they land on issues of life and death. For example, can their AI models be used to guide weapons or make targeting decisions? Different companies have answered this question in different ways, but for ChatGPT maker OpenAI, what started as a hard line against weapons development and military applications has slipped away over time.

On Wednesday, defense-tech company Anduril Industries—started by Oculus founder Palmer Luckey in 2017—announced a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks.

The companies say their AI models will process data to reduce the workload on humans. “As part of the new initiative, Anduril and OpenAI will explore how leading-edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness,” Anduril said in a statement.

In recent years, commercial spyware has been deployed by more actors against a wider range of victims, but the prevailing narrative has still been that the malware is used in targeted attacks against an extremely small number of people. At the same time, though, it has been difficult to check devices for infection, leading individuals to navigate an ad hoc array of academic institutions and NGOs that have been on the front lines of developing forensic techniques to detect mobile spyware. On Tuesday, the mobile device security firm iVerify is publishing findings from a spyware detection feature it launched in May. Of 2,500 device scans that the company’s customers elected to submit for inspection, seven revealed infections by the notorious NSO Group malware known as Pegasus.

The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool regularly checks devices for potential compromise. The company also offers the feature to anyone who downloads the $1 iVerify Basics app; those users can walk through steps to generate and send a special diagnostic utility file to iVerify, receive analysis within hours, and run the tool once a month. iVerify’s infrastructure is built to be privacy-preserving, but to run the Mobile Threat Hunting feature, users must enter an email address so the company has a way to contact them if a scan turns up spyware—as it did in the seven recent Pegasus discoveries.
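To make the detection approach concrete, here is a minimal sketch of the simplest of the three techniques mentioned above, signature-based scanning of a device diagnostic file. This is not iVerify’s actual implementation, and the indicator strings below are hypothetical placeholders, not real Pegasus signatures:

```python
# Hypothetical indicators of compromise; real threat-hunting tools ship
# curated signature sets for process names, domains, and file paths.
KNOWN_BAD_INDICATORS = {
    "com.example.bad.daemon",      # hypothetical malicious process name
    "suspicious-c2.example.com",   # hypothetical command-and-control domain
}

def scan_diagnostic_log(lines):
    """Return the set of known indicators found in a diagnostic log."""
    hits = set()
    for line in lines:
        for indicator in KNOWN_BAD_INDICATORS:
            if indicator in line:
                hits.add(indicator)
    return hits
```

In practice, signature matching like this catches only known samples, which is why the feature also layers on heuristics and machine learning to flag anomalous behavior that has no existing signature.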

“The really fascinating thing is that the people who were targeted were not just journalists and activists, but business leaders, people running commercial enterprises, people in government positions,” says Rocky Cole, chief operating officer of iVerify and a former US National Security Agency analyst. “It looks a lot more like the targeting profile of your average piece of malware or your average APT group than it does the narrative that’s been out there that mercenary spyware is being abused to target activists. It is doing that, absolutely, but this cross section of society was surprising to find.”

Hackers pocketed as much as $155,000 by sneaking a backdoor into a code library used by developers of smart contract apps that work with the cryptocurrency known as Solana.

The supply-chain attack targeted solana-web3.js, a collection of JavaScript code used by developers of decentralized apps for interacting with the Solana blockchain. These “dapps” allow people to sign smart contracts that, in theory, operate autonomously in executing currency trades among two or more parties when certain agreed-upon conditions are met.

The backdoor came in the form of code that collected private keys and wallet addresses when apps that directly handled private keys incorporated solana-web3.js versions 1.95.6 and 1.95.7. These backdoored versions were available for download during a five-hour window between 3:20 pm UTC and 8:25 pm UTC on Tuesday.
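For developers worried about exposure, the immediate check is whether a project’s lockfile pins either of the two backdoored releases. The sketch below assumes the npm package-lock v2/v3 layout, where a `packages` map keys install paths to version metadata; it is an illustrative check, not an official remediation tool:

```python
import json

# The two backdoored releases named in the advisory.
COMPROMISED_VERSIONS = {"1.95.6", "1.95.7"}

def check_lockfile(lock_json: str) -> list:
    """Return install paths locked to a backdoored @solana/web3.js release."""
    data = json.loads(lock_json)
    flagged = []
    for path, meta in data.get("packages", {}).items():
        # npm records the library under its package name, @solana/web3.js
        if path.endswith("@solana/web3.js") and meta.get("version") in COMPROMISED_VERSIONS:
            flagged.append(path)
    return flagged
```

Projects that pull in an affected version would also need to rotate any private keys the app handled during the exposure window, since version upgrades alone don’t revoke stolen credentials.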

On Wednesday, OpenAI CEO Sam Altman announced a “12 days of OpenAI” event starting December 5, during which the company will unveil new AI features and products over 12 consecutive weekdays.

Altman did not specify the exact features or products OpenAI plans to unveil, but a report from The Verge about this “12 days of shipmas” event suggests the products may include a public release of the company’s text-to-video model Sora and a new “reasoning” AI model similar to o1-preview. Perhaps we may even see DALL-E 4 or a new image generator based on GPT-4o’s multimodal capabilities.

Altman’s full tweet included hints at releases both big and small:
