Google has updated its Chrome browser to patch a high-severity zero-day vulnerability that allows attackers to execute malicious code on end user devices. The fix marks the fifth time this year the company has updated the browser to protect users from an existing malicious exploit.

The vulnerability, tracked as CVE-2024-4671, is a “use after free,” a class of bug that occurs in C-based programming languages. In these languages, developers must allocate the memory space needed to run certain applications or operations. They do this by using “pointers” that store the memory addresses where the required data will reside. Because this space is finite, memory locations should be deallocated once the application or operation no longer needs them.

Use-after-free bugs occur when the app or process fails to clear the pointer after freeing the memory location. In some cases, the stale pointer is used again after the freed memory has been reallocated to store malicious shellcode planted by an attacker’s exploit, a condition that results in the execution of that code.
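
To make the bug class concrete, here is a minimal sketch in C of a use-after-free; the variable names and the reuse-after-reallocation pattern are illustrative assumptions, not details of the Chrome flaw:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Allocate a buffer and keep a pointer to it. */
    char *session = malloc(16);
    if (session == NULL)
        return 1;
    strcpy(session, "trusted-data");

    /* Free the memory but fail to clear the pointer:
       `session` is now a dangling pointer. */
    free(session);

    /* A later allocation of the same size may be handed the
       just-freed block by the allocator... */
    char *attacker = malloc(16);
    if (attacker == NULL)
        return 1;
    strcpy(attacker, "attacker-data");

    /* ...so dereferencing the stale pointer reads the new,
       attacker-controlled contents. Real exploits steer this
       reuse toward shellcode or corrupted object data. */
    printf("%s\n", session); /* undefined behavior: use after free */

    free(attacker);
    return 0;
}
```

Whether the second malloc actually reuses the freed block depends on the allocator; arranging the heap so that it does (“heap grooming”) is a standard step in real-world exploitation.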

On Monday, Stack Overflow and OpenAI announced a new API partnership that will integrate Stack Overflow’s technical content with OpenAI’s ChatGPT AI assistant. However, the deal has sparked controversy among Stack Overflow’s user community, with many expressing anger and protest over the use of their contributed content to support and train AI models.

“I hate this. I’m just going to delete/deface my answers one by one,” wrote one user on sister site Stack Exchange. “I don’t care if this is against your silly policies, because as this announcement shows, your policies can change at a whim without prior consultation of your stakeholders. You don’t care about your users, I don’t care about you.”

Stack Overflow is a popular question-and-answer site where software developers ask and answer technical questions about coding. The site has a large community of developers who contribute knowledge and expertise to help others solve programming problems, and over the past decade it has become a heavily used resource for developers seeking solutions to common coding challenges.

For years, Dell customers have been on the receiving end of scam calls from people claiming to be part of the computer maker’s support team. The scammers call from a valid Dell phone number, know the customer’s name and address, and use information that should be known only to Dell and the customer, including the service tag number, computer model, and serial number associated with a past purchase. Then the callers attempt to scam the customer into making a payment, installing questionable software, or taking some other potentially harmful action.

Recently, according to numerous social media posts, Dell notified an unspecified number of customers that names, physical addresses, and hardware and order information associated with previous purchases were somehow connected to an “incident involving a Dell portal, which contains a database with limited types of customer information.” The vague wording, which Dell is declining to elaborate on, appears to confirm an April 29 post by Daily Dark Web reporting an offer to sell purported personal information of 49 million people who bought Dell gear from 2017 to 2024.

The categories of customer information affected are identical in both the Dell notification and the for-sale ad, which was posted to, and later removed from, Breach Forums, an online bazaar for people looking to buy or sell stolen data.

Researchers on Wednesday reported critical vulnerabilities in a widely used networking appliance that leave some of the world’s biggest networks open to intrusion.

The vulnerabilities reside in BIG-IP Next Central Manager, a component in the latest generation of the BIG-IP line of appliances, which organizations use to manage traffic going into and out of their networks. Seattle-based F5, which sells the product, says its gear is used in 48 of the top 50 corporations as tracked by Fortune. F5 describes the Next Central Manager as a “single, centralized point of control” for managing entire fleets of BIG-IP appliances.

BIG-IP gear performs load balancing, DDoS mitigation, and inspection and encryption of data entering and exiting large networks. That role puts the appliances at the network perimeter, where they act as a major pipeline to some of the most security-critical resources housed inside. Those characteristics have made BIG-IP appliances prime targets for hackers. In 2021 and 2022, attackers actively compromised BIG-IP appliances by exploiting vulnerabilities carrying severity ratings of 9.8 out of 10.

After reversing its position on remote work, Dell is reportedly implementing new tracking techniques on May 13 to ensure its workers are following the company’s return-to-office (RTO) policy, The Register reported today, citing anonymous sources.

Dell has allowed people to work remotely for over 10 years. But in February, it issued an RTO mandate, and as of this month, most workers are classified as either totally remote or hybrid. Starting this month, hybrid workers have to go into a Dell office at least 39 days per quarter, or roughly three days a week. Fully remote workers, meanwhile, are ineligible for promotion, Business Insider reported in March.

Now The Register reports that Dell will track employees’ badge swipes and VPN connections to confirm that workers are in the office for a significant amount of time.

[Image: A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries. (credit: Onyx Industries)]

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic “dogs” developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics’ quadrupedal unmanned ground vehicles (called “Q-UGVs” for short) for various applications, including reconnaissance and surveillance, it’s the possibility of arming them with weapons for remote engagement that may draw the most attention. But it’s not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx’s SENTRY remote weapons system (RWS), which features an AI-enabled digital imaging system that can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system keeps a human in the loop for all fire decisions; it cannot decide to fire autonomously.
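
As a purely hypothetical sketch of what “human-in-the-loop control for fire decisions” means structurally, the outline below separates automated detection from firing authorization; every name and type in it is invented for illustration and is not Onyx’s actual software:

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented types: automated detection may nominate targets, but
   only an explicit operator decision can authorize firing. */
typedef struct {
    const char *label;  /* e.g., "person", "drone", "vehicle" */
    double confidence;  /* detector's confidence score */
} Detection;

/* Stand-in for the imaging system's automatic detector. */
static Detection detect(void)
{
    return (Detection){ .label = "drone", .confidence = 0.97 };
}

/* Stand-in for the remote operator's decision. In a real system
   this is a human action; it is never computed by the software. */
static bool operator_authorizes(const Detection *d)
{
    printf("operator review: %s (%.0f%% confidence)\n",
           d->label, d->confidence * 100.0);
    return false; /* no autonomous path to authorization exists */
}

int main(void)
{
    Detection d = detect();
    if (operator_authorizes(&d))
        printf("engagement authorized by operator\n");
    else
        printf("hold: no human authorization\n");
    return 0;
}
```

The design point is that no code path leads from detect() to an engagement without passing through the operator function.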

[Image: Dmitry Yuryevich Khoroshev, aka LockBitSupp. (credit: UK National Crime Agency)]

Since at least 2019, a shadowy figure hiding behind several pseudonyms has publicly gloated about extorting millions of dollars from thousands of victims he and his associates hacked. Now, for the first time, “LockBitSupp” has been unmasked by an international law enforcement team, and a reward of up to $10 million has been offered for information leading to his arrest.

In an indictment unsealed Tuesday, US federal prosecutors unmasked the flamboyant persona as Dmitry Yuryevich Khoroshev, a 31-year-old Russian national. Prosecutors said that during his five years at the helm of LockBit, one of the most prolific ransomware groups, Khoroshev and his subordinates extorted $500 million from some 2,500 victims, roughly 1,800 of which were located in the US. His cut of the revenue was allegedly about $100 million.

Damage in the billions of dollars

“Beyond ransom payments and demands, LockBit attacks also severely disrupted their victims’ operations, causing lost revenue and expenses associated with incident response and recovery,” federal prosecutors wrote. “With these losses included, LockBit caused damage around the world totaling billions of U.S. dollars. Moreover, the data Khoroshev and his LockBit affiliate co-conspirators stole—containing highly sensitive organizational and personal information—remained unsecure and compromised in perpetuity, notwithstanding Khoroshev’s and his co-conspirators’ false promises to the contrary.”

Microsoft has introduced a GPT-4-based generative AI model designed specifically for US intelligence agencies that operates disconnected from the Internet, according to a Bloomberg report. This reportedly marks the first time Microsoft has deployed a major language model in a secure setting, designed to let spy agencies analyze top-secret information without connectivity risks and hold secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot. But it may also mislead officials if not used properly, due to the inherent design limitations of AI language models.

GPT-4 is a large language model (LLM) created by OpenAI that attempts to predict the most likely tokens (fragments of encoded data) in a sequence. It can be used to craft computer code and analyze information. When configured as a chatbot (like ChatGPT), GPT-4 can power AI assistants that converse in a human-like manner. Microsoft has a license to use the technology as part of a deal in exchange for large investments it has made in OpenAI.
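
As a toy illustration of “predicting the most likely token,” the sketch below runs one step of greedy decoding over a made-up four-word vocabulary; a real model scores a vocabulary of tens of thousands of tokens, and the scores here are invented for the example:

```c
#include <stdio.h>

int main(void)
{
    /* A real LLM assigns a score (logit) to every token in its
       vocabulary at each step; greedy decoding picks the highest. */
    const char *vocab[]  = { "the", "code", "cloud", "agency" };
    double      logits[] = {  0.2,   2.3,    1.1,     0.7 }; /* invented */

    int best = 0;
    for (int i = 1; i < 4; i++)
        if (logits[i] > logits[best])
            best = i;

    printf("next token: %s\n", vocab[best]); /* prints "code" */
    return 0;
}
```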

According to the report, the new AI service (which does not yet publicly have a name) addresses a growing interest among intelligence agencies in using generative AI to process classified data while mitigating the risk of data breaches or hacking attempts. ChatGPT normally runs on cloud servers provided by Microsoft, which can introduce data leak and interception risks. Along those lines, the CIA announced its own plan to create a ChatGPT-like service last year, but this Microsoft effort is reportedly a separate project.

Researchers have devised an attack against nearly all virtual private network applications that forces them to send and receive some or all traffic outside of the encrypted tunnel designed to protect it from snooping or tampering.

TunnelVision, as the researchers have named their attack, largely negates the entire purpose and selling point of VPNs, which is to encapsulate incoming and outgoing Internet traffic in an encrypted tunnel and to cloak the user’s IP address. The researchers believe it affects all VPN applications when they’re connected to a hostile network, and that there is no way to prevent such attacks except when the VPN runs on Linux or Android. They also said their attack technique may have been possible since 2002 and may already have been discovered and used in the wild.

Reading, dropping, or modifying VPN traffic

The effect of TunnelVision is that “the victim’s traffic is now decloaked and being routed through the attacker directly,” a video demonstration explained. “The attacker can read, drop or modify the leaked traffic and the victim maintains their connection to both the VPN and the Internet.”
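
The excerpt doesn’t spell out the attack’s mechanics, but routing-based decloaking rests on a fundamental rule: a host sends each packet via the most specific matching route, so any route narrower than the VPN’s catch-all captures the traffic it covers. Below is a toy C sketch of that longest-prefix-match rule; the prefixes and interface names are invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* A routing table entry: network prefix, prefix length, interface. */
struct route {
    uint32_t prefix;
    int len;
    const char *iface;
};

int main(void)
{
    struct route table[] = {
        { 0x00000000u,  0, "tun0" }, /* VPN catch-all: 0.0.0.0/0   */
        { 0xC0A80100u, 24, "eth0" }, /* narrower: 192.168.1.0/24   */
    };
    uint32_t dst = 0xC0A80105u;      /* destination 192.168.1.5    */

    const char *via = NULL;
    int best_len = -1;
    for (int i = 0; i < 2; i++) {
        /* Longest-prefix match: the most specific route wins. */
        uint32_t mask = table[i].len ? ~0u << (32 - table[i].len) : 0u;
        if ((dst & mask) == table[i].prefix && table[i].len > best_len) {
            best_len = table[i].len;
            via = table[i].iface;
        }
    }
    printf("192.168.1.5 -> %s\n", via); /* eth0, bypassing the tunnel */
    return 0;
}
```

In the sketch, traffic to 192.168.1.5 exits via eth0 in the clear even though the tun0 default route, and the VPN session behind it, remains up, which matches the demonstration’s point that the victim keeps their VPN connection.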

[Image: Mustafa Suleyman, co-founder and chief executive officer of Inflection AI UK Ltd., during a town hall on day two of the World Economic Forum (WEF) in Davos, Switzerland, on Wednesday, Jan. 17, 2024. Suleyman joined Microsoft in March. (credit: Getty Images)]

Microsoft is working on a new large-scale AI language model called MAI-1, which could potentially rival state-of-the-art models from Google, Anthropic, and OpenAI, according to a report by The Information. This marks the first time Microsoft has developed an in-house AI model of this magnitude since investing over $10 billion in OpenAI for the rights to reuse the startup’s AI models. OpenAI’s GPT-4 powers not only ChatGPT but also Microsoft Copilot.

The development of MAI-1 is being led by Mustafa Suleyman, the former Google AI leader who recently served as CEO of the AI startup Inflection before Microsoft acquired the majority of the startup’s staff and intellectual property for $650 million in March. Although MAI-1 may build on techniques brought over by former Inflection staff, it is reportedly an entirely new large language model (LLM), as confirmed by two Microsoft employees familiar with the project.

With approximately 500 billion parameters, MAI-1 will be significantly larger than Microsoft’s previous open source models (such as Phi-3, which we covered last month), requiring more computing power and training data. This reportedly places MAI-1 in the same league as OpenAI’s GPT-4, which is rumored to have over 1 trillion parameters (in a mixture-of-experts configuration), and well above smaller models such as the 70-billion-parameter models from Meta and Mistral.
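
For a rough sense of scale, here is a back-of-envelope C calculation of weight storage, assuming 2 bytes per parameter (16-bit weights); both the assumption and the outputs are estimates, not reported figures:

```c
#include <stdio.h>

int main(void)
{
    /* Back-of-envelope weight storage at 2 bytes per parameter
       (fp16/bf16). Estimates only, not reported figures. */
    const double bytes_per_param = 2.0;
    const double mai1_params  = 500e9; /* reported ~500 billion     */
    const double small_params = 70e9;  /* Meta/Mistral-class models */

    printf("MAI-1 weights: ~%.0f GB\n",
           mai1_params * bytes_per_param / 1e9);  /* ~1000 GB */
    printf("70B-class weights: ~%.0f GB\n",
           small_params * bytes_per_param / 1e9); /* ~140 GB  */
    return 0;
}
```

At roughly a terabyte of weights, even inference would have to be sharded across multiple accelerators, consistent with the report’s note that MAI-1 requires more computing power than Microsoft’s smaller Phi models.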
