On Thursday, Baltimore County Police arrested Pikesville High School’s former athletic director, Dazhon Darien, and charged him with using AI to impersonate Principal Eric Eiswert, according to a report by The Baltimore Banner. Police say Darien used AI voice synthesis software to simulate Eiswert’s voice, leading the public to believe the principal made racist and antisemitic comments.

The audio clip, posted on a popular Instagram account, contained offensive remarks about “ungrateful Black kids” and their academic performance, as well as a threat to “join the other side” if the speaker received one more complaint from “one more Jew in this community.” The recording also mentioned names of staff members, including Darien’s nickname “DJ,” suggesting they should not have been hired or should be removed “one way or another.”

The comments led to significant uproar from students, faculty, and the wider community, many of whom initially believed the principal had actually made the comments. A Pikesville High School teacher named Shaena Ravenell reportedly played a large role in disseminating the audio. While she has not been charged, police indicated that she forwarded an email containing the recording to a student known for their ability to quickly spread information through social media. That student then amplified the audio’s reach, sharing it with the media and the NAACP.

Hackers backed by a powerful nation-state have been exploiting two zero-day vulnerabilities in Cisco firewalls in a five-month-long campaign that breaks into government networks around the world, researchers reported Wednesday.

The attacks against Cisco’s Adaptive Security Appliances firewalls are the latest in a rash of network compromises that target firewalls, VPNs, and network-perimeter devices, which are designed to provide a moated gate of sorts that keeps remote hackers out. Over the past 18 months, threat actors—mainly backed by the Chinese government—have turned this security paradigm on its head in attacks that exploit previously unknown vulnerabilities in security appliances from the likes of Ivanti, Atlassian, Citrix, and Progress. These devices are ideal targets because they sit at the edge of a network, provide a direct pipeline to its most sensitive resources, and interact with virtually all incoming communications.

Cisco ASA likely one of several targets

On Wednesday, it was Cisco’s turn to warn that its ASA products have received such treatment. Since November, a previously unknown actor tracked as UAT4356 by Cisco and STORM-1849 by Microsoft has been exploiting two zero-days in attacks that go on to install two pieces of never-before-seen malware, researchers with Cisco’s Talos security team said. The researchers highlighted several notable traits in the attacks.

On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.

The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.

In the published 358-page agenda for the meeting, the committee offers a definition of a deepfake and discusses the problems AI-generated media may pose in legal trials.

Hackers abused an antivirus service for five years in order to infect end users with malware. The attack worked because the service delivered updates over HTTP, a protocol vulnerable to attacks that corrupt or tamper with data as it travels over the Internet.

The unknown hackers, who may have ties to the North Korean government, pulled off this feat by performing a man-in-the-middle (MitM) attack that replaced the genuine update with a file that installed an advanced backdoor instead, researchers from security firm Avast said today.

eScan, an AV service headquartered in India, has delivered updates over HTTP since at least 2019, Avast researchers reported. This protocol presented a valuable opportunity for installing the malware, which is tracked in security circles under the name GuptiMiner.
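
To see what the fix looks like in practice, here is a minimal Python sketch of the standard mitigation: fetch the update over TLS and refuse any payload that doesn’t match a pinned SHA-256 digest. The URL and digest below are hypothetical placeholders, not eScan’s actual endpoints or update format.

    import hashlib
    import urllib.request

    # Hypothetical endpoint and digest -- placeholders, not eScan's real
    # infrastructure. The known-good digest would ship out of band.
    UPDATE_URL = "https://updates.example.com/av/update.bin"
    EXPECTED_SHA256 = "0" * 64  # stand-in for the published digest

    def fetch_and_verify(url: str, expected: str) -> bytes:
        """Download over TLS, then reject any payload whose hash differs.

        Plain HTTP provides neither server authentication nor integrity,
        which is what let the attackers swap in their own file.
        """
        with urllib.request.urlopen(url) as resp:  # TLS authenticates server
            payload = resp.read()
        digest = hashlib.sha256(payload).hexdigest()
        if digest != expected:
            raise ValueError(f"update digest mismatch: {digest}")
        return payload

Digitally signed updates, verified before installation, can provide the same integrity guarantee even when the transport itself is plain HTTP.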

On Tuesday, Microsoft announced a new, freely available lightweight AI language model named Phi-3-mini, which is simpler and less expensive to operate than traditional large language models (LLMs) like OpenAI’s GPT-4 Turbo. Its small size makes it well-suited to running locally and could put an AI model with capability similar to the free version of ChatGPT on a smartphone, with no Internet connection required to run it.

The AI field typically measures AI language model size by parameter count. Parameters are numerical values in a neural network that determine how the language model processes and generates text. They are learned during training on large datasets and essentially encode the model’s knowledge into quantified form. More parameters generally allow the model to capture more nuanced and complex language-generation capabilities but also require more computational resources to train and run.

Some of the largest language models today, like Google’s PaLM 2, have hundreds of billions of parameters. OpenAI’s GPT-4 is rumored to have over a trillion parameters, spread across eight 220-billion-parameter models in a mixture-of-experts configuration. Both models require heavy-duty data center GPUs (and supporting systems) to run properly.
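
A quick back-of-the-envelope calculation shows why parameter count maps so directly onto hardware requirements: just storing the weights takes roughly the parameter count times the bytes used per parameter. The sketch below uses Microsoft’s stated 3.8-billion-parameter size for Phi-3-mini; the precision choices are illustrative.

    def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
        """Lower bound on memory needed just to hold the model weights."""
        return num_params * bytes_per_param / 1024**3

    # Phi-3-mini: 3.8 billion parameters, per Microsoft's announcement.
    print(f"Phi-3-mini, 16-bit weights: {weight_memory_gb(3.8e9, 2):7.1f} GB")
    print(f"Phi-3-mini, 4-bit weights:  {weight_memory_gb(3.8e9, 0.5):7.1f} GB")
    # The rumored GPT-4 configuration (8 x 220B), 16-bit, for contrast:
    print(f"8 x 220B, 16-bit weights:   {weight_memory_gb(8 * 220e9, 2):7.1f} GB")

At 4-bit precision, Phi-3-mini’s weights fit in under 2 GB, which is why running it on a phone is plausible, while the trillion-parameter class remains firmly in data-center territory.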

Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years in attacks that targeted a vast array of organizations with a previously undocumented backdoor, the software maker disclosed Monday.

When Microsoft patched the vulnerability in October 2022—at least two years after it came under attack by the Russian hackers—the company made no mention that it was under active exploitation. As of publication, the company’s advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks.

Exploiting CVE-2022-38028, as the vulnerability is tracked, allows attackers to gain system privileges, the highest available in Windows, when combined with a separate exploit. Exploiting the flaw, which carries a 7.8 severity rating out of a possible 10, requires low existing privileges and little complexity. It resides in the Windows print spooler, a printer-management component that has harbored previous critical zero-days. Microsoft said at the time that it learned of the vulnerability from the US National Security Agency.

A sample image from Microsoft for “VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time.” (credit: Microsoft)

On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

“It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors,” reads the abstract of the accompanying research paper titled, “VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time.” It’s the work of Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo.

The VASA framework (short for “Visual Affective Skills Animator”) uses machine learning to analyze a static image along with a speech audio clip. It is then able to generate a realistic video with precise facial expressions, head movements, and lip-syncing to the audio. It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.

On Thursday, Meta unveiled early versions of its Llama 3 open-weights AI model that can be used to power text composition, code generation, or chatbots. It also announced that its Meta AI Assistant is now available on a website and is going to be integrated into its major social media apps, intensifying the company’s efforts to position its products against other AI assistants like OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini.

Like its predecessor, Llama 2, Llama 3 is notable for being a freely available, open-weights large language model (LLM) provided by a major AI company. Llama 3 technically does not qualify as “open source” because that term has a specific meaning in software (as we have mentioned in other coverage), and the industry has not yet settled on terminology for AI model releases that ship either code or weights with restrictions (you can read Llama 3’s license here) or that ship without providing training data. We typically call these releases “open weights” instead.

At the moment, Llama 3 is available in two parameter sizes: 8 billion (8B) and 70 billion (70B), both offered as free downloads through Meta’s website with a sign-up. Llama 3 comes in two versions: pre-trained (basically the raw, next-token-prediction model) and instruction-tuned (fine-tuned to follow user instructions). Each has an 8,192-token context limit.
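
For those who want to experiment, a minimal sketch of loading the instruction-tuned 8B model with Hugging Face’s transformers library might look like the following. The model identifier matches Meta’s Hub release at launch, but treat the exact ID and settings as assumptions rather than official sample code.

    # A sketch, not Meta's official example. Assumes the transformers and
    # accelerate packages are installed, the Llama 3 license has been
    # accepted on the Hugging Face Hub, and `huggingface-cli login` was run.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # Hub ID at launch
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Format the prompt with the instruction-tuned model's chat template.
    messages = [{"role": "user", "content": "What does 'open weights' mean?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                           skip_special_tokens=True))

The 8B model’s 16-bit weights occupy roughly 16 GB on their own, so a consumer GPU with quantization, or a well-provisioned workstation, is a realistic minimum.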

Users of the password manager LastPass were recently targeted by a convincing phishing campaign that used a combination of email, SMS, and voice calls to trick targets into divulging their master passwords, company officials said.

The attackers used an advanced phishing-as-a-service kit discovered in February by researchers from mobile security firm Lookout. Dubbed CryptoChameleon for its focus on cryptocurrency accounts, the kit provides all the resources needed to trick even relatively savvy people into believing the communications are legitimate. Elements include high-quality URLs, a counterfeit single sign-on page for the service the target is using, and everything needed to make voice calls or send emails or texts in real time as targets are visiting a fake site. The end-to-end service can also bypass multi-factor authentication in the event a target is using the protection.

LastPass in the crosshairs

Lookout said that LastPass was one of dozens of sensitive services or sites CryptoChameleon was configured to spoof. Others targeted included the Federal Communications Commission, Coinbase and other cryptocurrency exchanges, and email, password management, and single sign-on services including Okta, iCloud, and Outlook. When Lookout researchers accessed a database one CryptoChameleon subscriber used, they found that a high percentage of the contents collected in the scams appeared to be legitimate email addresses, passwords, one-time-password tokens, password reset URLs, and photos of driver’s licenses. Typically, such databases are filled with junk entries.

An AI-generated image from DALL-E 2 created with the prompt “A painting by Grant Wood of an astronaut couple, american gothic style.” (credit: AI Pictures That Go Hard / X)

When OpenAI’s DALL-E 2 debuted on April 6, 2022, the idea that a computer could create relatively photorealistic images on demand based on just text descriptions caught a lot of people off guard. The launch began an innovative and tumultuous period in AI history, marked by a sense of wonder and a polarizing ethical debate that reverberates in the AI space to this day.

Last week, OpenAI turned off the ability for new customers to purchase generation credits for the web version of DALL-E 2, effectively killing it. From a technological point of view, it’s not too surprising that OpenAI recently began winding down support for the service. The 2-year-old image generation model was groundbreaking for its time, but it has since been surpassed by DALL-E 3’s higher level of detail, and OpenAI has recently begun rolling out DALL-E 3 editing capabilities.

But for a tight-knit group of artists and tech enthusiasts who were there at the start of DALL-E 2, the service’s sunset marks the bittersweet end of a period where AI technology briefly felt like a magical portal to boundless creativity. “The arrival of DALL-E 2 was truly mind-blowing,” illustrator Douglas Bonneville told Ars in an interview. “There was an exhilarating sense of unlimited freedom in those first days that we all suspected AI was going to unleash. It felt like a liberation from something into something else, but it was never clear exactly what.”
