DeepSeek’s R1 reportedly ‘more vulnerable’ to jailbreaking than other AI models

by Bella Baker


The latest model from DeepSeek, the Chinese AI company that’s shaken up Silicon Valley and Wall Street, can be manipulated to produce harmful content such as plans for a bioweapon attack and a campaign to promote self-harm among teens, according to The Wall Street Journal.

Sam Rubin, senior vice president at Unit 42, Palo Alto Networks’ threat intelligence and incident response division, told the Journal that DeepSeek is “more vulnerable to jailbreaking [i.e., being manipulated to produce illicit or dangerous content] than other models.”

The Journal also tested DeepSeek’s R1 model itself. Although the model appeared to have basic safeguards, the Journal said it successfully convinced DeepSeek to design a social media campaign that, in the chatbot’s words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”

The chatbot was also reportedly convinced to provide instructions for a bioweapon attack, to write a pro-Hitler manifesto, and to write a phishing email with malware code. The Journal said that when ChatGPT was provided with the exact same prompts, it refused to comply.

It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square or Taiwanese autonomy. And Anthropic CEO Dario Amodei said recently that DeepSeek performed “the worst” on a bioweapons safety test.
