cross-posted from: https://lemmy.sdf.org/post/31583546
Security research firm Tenable successfully used DeepSeek to create a keylogger that could hide an encrypted log file on disk, as well as a simple ransomware executable.
At its core, DeepSeek can create the basic structure for malware. However, it is not capable of doing so without additional prompt engineering and manual code editing for more advanced features. For instance, DeepSeek struggled with implementing process hiding. “We got the DLL injection code it had generated working, but it required lots of manual intervention,” Tenable writes in its report.
“Nonetheless, DeepSeek provides a useful compilation of techniques and search terms that can help someone with no prior experience in writing malicious code the ability to quickly familiarize themselves with the relevant concepts.”
“Based on this analysis, we believe that DeepSeek is likely to fuel further development of malicious AI-generated code by cybercriminals in the near future.”
OP’s post history is illuminating.
On this particular article, “DeepSeek can be used to create malware” is unsurprising. Know what else can be used to create malware? Microsoft Visual Studio. Too complicated? Forums on the internet. Or Sam Altman’s OpenAI, which OpenAI itself alleges was used to train DeepSeek.
This isn’t a breach. Nothing is getting breached here. A more honest title might be “I can use DeepSeek to help me code malware!” — but that is not surprising, novel, or unique to DeepSeek. See: OpenAI above. Also see: all the ways people have gotten OpenAI’s models to simply tell them how to commit crimes with the right phrasing.
“Nonetheless, DeepSeek provides a useful compilation of techniques and search terms that can help someone with no prior experience in writing malicious code the ability to quickly familiarize themselves with the relevant concepts.”
I have many negative things to say about AI (and China) but like I stated earlier, this is in no way unique to DeepSeek.
I’m firmly on the side of don’t censor the AI. If people want malware, they will figure out how to get it (most likely `git clone`). I’ve been playing with Dolphin R1 recently and it’s quite effective.