Security Concerns Raised Over China-Made AI Capable of Teaching Criminal Activities
A Chinese-developed AI model is under scrutiny after tests revealed it can generate content that could be exploited for criminal purposes.
The R1 model, released by Beijing-based start-up DeepSeek in January 2025, has been found to provide step-by-step guidance on illegal activities, including writing malware and constructing Molotov cocktails.
AI Model Readily Shares Malware Code
Cybersecurity specialist Takashi Yoshikawa, from Tokyo-based firm Mitsui Bussan Secure Directions, conducted tests to evaluate the potential for misuse.
When prompted with instructions designed to elicit unethical responses, the R1 model returned fully functional ransomware source code.
Although it included a disclaimer advising against malicious use, Yoshikawa highlighted a critical difference in how competing AI models respond.
He said,
“When I gave the same instructions to ChatGPT, it refused to answer.”
This suggests that DeepSeek’s R1 lacks comparable protective measures.
US Cybersecurity Team Confirms Easy Exploitation
Concerns deepened after Palo Alto Networks, a US-based cybersecurity company, ran its own investigation.
The team confirmed that users with no technical background could prompt the R1 model to generate malicious content, such as programs designed to steal login credentials.
The team pointed to the lack of guardrails in the AI’s design, reporting,
“The answers it gave were simple enough for anyone to follow and implement quickly.”
Their assessment suggests DeepSeek may have chosen to prioritise a rapid launch over embedding strong security protocols.
Privacy Risks Spark Growing Restrictions
Beyond the model’s misuse potential, DeepSeek is also facing questions about data privacy.
The company stores user data on servers located in China, raising concerns over access and control.
As a result, several countries — including South Korea, Australia, and Taiwan — have moved to restrict or ban the use of DeepSeek’s technology in official or corporate environments.
In Japan, similar caution is being adopted by municipalities and companies.
According to Professor Kazuhiro Taira of J.F. Oberlin University,
“When people use DeepSeek’s AI, they need to carefully consider not only its performance and cost but also safety and security.”
Experts Call For Industry-Wide Safeguards
With R1’s performance reportedly on par with models like ChatGPT and offered at a lower price, its rapid rise has attracted attention across global markets.
However, experts warn that such models should not be released without robust safeguards.
Yoshikawa said,
“If the number of AI models that are more likely to be misused increases, they could be used for crime. The entire industry should work to strengthen measures to prevent misuse of generative AI models.”
The case of DeepSeek’s R1 is now sparking broader discussions on the balance between AI accessibility, commercial speed, and ethical responsibility.