AI Deepfake Porn Scandal At HKU Sparks Public Outrage And Demands For Tougher Action
A male law student at the University of Hong Kong (HKU) is at the centre of a growing controversy after allegedly generating over 700 sexually explicit AI images of around 30 women, including classmates, teachers, and even childhood acquaintances.
The case, which surfaced publicly on 12 July, has triggered a backlash over what critics call a disturbingly lenient university response.
AI-Generated Porn Created From Social Media Photos
The student, referred to as “X” in victim accounts, admitted to using online AI tools to create fake pornographic images by manipulating screenshots taken from victims’ social media profiles.
The explicit content was discovered in February by a friend who accessed X’s laptop and stumbled upon folders sorted by the victims’ names — some of whom were close friends, while others had barely spoken to him.
Three victims later released a detailed statement anonymously through Instagram under the handle “hku.nfolderincident”, revealing that none of them had given consent for their images to be used and labelling X’s actions as “sexual violence.”
Apology Letter And Warning Raise Serious Concerns
After being confronted, X apologised in person to only two victims and later issued a brief written apology of around 60 words.
Victims described it as insincere and inadequate.
Despite requests to escalate the matter to HKU’s disciplinary committee, the university issued a warning letter and a verbal reprimand instead.
One staff member reportedly told victims during a meeting in March that X’s actions were unlikely to constitute a criminal offence, citing legal advice.
The university acknowledged the incident in a public statement, stating it had “adhered to internal rules and relevant laws” and promised further review.
It said it took steps like class rearrangements “with the consideration of taking care of their well-being,” though victims argue these were delayed and ineffective.
Victims Say They Were Left To Sit Next To Offender
Several victims were forced to attend classes with X multiple times after raising their complaints.
One recounted having to sit just a metre away from him in a small classroom, triggering extreme emotional discomfort.
Others said they remained in shared project groups with him until they personally requested reassignment.
Their statement said:
“There was one time that, due to the small size of the classroom, one victim had to sit beside X … it was so hard to stay focused on the learning.”
The trio have called for “more permanent and substantial” consequences, urging HKU to explore disciplinary action through formal channels.
They also criticised the lack of mental health support offered in the aftermath of the incident.
Legal Loopholes Leave Victims Without Recourse
Hong Kong’s existing laws criminalise voyeurism, the unauthorised sharing of intimate images, and the threat to distribute such material.
However, they do not explicitly outlaw the creation of AI-generated explicit images without consent — a key reason the victims have chosen not to involve police at this stage.
Doris Chong Tsz-wai, executive director of women’s rights group Rain Lily, said:
“The images were fabricated and AI-generated, but their impact on victims is real and no different from that caused by genuine images.”
She noted that victims often know their perpetrators personally, which deepens the psychological harm.
Legislative Council member Doreen Kong Yuk-foon also called for urgent legal reform:
“It is hugely offensive, especially to women, even if they do not distribute or publish these images. It causes huge mental distress and disturbance.”
Hong Kong Government Signals Possible Legislative Review
The Innovation, Technology and Industry Bureau confirmed it is monitoring AI developments and may review legislation if required.
It cited a section of the Crimes Ordinance that applies to altered intimate images, but whether it covers AI-generated images made for private use remains unclear.
The Law Reform Commission has also formed a subcommittee to study cyber-enabled crimes, signalling that the conversation around regulating AI abuse is gaining urgency.
Meanwhile, South Korea has already tightened its laws, making the creation and possession of deepfake porn punishable by prison terms.
Australia and the UK are also moving to criminalise such acts.
This Is More Than A Privacy Issue — It’s About Power And Accountability
The HKU case highlights how the abuse of AI to create fake sexual imagery is not a distant ethical dilemma — it’s already a lived trauma for many.
The fact that someone could generate hundreds of sexually explicit images, evade legal consequences, and return to class with a warning letter reveals a disturbing gap in both institutional and legal protections.
The message it sends is dangerous: that digital abuse may not be “real” enough to warrant real consequences.
That needs to change — fast.