Victims warn: Hong Kong unprepared for AI porn menace

AI‑Generated Sexual Violence Shocked Hong Kong University

Three Women Exposed the Deepfake Scandal

In early January, a classmate of a Hong Kong law student discovered hundreds of AI‑made images of at least 20 women on a laptop. The discovery led three women—“C,” “B,” and “A”—to publicly accuse the perpetrator and ignite a debate about digital sexual harassment.

Victims’ Emotional Toll

  • C described the ordeal as a “wound that will leave a scar,” expressing shock that quickly turned to panic.
  • B spoke of betrayal, saying she “couldn’t trust the people around her” after learning a friend had allegedly created the images.
  • A called the university’s response, a warning letter and a forced apology, “ridiculous,” noting its failure to refer the case to a disciplinary committee.

University’s Initial Response

HKU issued a warning letter and requested an apology but declined further comment while the case remains under review. The Equal Opportunities Commission has since opened a related complaint, and the privacy watchdog has launched a criminal investigation.

Prior Cases Illustrate Wider Problem

Hong Kong’s Association Concerning Sexual Violence Against Women recorded 11 cases of deepfake pornography in 2024-25, warning that the numbers are rising and that victims may stay hidden if they do not know how to seek help.

  • Janice, a pseudonymous victim, was devastated a few years ago when fake obscene images of her were sent to her friends; she fears the damage “will never end.”
  • She described suicidal thoughts and sleeplessness, saying she was “afraid the whole internet would fill with pornographic images of me.”

Legal Landscape and Gender‑Based Violence

Approximately 90% of victims of AI-generated pornography are women, according to Susanne Choi of the Chinese University of Hong Kong. She argues that it constitutes gender-based sexual violence and calls on lawmakers and universities to “expand and revise existing laws and procedures” to address tech-facilitated harassment.

Hong Kong currently criminalises the distribution of intimate images, including AI-generated ones, but not their creation or possession. This gap complicated legal action against the unnamed perpetrator in the HKU scandal; only after the women went public did authorities open a criminal probe.

Impact on Students’ Dignity and Public Reception

Once an avid social media user, C temporarily stopped posting for fear that others would “screen-grab” her photos for unknown purposes. Comments urging her to “apologise to him” left her doubting herself.

Both B and A emphasised the importance of holding offenders accountable, arguing that the mere creation of AI porn undermines bodily autonomy, privacy, and dignity. They urged that a clear legal line be drawn at the creation stage.

Conclusion

The HKU deepfake scandal has spotlighted the rapidly growing threat of AI-generated pornography. It exposes gaps in Hong Kong’s legal framework, reveals the emotional toll on victims, and underscores the need for comprehensive measures to protect privacy and dignity in the digital age.