Cybersecurity researcher Jeremiah Fowler discovered an unsecured database belonging to GenNomis by AI-NOMIS, a South Korean AI company specializing in face-swapping and AI-generated explicit content.

In a report published by vpnMentor, Fowler said the database contained 93,485 images and JSON files totaling 47.8 GB, all exposed without password protection or encryption. A sample of the records revealed disturbing AI-generated explicit images, including portrayals of young individuals.
The database logged command prompts and image links but did not contain personally identifiable information (PII). However, its exposure raised concerns about the ethical and legal implications of AI-generated deepfake pornography.
Fowler reported the breach to GenNomis and AI-NOMIS, after which public access was restricted, though he received no response from either company. It remains unclear how long the database was exposed or whether third parties accessed the data before it was secured.
GenNomis enables users to create AI-generated images across 45 art styles, offering a marketplace for image sales. Many of the images stored in the database were explicit, raising serious concerns about privacy, consent, and the potential for misuse, such as extortion and reputational harm. The platform’s face-swapping capabilities further heightened concerns about non-consensual deepfake content.
Non-consensual AI-generated pornography is a growing issue, with estimates suggesting that 96 percent of deepfakes online are pornographic and 99 percent involve women without their consent. Fowler noted that before his disclosure, a “Face Swap” folder disappeared from the database, and shortly after, the websites of GenNomis and AI-NOMIS went offline, with the database deleted.
Among the exposed images were AI-generated explicit depictions of celebrities, including Ariana Grande, the Kardashians, Beyoncé, Michelle Obama, and Kristen Stewart, some of which portrayed them as children. In keeping with ethical research standards, Fowler refrained from downloading or capturing the illicit content. He has previously reported a similar case to the FBI, underscoring the need for law enforcement involvement in such breaches.
This incident highlights the growing risks of unregulated AI image generation and the potential for serious harm. It calls for immediate industry-wide safeguards to prevent the misuse of AI-generated deepfake technology and to ensure ethical AI development.
Law enforcement agencies worldwide are increasingly addressing the dangers posed by AI-generated content, particularly in cases involving child sexual abuse material (CSAM) and other criminal activities. In March 2025, Australian authorities arrested two individuals as part of Operation Cumberland, an international effort led by Denmark and Europol involving 18 other nations that resulted in the apprehension of 23 additional suspects. Similarly, a South Korean court sentenced a perpetrator of deepfake sex crimes to ten years in prison in October 2024, and a U.S. teacher was arrested in March 2025 for using AI to create fake pornographic videos of students.
Platforms like GenNomis claim to prohibit illegal content, but concerns remain about the enforcement of these policies, as prohibited AI-generated images were reportedly stored in its publicly exposed database. The psychological impact of AI-generated sexual exploitation is severe, with cases of sextortion leading to tragic suicides. Victims are encouraged to report such incidents to law enforcement to seek justice and the removal of harmful images.
In the U.S., the bipartisan “Take It Down Act” seeks to criminalize the distribution of non-consensual intimate images, including AI-generated content. Advances in law enforcement technology are making it increasingly possible to track down and prosecute offenders.
Baburajan Kizhakedath