Is AI fuelling gender-based violence?
Jessica Ringrose and Giselle Woodley argue that tackling AI abuse requires both education reform and platform accountability.

Image by Ekō | Wikimedia Commons
The use of Grok, the AI tool built into X (formerly Twitter), to create and disseminate non-consensual sexual AI-generated imagery has intensified existing concerns and drawn new attention to the growing risks of such content. Deepfake technology and nudify apps have been in use for some time now. Deepfakes are digitally altered media created to depict the faces, bodies and/or voices of real people, and, when sexualised and non-consensual, they can be considered a form of tech-facilitated gender-based violence. Tech-facilitated gender-based violence (TFGBV) is online violence perpetrated through the Internet and/or mobile technology by a person or group, causing suffering to others on the basis of their gender or sexual identity.
TFGBV has real-life effects: technology-facilitated behaviours sit under the broader conceptual umbrella of violence. But what are the implications of AI-facilitated harm? And how can we adapt to regulate and prevent it appropriately?
Research suggests the vast majority of sexualised deepfakes depict women and girls. Indeed, during the scandal on X it was primarily women and girls who were targeted and harassed by the onslaught of non-consensual sexual imagery created by X users. The Grok scandal has made visible the enormous scale of 'deepfake porn', with celebrities such as Paris Hilton noting the huge volume of material and its abusive effect. Hilton also stated that the phenomenon is not new and, as a victim-survivor herself, spoke of the emotional toll such abuse takes on individuals. Meanwhile, the speed and ease with which this imagery can be created and shared are evolving rapidly, leaving educators and communities grappling with how to respond to these technologies.
Ringrose, one of the authors, has conducted research in the UK with educators and students, finding a general lack of understanding of AI and deepfake technologies amongst teachers. For instance, some educators vastly underestimate the technology that young people can easily access. One educator, a college principal in the UK, noted: “[When] we get into the world of deepfakes and such like, I think we slightly overestimate actually the technological ability of young people.”
Other teachers recognise that deepfakes are part of a wider digital ecosystem of misogyny that young people are navigating, and that these tools are easy to access. Yet educators feel completely ill-equipped to address these issues, given the lack of guidance from schools. Another educator, from a comprehensive school in the UK, stated: “I felt at times very confined by what I could speak about in terms of AI and deepfakes, and there was a definite tension with what the teachers wanted to explore and what leadership wanted to explore.”
Indeed, previous research indicates that educators struggle to tackle specific topics that young people need, usually in relation to sexuality education or digital life. These struggles can stem from institutional bureaucracy, from not feeling prepared, or from fear of backlash from the community or parents.
In Ringrose and Horeck’s research, students relayed experiences of deepfakes, yet only a few had received any education about deepfake technologies; almost all classroom discussion of AI focused on using ChatGPT for schoolwork. One student had discussed such technologies in a PSHE lesson, through the example of ‘Love Island’. However, most argued that any education they had received around AI-generated content and deepfakes was too little and came too late. A student at an independent school in the UK argued: “The best way would be education. If this generation is going to be so bad at it (consent), then at least let 4-year-olds be better educated on this, like from the start. Like at younger ages, we need more, especially on consent and online safety.”
Educational approaches have long been advocated as best practice in this area. The same student also noted: “I think also I think we need more monitoring on most social media apps,” meaning she felt that social media companies, rather than young people, should be under surveillance. This line of argument shows that young people are aware of the algorithms targeting them and believe that better tech design and regulation are needed.
Young people thus suggest a two-pronged approach: better education alongside sustained pressure on social media companies. This fits with Fix Our Feeds, a campaign by the Australian organisation Teach Us Consent that targets the algorithmic control of social media platforms as something that can be changed with political will. However, such approaches do not remove the responsibility for high-quality education that addresses consent and technology from an early age.
In terms of AI, we need to develop AI literacy within media literacy or wider digital sexual literacy. However, educators and schools require support and specific training to keep up with such rapidly evolving technologies. External providers and specialists could also be funded to offer specialised support to schools. Scholars have noted the need to apply feminist intersectional framings of TFGBV to school contexts, and further research is needed on how to account for marginalised identities who may be at higher risk.
Additionally, campaigners argue that we need to shift our terminology, moving from terms such as ‘deepfake porn’ towards ‘image-based sexualised abuse forged via AI’. Deepfake technologies and AI-facilitated abuse repeat the dynamics of existing forms of image-based abuse: the leaking of images or ‘revenge porn’, sextortion and other online threats. Perpetrators of these harms show disrespect and a lack of empathy. Such actions are rooted in a sense of entitlement and illustrate a deep disregard for individual bodies, wellbeing and consent.
As such, in addition to AI literacy that tackles these issues, primary prevention strategies are crucial: Relationships and Sexuality Education (RSE) covering topics such as building empathy and compassion, communication in relationships, and a culture of care. Allyship and postdigital bystanding, which account for the online and offline connections in young people's everyday lives at school, have also proven effective. These preventative educational measures, alongside regulation that acknowledges the evolving rights and capacities of young people, form part of the multi-level approach needed to tackle these increasing threats.
Jessica Ringrose has explored youth experiences of non-consensual 'sexting' and intimate image harms through school-based research with young people for nearly two decades. She is currently exploring education to prevent emergent harms of sexualised deepfakes with Tanya Horeck and Everyone’s Invited, a charity dedicated to eradicating rape culture in schools in the UK.
Giselle Woodley is a sexologist and works as a Lecturer and Postdoctoral Research Fellow at the School of Arts and Humanities at Edith Cowan University in Australia. She explores issues pertaining to sexuality, sexual violence, pornography, sexuality education, intimate communications, AI, image-based abuse, digital censorship and young people.