How Can AI Be Used to Track and Predict Gender-Based Violence?
Gender-Based Violence (GBV) refers to harmful acts directed at individuals because of their gender. It includes domestic abuse, sexual harassment, stalking, coercive control, online abuse, and economic violence. GBV is not a series of isolated incidents; it is a structural issue that disproportionately affects women, girls, and gender-diverse people across society.
In the UK, the scale of the crisis is overwhelming. In the year ending March 2022, 1.6 million women experienced domestic abuse, and research by Women’s Aid shows that one in four women will face domestic abuse in their lifetime. A UN Women UK study found that 97% of women aged 18 to 24 have been sexually harassed in public spaces. These figures remind us why understanding GBV patterns in real time is essential. This is where Artificial Intelligence (AI) has transformative potential.
AI allows us to integrate and analyse vast amounts of data that previously sat in separate, disconnected systems. When police records, hospital admissions, helpline transcripts, social media posts, and community reporting apps such as SafeCity UK are analysed together, a fuller and more accurate picture of GBV begins to emerge. Through this integrated data, AI can reveal geographic hotspots, highlight disparities across demographics, and show where services are overstretched or where interventions are failing.
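To make the integration step concrete, here is a minimal sketch of how records from separate systems might be normalised into a single anonymised incident table and grouped into monthly counts per area. Every dataset, file, and column name here is hypothetical, not a description of any live system.

```python
import pandas as pd

# Illustrative, anonymised extracts from three separate systems; in practice
# these would come from secure exports, and every column name is hypothetical.
police = pd.DataFrame({"offence_date": ["2024-03-02", "2024-03-09"], "lsoa": ["E01000001", "E01000002"]})
helpline = pd.DataFrame({"call_date": ["2024-03-03"], "caller_area": ["E01000001"]})
app = pd.DataFrame({"reported_at": ["2024-03-04", "2024-03-04"], "location_code": ["E01000001", "E01000003"]})

# Normalise each source to a shared schema (date, area) and keep provenance,
# so disparities between what each system records stay visible.
frames = []
for name, df, date_col, area_col in [
    ("police", police, "offence_date", "lsoa"),
    ("helpline", helpline, "call_date", "caller_area"),
    ("app", app, "reported_at", "location_code"),
]:
    part = df.rename(columns={date_col: "date", area_col: "area"})[["date", "area"]]
    part["date"] = pd.to_datetime(part["date"])
    part["source"] = name
    frames.append(part)

incidents = pd.concat(frames, ignore_index=True)

# Monthly counts per area: a first cut at spotting hotspots and areas
# where reports are rising faster than services can respond.
hotspots = (
    incidents.groupby([incidents["date"].dt.to_period("M"), "area"])
    .size()
    .rename("incidents")
    .reset_index()
    .sort_values("incidents", ascending=False)
)
print(hotspots)
```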
Natural Language Processing (NLP) adds another powerful dimension to understanding and tackling GBV. By analysing language in real time, AI can detect signs of verbal, psychological, or coercive abuse across digital platforms. This includes identifying misogynistic or threatening language on social media, supporting HR teams when reviewing workplace communications, or helping law enforcement assess escalating digital threats. Organisations such as Glitch UK already use AI-driven tools to monitor online abuse against women in public life, providing timely alerts and enabling real-time reporting.
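As a rough illustration of the detection step, the sketch below runs messages through unitary/toxic-bert, one of several openly available toxicity classifiers on Hugging Face. The example messages and flagging threshold are placeholders; a production system would need domain-specific training, bias auditing, and human review at every stage.

```python
from transformers import pipeline

# unitary/toxic-bert is one openly available toxicity classifier; it returns
# a top label (e.g. "toxic", "threat") with a confidence score.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "Great panel today, your talk on housing policy was brilliant.",
    "Post about this again and you will regret it.",
]

THRESHOLD = 0.8  # arbitrary placeholder; set via evaluation in practice
for text in messages:
    result = classifier(text)[0]
    action = "FLAG for review" if result["score"] > THRESHOLD else "no action"
    print(f"{action} ({result['label']}, {result['score']:.2f}): {text}")
```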
AI-powered mapping takes this a step further by allowing incidents of harassment or abuse to be plotted anonymously on dynamic digital maps. These maps offer immediate insight into unsafe locations, help shape urban planning and police deployment strategies, and empower communities to raise awareness and advocate for change based on real, localised data.
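One common way to turn anonymised reports into mappable hotspots is density-based clustering. The sketch below applies scikit-learn's DBSCAN with a haversine metric to hypothetical coordinates; the radius and minimum-report settings are illustrative assumptions rather than recommendations.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical anonymised report coordinates (lat, lon), ideally jittered
# to protect reporters before they ever reach the mapping layer.
reports = np.array([
    [51.5074, -0.1278], [51.5080, -0.1269], [51.5069, -0.1290],  # central cluster
    [53.4808, -2.2426], [53.4812, -2.2431],                      # second cluster
    [55.9533, -3.1883],                                          # isolated report
])

# DBSCAN with the haversine metric groups nearby incidents; eps is a radius
# in radians (~500 m here), min_samples the reports needed to count as a
# hotspot rather than noise.
EARTH_RADIUS_KM = 6371.0
db = DBSCAN(eps=0.5 / EARTH_RADIUS_KM, min_samples=2, metric="haversine")
labels = db.fit_predict(np.radians(reports))

for cluster_id in sorted(set(labels) - {-1}):
    members = reports[labels == cluster_id]
    print(f"hotspot {cluster_id}: {len(members)} reports near {members.mean(axis=0).round(4)}")
```

The cluster centroids, not the individual reports, are what a dynamic map would plot, so no single reporter's location needs to be displayed.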
The ability of AI not only to track but also to predict risk marks one of its most promising and life-saving contributions. Machine learning models trained on historical police and court data can forecast the likelihood of repeat offending. In 2025, the OECD trialled this approach with UK domestic violence charities, using data such as behavioural patterns and restraining order breaches to identify individuals most likely to reoffend. AI can also analyse thousands of risk factors - such as substance misuse, financial instability, and prior criminal history - to identify dangerous combinations that human analysts might overlook. This enables social workers to deliver targeted interventions, allows survivors to receive more personalised safety planning, and helps support services allocate resources more effectively.
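To show the shape of such a model without claiming any particular deployment, here is a sketch of a gradient-boosted classifier trained on synthetic data with a few illustrative risk-factor features. The features, synthetic labels, and threshold are all hypothetical; the key design point is that the model outputs a probability used to rank cases for human review, never to trigger automatic action.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: real work would use audited historical case
# records, and the feature names here are illustrative only.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 2, n),   # prior_breach: restraining order breached before
    rng.integers(0, 10, n),  # prior_incidents: earlier recorded incidents
    rng.integers(0, 2, n),   # substance_misuse flag
    rng.integers(0, 2, n),   # financial_instability flag
])
# Synthetic label loosely tied to the features, just so the sketch runs.
logit = -2.0 + 1.5 * X[:, 0] + 0.3 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Probabilities, not verdicts: scores like these should only ever rank
# cases for human review and safety planning.
risk = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, risk > 0.5))
```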
Perhaps most urgently, recent research has shown that AI can help predict severe or fatal outcomes. Models trained on emergency call transcripts, restraining orders, and historical case data have been able to identify cases at high risk of escalating to femicide. A 2024 study reported by PsyPost found that AI could accurately flag cases most likely to result in fatal violence, offering a chance for earlier intervention where timing can be the difference between life and death.
As abuse increasingly shifts into digital spaces, AI is becoming indispensable in protecting people online. Algorithms now detect harassment, hate speech, doxxing, and threats on platforms like Instagram, TikTok, and X. AI can automatically block or flag harmful content, including coercive messages, unwanted sexual content, deepfakes, and revenge porn. In 2023, one in ten women in the UK reported receiving unwanted sexually explicit content online - an alarming figure that highlights how essential AI-driven digital protections have become.
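Blocking known intimate-image abuse generally relies on hash matching rather than re-analysing content at upload time. As an assumption-laden sketch in the spirit of schemes such as StopNCII (which use their own hashing algorithms), the example below uses the open imagehash library; the file names and distance threshold are placeholders.

```python
import imagehash
from PIL import Image

# Survivors' images are reduced to perceptual hashes so the images themselves
# are never shared; platforms block uploads whose hashes fall within a small
# Hamming distance of a hash on the blocklist. File names are placeholders.
BLOCKLIST = {imagehash.phash(Image.open("reported_image.jpg"))}
MAX_DISTANCE = 8  # tolerance for re-encoded or lightly cropped copies

def should_block(upload_path: str) -> bool:
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= MAX_DISTANCE for known in BLOCKLIST)

print(should_block("new_upload.jpg"))
```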
AI is also emerging as a vital support tool in healthcare and frontline services. NHS trials led by the Faculty of Clinical Informatics are exploring how AI can recognise injury patterns associated with domestic abuse and flag repeated A&E visits that may indicate hidden harm. Meanwhile, shelters and support organisations are piloting AI-powered chatbots that provide trauma-informed responses, recognise distress through tone and sentiment, and guide survivors to legal, housing, or emergency support at any hour of the day. By easing pressure on overwhelmed frontline teams, AI helps ensure survivors can access immediate help even when human support isn’t available.
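As a minimal sketch of tone-based triage, the example below combines an off-the-shelf sentiment model with a small keyword list to decide when a chatbot should surface emergency options. The terms, threshold, and responses are illustrative assumptions; a real service would need trauma-informed design, careful evaluation, and human oversight.

```python
from transformers import pipeline

# Default sentiment model as a stand-in for the tone-and-distress detection
# described above; the crisis terms and threshold are illustrative only.
sentiment = pipeline("sentiment-analysis")
CRISIS_TERMS = {"unsafe", "scared", "hurt me", "tonight"}

def triage(message: str) -> str:
    score = sentiment(message)[0]
    urgent = any(term in message.lower() for term in CRISIS_TERMS)
    if urgent or (score["label"] == "NEGATIVE" and score["score"] > 0.95):
        return "escalate: offer emergency contacts and a human advocate"
    return "continue: signpost legal, housing, and support resources"

print(triage("I'm scared to go home tonight, he said he'd hurt me."))
```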
Workplaces - where harassment often goes unreported - also stand to benefit from AI’s analytical power. By monitoring internal communication systems for coercive or discriminatory language, AI can reveal patterns that would otherwise remain hidden. It can log repeated unwanted contact without requiring a victim to come forward, and it provides anonymised insights to HR teams to drive earlier, safer, and more effective interventions.
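One privacy-preserving pattern for this is sketched below: identities are pseudonymised before counting, so repeated one-way contact can be surfaced to HR without exposing names unless a formal case is opened. The identifiers, salt handling, and threshold are all hypothetical.

```python
import hashlib
from collections import Counter

# Identities are hashed with a salt before counting, so reviewers see
# patterns ("someone is repeatedly messaging a colleague who never
# responds") rather than names. Salt rotation and storage would need
# proper key management in practice.
def pseudonymise(user_id: str, salt: str = "rotate-me") -> str:
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

# Hypothetical message metadata: (sender, recipient, recipient_replied)
events = [
    ("alice", "dana", False),
    ("alice", "dana", False),
    ("alice", "dana", False),
    ("bob", "carol", True),
]

unanswered = Counter(
    (pseudonymise(s), pseudonymise(r)) for s, r, replied in events if not replied
)

THRESHOLD = 3  # tunable: repeated one-way contact worth a closer look
for (sender, recipient), count in unanswered.items():
    if count >= THRESHOLD:
        print(f"review: {sender} -> {recipient}, {count} unanswered messages")
```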
While AI presents enormous opportunities, it is not a silver bullet. Its effectiveness depends on how ethically and thoughtfully it is designed and deployed. For AI to genuinely support prevention efforts, it must centre the experiences of survivors, protect privacy, and operate with transparency and accountability. It must also be built with equity at its core, reflecting the needs and realities of marginalised communities who face heightened risks of violence.
At Dawn Intelligence, we believe technology should play a proactive role in building safer futures. AI should not only respond to harm after it happens - it should help prevent it. By blending real-time data, predictive insights, and community-driven intelligence, we are working towards a future where gender-based violence is tracked, understood, and ultimately reduced before it has the chance to escalate.