The plot could easily have been from a Netflix dystopian thriller.

“The Russian Guy” approaches ordinary women and girls leading ordinary lives, one walking her child to school, another stocking supermarket shelves, and a young girl walking by.

“Hello. What’s your name? You look beautiful. Come with me.”

He has sex with them in his rental apartment. The entire encounter is captured on video through his Ray-Ban Meta smart glasses. He posts the videos on his private Telegram channel and on porn sites for paying followers, while shorter clips have gone viral on TikTok and Instagram. Rumors swirl that he is HIV positive. Evidence indicates that hundreds of women and young girls across Kenya, Ghana, and South Africa have fallen prey.

The spectrum of technology-facilitated gender-based violence has expanded yet again, this time into an even more grotesque form.

First, it was humans operating on the dark web. As the range of tech tools expanded, new ideas emerged on how to amplify violations against women and girls. Grok, the AI chatbot developed by Elon Musk’s xAI, enabled image-editing that creates non-consensual sexualized images of women. With “the Russian Guy,” otherwise known as “Yaytseslav,” AI-powered wearables in the form of eyewear with hidden cameras have raised the stakes. His prey were unaware that the blinking light at the corner of the frames was a recording camera paired to a smartphone.

The gender AI divide, the inequality between women and men in access to, use of, development of, and representation in AI technologies, is widest for less tech-savvy women. It implies that the room for AI-amplified violations can only expand.

Yaytseslav’s case is also a cautionary tale on AI ethics and regulation. Despite public outrage in Ghana and Kenya and calls for his arrest, he remains active, posting content on social media.

The Cyber Security Authority in Ghana invoked Section 67 of the Cybersecurity Act, which criminalizes the non-consensual recording, publication, or distribution of intimate images or videos. By then, he had travelled to Kenya, where the Computer Misuse and Cybercrimes Act 2018 provides similar grounds.

Legislation is of little value, however, when perpetrators can move freely between jurisdictions. The scale of Yaytseslav’s actions might have been reduced if comprehensive, regional and/or international AI regulatory frameworks had been in place to serve as a deterrent.

That these events are unfolding in Kenya’s “Silicon Savannah,” one of Africa’s frontier tech nations and one of the region’s leading AI policymakers, is alarming.

Regulation must address AI-amplified gendered violence, exclusion, exploitation, and misogyny, not only the often-touted economic benefits of AI. Rather than a patchwork of dispersed national laws, standards must be harmonized across the region and enforced by a powerful regulator able to stand up to the Global North’s big tech firms on all facets of technology.

Photo: Ground Pictures and People Images composition