Automating Image Abuse: Deepfake Bots on Telegram



This post is a summary of our latest intelligence report, written with Henry Ajder and Francesco Cavalli.


Today we go public with the findings of a new Sensity investigation into a newly uncovered deepfake ecosystem on the messaging platform Telegram. The focal point is an AI-powered bot that allows users to photo-realistically “strip naked” images of women, an evolution of the infamous DeepNude tool that emerged in 2019. We have collected the findings of our threat intelligence team into a new report that you can download here.

Compared to similar underground tools, the bot dramatically increases accessibility by providing a free and simple user interface that functions on smartphones as well as traditional computers. To “strip” an image, users simply upload a photo of a target to the bot and receive the processed image after a short generation process. Our investigation of this bot and its affiliated channels revealed several key findings, which are detailed in the full report.

These findings also point to broader threats posed by the bot. Specifically, individuals’ “stripped” images can be shared in private or public channels beyond Telegram as part of public shaming or extortion-based attacks.

Given the sensitive nature of this investigation, we have omitted key information to protect victims and to avoid publicizing identifying details about the bot and its surrounding ecosystem. All sensitive data discovered during the investigation detailed in this report has been disclosed to Telegram, VK, and relevant law enforcement authorities. We had received no response from Telegram or VK at the time of this report’s publication.

Update: Following our disclosure, the Italian Data Protection Authority has opened an investigation into Telegram and will evaluate measures to counter the spread of illicit deepfake software online.


For full access to the report, click here.