
Hi Turn community members!
Do your users sometimes message you asking for help with serious issues like sexual violence or suicidal thoughts, even if your service isn't set up to help them? If this concerns you and you're unsure how to address this important safeguarding challenge, the Safer Chatbots team would like to hear from you.
With the support of UNICEF, Girl Effect and Weni have developed an Artificial Intelligence (AI) model that improves the detection of, and response to, messages covering 16 safeguarding topics, including sexual violence, depression and abuse.
Put simply, if you implement this feature and a user sends a message like "He forced me to have sex", your service will be able to detect it rather than generating an error message or, worse, not responding at all. Detection can then trigger the (automated) compassionate, supportive response the user deserves and signpost them to appropriate services.
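If you're wondering what this looks like inside a chatbot, here is a minimal, purely illustrative Python sketch of that detect-then-respond flow. Everything in it is a placeholder we made up for this example: the topic labels, the reply text, and especially the keyword-matching `classify_message` stub, which stands in for the real Safer Chatbots NLP model (the actual model is a trained classifier, not a keyword list).

```python
# Illustrative sketch only. The keyword "classifier" below is a stand-in
# for the real Safer Chatbots NLP model; labels and replies are invented.

SUPPORT_REPLIES = {
    "sexual_violence": (
        "I'm so sorry this happened to you. It is not your fault. "
        "You can talk to a trained counsellor at <local helpline>."
    ),
    "suicidal_thoughts": (
        "Thank you for trusting us with this. You are not alone. "
        "Please consider contacting <local crisis line> right away."
    ),
}

# Placeholder "model": maps a few phrases to safeguarding topics.
KEYWORDS = {
    "forced me": "sexual_violence",
    "raped": "sexual_violence",
    "kill myself": "suicidal_thoughts",
    "end my life": "suicidal_thoughts",
}


def classify_message(text: str) -> str | None:
    """Return a safeguarding topic label, or None if nothing is detected."""
    lowered = text.lower()
    for phrase, topic in KEYWORDS.items():
        if phrase in lowered:
            return topic
    return None


def handle_incoming(text: str) -> str:
    """Safeguarding reply if a disclosure is detected, else the normal flow."""
    topic = classify_message(text)
    if topic is not None:
        # Compassionate, supportive response plus signposting.
        return SUPPORT_REPLIES[topic]
    return "Sorry, I didn't understand that. Type MENU to see what I can do."


if __name__ == "__main__":
    print(handle_incoming("He forced me to have sex"))
```

The key point the sketch illustrates is that the safeguarding check runs before your normal fallback, so a disclosure never dead-ends in an "I didn't understand" error.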
The model was created using training data (anonymised disclosure messages) collected from Girl Effect's Big Sis chatbot, a service that provides information and advice to South African teenage girls on sexual & reproductive health and relationships. But we want the Safer Chatbots model to become a 'digital global good', meaning it will be free for anyone to use and adapt.
To make this ambition a reality, we are looking for partners with access to training data that could be incorporated into the NLP model to improve its global accuracy. If you have access to anonymised messages from users seeking help with a serious issue related to mental health, gender-based violence or abuse, and are interested in contributing to, and benefiting from, the Safer Chatbots AI, please get in touch with isabelle.amazon@gmail.com, Karina.michel@girleffect.org and Yves.bastos@weni.ai
⚠️ Note that we are particularly interested in hearing from partners in South Africa, or other Southern African countries, where English (however colloquial) is the language your users communicate with you in.