A Safeguarding Hub – 12-minute briefing
In December last year, the BBC Technology news department broke a story titled “Child advice chatbots fail to spot sexual abuse”. The feature described how two mental health chatbot apps had required updates after failing to deal adequately with reports of child sexual abuse. In simple terms, chatbots are computer programs that simulate interactive conversation with human users. The two chatbots named in the BBC report were Woebot and Wysa.
The makers of Woebot describe their chatbot as an automated conversational agent which helps the user monitor their mood and learn about themselves. Essentially, what they offer is Cognitive Behaviour Therapy (CBT) through automated chat. Users can engage in daily therapeutic conversations about their mental health and wellbeing, covering topics such as depression, grief, addiction and relationship issues. Woebot provides self-help tools and videos, depending on what the user is chatting about. It is currently a free service available on Facebook Messenger, iPhones, iPads and Android devices.
Wysa is another CBT-based chatbot, described as an “emotionally intelligent virtual coach” that responds to the emotions of its users. In effect it is a wellbeing life coach, offering tips on meditation, breathing, yoga, motivational interviewing and more. Wysa is also free to use, but if a user chooses to upgrade and begin messaging a Wysa Coach, this costs a monthly fee of $29.99. According to the BBC, Wysa had previously been recommended by one NHS mental health trust as an effective tool for helping young people. It is also available on Facebook Messenger, iOS 8.0 (or later) and Android devices.
It is not known what prompted the BBC to test both chatbots, but the results were not favourable to either application. In a series of tests, both chatbots failed to signpost apparent and potential victims of sexual abuse to the appropriate support services and/or emergency help. Some of the responses were completely inappropriate, not only failing to deal with the issue the user was describing, but also offering generic, ‘robotic’ answers. The BBC also tested the chatbots on how they dealt with eating disorders and drug use. Neither performed well on these issues either, and the inappropriate responses detailed in the BBC article can be described as tactless and misleading. As the BBC point out, the automated systems in both chatbots failed to identify and deal appropriately with what were potentially dangerous situations.
Both the organisations behind the chatbots responded to the BBC exposé. Wysa stated that they would introduce an update to improve the app’s responses, whilst Woebot’s makers introduced an 18+ age limit. However, the BBC article quoted the Children’s Commissioner for England, Anne Longfield, who was less than impressed. She said, “they should be able to recognise and flag for human intervention, a clear breach of law or safeguarding of children.”
Chatbots have been around for many years, but advances in artificial intelligence mean that they are now becoming more prevalent in all areas of life, including business, retail, utility services, health, customer support, finance and education.
The online investment and finance website Investopedia.com describes a chatbot as “a computer program that simulates human conversation through voice commands or text chats or both”. In a nutshell, they are computer programs that provide a chat interface designed to hold either an auditory or textual conversation with an online user. Most are designed to replicate how humans talk to each other in conversation. Many modern chatbots are accessed through virtual assistants such as Alexa, Google Assistant and Siri, or via messaging apps such as Facebook Messenger, WeChat, WhatsApp, LiveChat and Line. Other names for a chatbot include Smartbot, Talkbot and Chatterbot.
There are two types of chatbot. Rule-based chatbots are limited, having been programmed to respond only to very specific commands. Then there are chatbots that use artificial intelligence and machine learning; these are designed to understand language and enter into open conversation with the user.
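To illustrate the difference, the rule-based type can be sketched in a few lines of Python. This is a minimal, hypothetical example – the keywords and replies below are invented for illustration and are not taken from any real product:

```python
# Minimal sketch of a rule-based chatbot: it matches the user's message
# against fixed keyword rules and falls back to a default reply when no
# rule matches. Keywords and responses are purely illustrative.
RULES = {
    "hello": "Hi there! How are you feeling today?",
    "sad": "I'm sorry to hear that. Would you like to talk about it?",
    "bye": "Goodbye, take care!",
}

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # No rule matched - a rule-based bot has no understanding beyond its rules
    return "I'm not sure I understand. Could you rephrase that?"

print(respond("Hello!"))            # matches the "hello" rule
print(respond("What is a chatbot?"))  # nothing matches, fallback reply
```

The limitation is obvious from the fallback line: anything outside the pre-programmed rules defeats the bot entirely, which is why AI-based chatbots attempt to interpret language instead of matching keywords.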
A study by the global research and advisory firm Gartner suggests that chatbots will power 25% of all customer service interactions worldwide by 2020. In China, just one chatbot has 20 million users. Chatbots are ideal for people who use messaging apps, and the more people turn away from social media towards messaging services, the more chatbots are likely to proliferate. They are also relevant to the wide range of companies that use chat windows on their websites to interact with customers. The types of companies that use chatbots are diverse – airlines, insurance companies, banks, holiday companies, retailers, medicine and health care, restaurants; the list is substantial.
Woebot and Wysa – just a glitch?
Were the issues with Woebot and Wysa just a glitch, or is there no place for chatbots in safeguarding? Inevitably there has been some apprehension and criticism of chatbots. Some warn that technology is always susceptible to abuse, and fake chatbots are becoming widespread. Some of the concerns are:
- identity theft – sharing any personal information online leaves anyone vulnerable to identity theft, particularly if you are pouring your heart out online, telling a chatbot your intimate secrets. Can the chatbot be hacked? Probably!
- credit card fraud – fake websites offering false chatbot services which request your financial details.
- malicious chatbots that direct the user to take an action that may result in harm, e.g. downloading malware.
- chatbots that are hacked and used to spread malice, messages of hatred or disseminate false information.
- cyberbullying via malicious chatbots.
This is obviously a cause for concern for chatbots that have some sort of safeguarding responsibility to their users. However, many of these issues relate to fake chatbots and won’t occur if service users access credible and secure chatbots. That still leaves the problem of chatbots like Wysa and Woebot which fail to identify and deal adequately with scenarios likely to affect vulnerable people. They have a certain responsibility to get it right. Wysa actually acknowledge that their platform is “great for those 4 am conversations” when someone is in “a dark place and have feelings we don’t share with anyone else”. The issue we have is that at 4am, when a person is in a dark place, that is exactly the time you need to raise your game and get the advice spot on. Does a chatbot have the capability to do that?
In fairness to these two particular chatbots, both applications do make it clear that they are not designed to deal with people in a crisis or an emergency. The BBC investigation found that both bots did signpost to emergency services and helplines when dealing with users who indicated self-harm and suicidal thoughts. However, as we know, they failed to deal with other major issues.
When asked to comment on the Wysa and Woebot exposé, the Children’s Commissioner didn’t completely rule out chatbots as a safeguarding tool. What she actually said was that chatbots “were not currently fit for purpose” in relation to protecting young people. However, she is clearly open to their use in the future, given that her office currently supports the development of a chatbot by various responsible safeguarding organisations. That, for us, is the crux of the issue: ill-thought-out, these bots may be dangerous, but designed well and with the right handling, there are real possibilities for them to be used as a safeguarding tool.
Chatbots – a place in safeguarding
When describing their product, Woebot point out that more than half of the world’s population still don’t have access to basic health care, and that this is an area where chatbots can fill a gap. They have a point. Using technology enables us to reach a wider, as yet untapped audience. The same can be said of safeguarding. Police and social services are struggling to provide a quality service to the people who need our help, purely because of the volume of demand, not helped by cuts in budgets. Our colleagues in the voluntary sector admirably support those efforts and plug the safeguarding gaps, but they too are reliant on funding. If we are in any doubt about the huge challenges we face, then we need look no further than the Children’s Commissioner’s Annual Vulnerability Report (2018):
- 710,000 children and young people aged 0-17 in England receiving statutory support
- 1 million children and young people with complex family needs
- 570,000 children and young people in families receiving recognised support for complex family level need
- 6 million children in families with complex needs for which there is no nationally established, recognised form of support
Within these figures are huge numbers of children and vulnerable people in need of safeguarding services. It would be remiss of us not to utilise advances in technology to help them, including the use of well-thought-out and responsible chatbots. Currently a number of voluntary safeguarding organisations are looking to introduce bots into their work. One such project is the ‘Is It OK’ chatbot, being developed jointly by Childline and Missing People’s Runaway Helpline, and supported by the NSPCC, BBC Children in Need, the police and the Children’s Commissioner.
The ethos behind Is It OK is that young people at risk of abuse and exploitation often do not seek help, for a variety of reasons. The charities aim to use chatbot technology to connect with young people who would not ordinarily access their services. The difference between this project and Wysa/Woebot is that a young person will be engaged in conversation with the bot via simple questions and answers. The chatbot will then decide which service or organisation is the most appropriate for the young person to be signposted to. This could be a human Childline counsellor, a trained support worker at the Missing People’s Runaway Helpline or, if required, the emergency services. The charities have acknowledged that the chatbot needs to be accessible through the favoured medium of young people: social media platforms. They have also ensured that, in its development and delivery, they have consulted and obtained the views of young people. The pilot is due to be rolled out from March onwards.
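The question-and-answer signposting approach can be sketched in Python as a simple triage. To be clear, everything here is an assumption for illustration – the questions, risk weights and routing thresholds are invented and are not details of the actual Is It OK pilot:

```python
# Illustrative sketch of a signposting triage: yes/no answers to simple
# questions accumulate a risk score, and the total decides which service
# the young person is pointed towards. All questions, weights and
# thresholds below are hypothetical, not taken from the real pilot.
QUESTIONS = [
    ("Are you in immediate danger right now?", 3),
    ("Has someone asked you to keep a worrying secret?", 2),
    ("Are you thinking about running away?", 1),
]

def signpost(answers: list) -> str:
    """Map yes/no answers (in question order) to the most appropriate service."""
    score = sum(weight for (_, weight), yes in zip(QUESTIONS, answers) if yes)
    if score >= 3:
        return "emergency services"
    if score >= 2:
        return "Childline counsellor"
    if score >= 1:
        return "Runaway Helpline support worker"
    return "self-help resources"

# A 'yes' to the immediate-danger question routes straight to emergency help
print(signpost([True, False, False]))
```

The design point this illustrates is the one that caught Wysa and Woebot out: the routing logic, however simple, must guarantee that high-risk answers always escalate to a human or to emergency services rather than falling through to a generic reply.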
Another interesting project involves the male suicide prevention charity Campaign Against Living Miserably (CALM), which in October last year was awarded £300,000 to assist its proposal to develop a chatbot. The aim is to prioritise calls and increase the number of enquiries the charity can handle. CALM were awarded the money after being joint winners of the WCIT (Worshipful Company of Information Technologists) Charity IT Award 2018. The other winner was Missing People, who were also awarded £300,000 to develop their chatbot. WCIT have also pledged to help more charities develop artificial intelligence tools.
There are also plans by a company called Reason Digital to develop a chatbot that provides advice to children who may be being, or have been, groomed and sexually abused. Although we don’t know too much about this project, we understand that it will give young people advice on a safe place to go and on getting support from trained support workers. This chatbot is being developed with Sara Rowbotham, the former sexual health worker who helped expose the Rochdale child abuse scandal.
So, there are some exciting and innovative ideas for using chatbots as a tool to reach and support more people requiring help. Developed and managed well, they may have real potential in safeguarding.
Thanks for reading