
Catch up with the online safeguarding news stories for September, in easy-to-read, bitesize news segments.

September – ‘Scroll Free September’: Organised by the Royal Society for Public Health (RSPH), the campaign invited the public to take a break from all personal social media accounts for 30 days. The RSPH said that going scroll free for a month would give people a chance to reflect on their social media use. They pointed to their own 2017 report, #StatusOfMind, which highlighted “concerns about the potential impact of social media on mental health and wellbeing”. The report drew attention to the adverse effects of social media, such as negative body image, depression, cyberbullying, poor sleep and FOMO (fear of missing out).

3rd September – Google releases AI technology to combat child abuse images: The internet giant announced that it would use new artificial intelligence (AI) technology to help identify online child sexual abuse images and material. Currently, it is only possible to identify images that have already been classified by moderators as child sexual abuse material (CSAM). This new tool will be able to recognise and identify new content posted by online offenders. It will also reduce human reviewers’ exposure to the content. Undoubtedly, this announcement by Google follows growing pressure from the UK government for internet and social media companies to do more to combat online child sexual abuse.

6th September – New safety features to combat ‘app addiction’: Succumbing to public and government pressure, Facebook, Instagram and Snapchat announced that they were in the process of rolling out new safety features to help prevent app addiction. The tools will allow users to monitor and restrict the time they spend using the apps. These include reminders to ‘take a break’, the ability to set time limits and the option to monitor app usage.

6th September – A ‘well-being’ guide from Instagram: Connected to the above story, this was obviously the day that someone at Instagram found the company’s moral compass hidden away in a broom cupboard. The social media giant also launched a guide for parents on how to talk to their children about their online activity. Topics include tips for parents on discussing how their children manage their time on the app, privacy settings, how to filter or block offensive comments, and how to deal with bullying.

14th September – Instagram under fire for inappropriate use of hashtags: Staying with Instagram, but this time focussing on the negative side of the platform. The company came in for criticism over certain potentially harmful hashtags, after failing to ensure that those hashtags carried appropriate pop-up health warnings. Instagram’s own safeguarding policy states that pop-ups containing warnings and advice should accompany any search terms related to sensitive issues. However, Sky News found several hashtags relating to “unhealthy and dangerous attitudes towards food and body image” with no such warnings. In fairness to Instagram, they added the warnings as soon as they were notified, but it just highlights that their algorithms for detecting inappropriate content still fall woefully short.

21st September – Oh dear! Not Instagram again: An exposé by the business news website Business Insider cast the spotlight on Instagram’s new TV service, IGTV, the network’s rival to YouTube. Business Insider spent nearly three weeks monitoring the service and found that algorithms within IGTV’s content-recommendation machine suggested “graphic and disturbing videos, whose content appeared to include child exploitation and genital mutilation”. Examples cited by Business Insider were:

• sexually suggestive footage of young girls (potentially 11 or 12 years old)
• a graphic video of a penis being mutilated
• a baby in distress lying on a floor and being touched by a monkey, whilst adults looked on and filmed the distressing scene
• a group of men deceiving a sex worker into thinking she was going to be arrested
• a video of a woman pulling something horrible and bloody out of her nose

Some of these recommendations came via the ‘For You’ section on a child account set up by Business Insider, which had no previous search activity on Instagram. In other words, these recommendations can pop up randomly for anyone, including children. Some of the comments attached to the videos were clearly predatory in nature.

Instagram’s reaction was less than encouraging. When Business Insider reported these videos via the network’s reporting tool, it took the network five days to remove them. They only did this when Business Insider contacted their press office and questioned why the content was still available online. At this stage, two of the reported videos had over 1 million views. The video of the baby was not removed as Instagram deemed that it did not breach their community guidelines.

21st September – More than 1,100 cases of child abuse linked to Kik over a 5-year period: Following a Freedom of Information request to UK police forces, the BBC identified that Kik had featured in more than 1,100 child sexual exploitation cases within the last five years. The BBC article also highlighted the difficulties that police forces face when trying to get evidence and information from the teenage chat app, which has over 300 million users worldwide. One police officer described the process as a “bureaucratic nightmare” and highlighted that the delay in obtaining evidence from the networking platform undoubtedly led to other children being groomed and exploited. Just under half of the police forces contacted under the FoI request did not respond, so the true figure is likely to be considerably higher.

21st September – Data leak at Twitter: Twitter revealed that it had notified users that some private messages had been compromised. Twitter would not disclose the number of users affected by the software bug, which leaked direct messages between users and businesses that offer customer services via Twitter. Apparently, the issue had first come to notice in May 2017 but was not resolved until September 2018.

28th September – Data leak at Facebook: Facebook revealed that 50 million users had been affected by a security breach, allowing hackers access to users’ accounts and potentially the personal information contained within them. However, this was about as far as the media giant would go when revealing the extent of the hack. They couldn’t or wouldn’t say whether any of the accounts had been misused or information accessed. An early Christmas present from Facebook to those criminals who thrive on cyber fraud and identity theft.

28th September – Underhand tactics used by Facebook to target users with adverts: Facebook came under fire for gathering personal information on its users and using that data for customised advertising. No real surprises here. The research also highlighted that Facebook identified phone numbers and other personal details from non-users of the platform, who were then targeted with advertising as well. In the past Facebook had blamed this on software bugs, but it has now come clean and admitted that it does this so it can give users a more ‘personalised’ experience.


Safeguarding Hub

The Safeguarding Hub has been developed by Andy Passingham and Paul Maslin as a way of sharing information relating to safeguarding children and vulnerable adults. This website and the articles produced by Andy and Paul have been created in their own time outside of their current police roles.

