Online Harms White Paper: Consultation Response

For the record: a response made by CAAGe to the Online Harms White Paper Consultation, closed July 1, 2019.

A response from CAAGe, The Campaign Against Adult Grooming

Question 1: This government has committed to annual transparency reporting. Beyond the measures set out in this White Paper, should the government do more to build a culture of transparency, trust and accountability across industry and, if so, what?

You cannot take industry as a whole, and even within established online businesses there is considerable movement. For example, Facebook has now moved into the telecoms space, Google now works with voice recognition, and Apple, once a hardware provider, now hosts music and apps, which makes it a service provider. Trying to regulate for each will be difficult unless government takes a good look at the types of technologies in use, what behaviours they enable, and what technologies (pervasive embedded interactive technologies, voice technologies, etc.) are coming down the line.

Transparency can only be created through the collection of statistics, but those statistics also need to be verified. Verification would seem a useful auditing role for the new regulator, though it presents considerable issues on an international scale. What kinds of statistics?

  • Number of complaints of particular types

  • Removal of content/time to remove

  • Monitoring transparency (what they monitor for and why)

  • Data privacy options

  • Clear data usage statements

The biggest single trust factor for the communities I work with is interaction with the police. The police need to interface with the regulator, take steps to understand and enforce online protections, and take a proactive role in protecting citizens of all ages from online harm.

Their current responses to online offences are inadequate, with many people not getting any help until a formal complaint is lodged (which wastes time and money; the police should investigate the first time an offence is reported), and a lack of understanding of the different types of platform and what these mean. Without this understanding, they cannot begin to understand user concerns when crimes are reported.

Additionally, responses from the big tech giants, notably Google, Twitter and Apple, seem inconsistent and slow. In my own case, online abuse which started in November was not investigated until a formal complaint went in later, and even then the police could not tell the difference between Facebook and FaceTime, gathered no evidence, and are still waiting for information from the tech giants and telcos regarding the IP addresses of abusers.

No-one can have any faith and trust in the system unless this is addressed.

You can set up as expensive a regulator as you like, but unless fast and effective criminal action is taken, users subjected to online abuse can have no faith in it, and the whole thing will become an expensive box-ticking exercise.

The interface with Ofcom will need addressing to ensure nothing falls through the gap. I was subjected to livestreamed masturbating men over Apple FaceTime – in this instance, is that telecoms and broadcast (it was on a phone app), and so under the remit of Ofcom, or under the new regulator?

As a further note, voluntary user verification might be a useful tool – if we know who we are speaking to online, we build trust. Why voluntary? Because parody accounts exist, and people fleeing persecution may have legitimate reasons for remaining anonymous. No-one wants to lose these invaluable and fun elements of web usage.

Question 2: Should designated bodies be able to bring ‘super complaints’ to the regulator in specific and clearly evidenced circumstances?

Question 2a: If your answer to question 2 is ‘yes’, in what circumstances should this happen?

Yes! But I also believe that well-established pressure groups should be able to raise issues, notably those that exist to prevent abuse. There should also be mechanisms for the public to bring issues to the ears of the regulator, such as the Anna Rowe ‘Catch the Catfish’ petition to make ‘catfishing’ (using a false identity to trick someone) illegal. It is an easily provable form of adult grooming.

Our own early statistics (not yet a large enough sample to rely upon) show that about 10% of groomers will use a false identity; addressing this would be a good start towards controlling and deterring adult grooming.

Perhaps a petition system similar to the existing government petitions system could be adopted.

Question 3: What, if any, other measures should the government consider for users who wish to raise concerns about specific pieces of harmful content or activity, and/or breaches of the duty of care?

The duty of care needs to be clear.

So much online abuse happens, from stalking to exposure, from fraud to grooming into prostitution, extremism and modern slavery, that we need to be clear that these are offences in law, that they will be taken seriously, and that they will be backed with criminal action where appropriate, in a way that protects complainants better than at present.

Question 4: What role should Parliament play in scrutinising the work of the regulator, including the development of codes of practice?

Parliament needs to review what current online abuses are happening, where, and ensure that legislation is both up to date and enforced.

Parliament should also find ways to ensure that the regulator is fit for purpose.

Question 5: Are proposals for the online platforms and services in scope of the regulatory framework a suitable basis for an effective and proportionate approach?

No. Online, the boundaries between private and public are blurred – on some platforms I can reach out and communicate with complete strangers ‘privately’. The nature of these platforms needs to be understood in order to regulate sensitively and effectively.

Question 6: In developing a definition for private communications, what criteria should be considered?

Question 7: Which channels or forums that can be considered private should be in scope of the regulatory framework?

Question 7a: What specific requirements might be appropriate to apply to private channels and forums in order to tackle online harms?

There is no such thing as private online. However, some communications are peer-to-peer and at least nominally secure, and users can reasonably expect their ‘conversations’ to remain private.

Breaches of this – for example sharing images without permission, sextortion or bullying – must not be excluded from legal action just because the channel is ‘private’.

Again, this points to police involvement, interface and education/leadership.

The police and platforms therefore need to be able to access these communications when needed to investigate and pursue justice, unless they are sent over a private VPN (in which case they would likely fall outside the scope of the regulator).

Question 8: What further steps could be taken to ensure the regulator will act in a targeted and proportionate manner?

Perhaps there should be a body that handles complaints against regulators, including Ofcom, Ofwat and the CPS as examples. Clear guidelines need developing, but more than anything else I strongly recommend strong police involvement, with their processes clearly defined by the CPS.

Question 9: What, if any, advice or support could the regulator provide to help businesses, particularly start-ups and SMEs, comply with the regulatory framework?

We have healthy and thriving tech startup communities.

With this cohort, involving them usually leads to better results than trying to tell them what to do. The framework will need to be made clear, which is easy enough to do, and it should be easily available online.

Better still, perhaps the conversation regarding ethics could be started?

I have recently even seen apps which work out what someone will look like undressed. Frankly, in no-one’s world would that ever have been acceptable, and the fact that the company found funding and spent time trying to develop it is enough to say that the sector needs to see changes. We have to change the classification of businesses, which is archaic, and educate.

Question 10: Should an online harms regulator be: (i) a new public body, or (ii) an existing public body?

Question 10a: If your answer to question 10 is (ii), which body or bodies should it be?

A new public body, BUT with considerable input from existing stakeholders, including the police and social services – this regulator cannot be run on the same basis as a regulator dealing with someone’s electricity bill being wrong, or a telecoms company failing to deliver fast broadband.

Question 11: A new or existing regulator is intended to be cost neutral: on what basis should any funding contributions from industry be determined?

Perhaps the oversight should include tax contributions, as you cannot have the regulated industry paying for its own regulator – that is a compromised service before you start. The issuing of fines, charging for legal consultancy and similar options could all help offset costs.

Some of the collateral and guidance could be charged for, and companies should pay for training in exchange for some kind of ‘kitemark’.

Question 12: Should the regulator be empowered to i) disrupt business activities, or ii) undertake ISP blocking, or iii) implement a regime for senior management liability? What, if any, further powers should be available to the regulator?

All of the above, within strict guidelines. Regulatory interference in start-ups is problematic, as many have disruptive models. The regulator should have the power to force compliance with police requests for evidence and to fine companies where they fail their customers.

Question 13: Should the regulator have the power to require a company based outside the UK and EEA to appoint a nominated representative in the UK or EEA in certain circumstances?

Yes, depending upon the size of the company.

Question 14: In addition to judicial review, should there be a statutory mechanism for companies to appeal against a decision of the regulator, as exists in relation to Ofcom under sections 192-196 of the Communications Act 2003?

Question 14a: If your answer to question 14 is ‘yes’, in what circumstances should companies be able to use this statutory mechanism?

Question 14b: If your answer to question 14 is ‘yes’, should the appeal be decided on the basis of the principles that would be applied on an application for judicial review or on the merits of the case?

Yes, on the merits of the case.

Question 15: What are the greatest opportunities and barriers for (i) innovation and (ii) adoption of safety technologies by UK organisations, and what role should government play in addressing these?

This can be addressed in part by engaging with the tech community itself, but to do this a far better understanding of the depth of online harms is needed. Government can facilitate and provide grants for innovation, or encourage others to do so.

Question 16: What, if any, are the most significant areas in which organisations need practical guidance to build products that are safe by design?

To produce products that are ‘safe’, companies need to understand who they are protecting, understand the law, engage with their user base and understand that harms are not static – as soon as one is dealt with, those determined to harm will find new ways to abuse.

Constant engagement is therefore needed, almost in the way that the College of Policing provides ongoing advice to police forces, but with improved efficacy.

Question 17: Should the government be doing more to help people manage their own and their children’s online safety and, if so, what?

Question 18: What, if any, role should the regulator have in relation to education and awareness activity?

The laws need reviewing, and they need to be made very clear on websites that people can refer to. Most people are aware of online safety, but many rely on technology – technology that will probably be obsolete by the time today’s children leave school.

We need to be delivering sets of principles and discussions about change.

Public broadcasting used to fulfil this purpose, and presumably there is no reason government cannot use these tactics again. However, less emphasis should be placed on potential victims and more on the prevention and apprehension of wrongdoers.

General notes

Limited vision

The government vision is for:

  • A free, open and secure internet

  • Freedom of expression online

  • An online environment where companies take effective steps to keep their users safe, and where criminal, terrorist and hostile foreign state activity is not left to contaminate the online space

  • Rules and norms for the internet that discourage harmful behaviour

  • The UK as a thriving digital economy, with a prosperous ecosystem of companies developing innovation in online safety

  • Citizens who understand the risks of online activity, challenge unacceptable behaviours and know how to access help if they experience harm online, with children receiving extra protection

  • A global coalition of countries all taking coordinated steps to keep their citizens safe online

  • Renewed public confidence and trust in online companies and services.

This is very admirable, but openness and security are often conflicting goals. Government should not allow the companies to self-regulate, yet the expression “An online environment where companies take effective steps to keep their users safe, and where criminal, terrorist and hostile foreign state activity is not left to contaminate the online space” seems to rule out any legal protection.

Yet daily I speak with people who have been affected by misuse of the internet – random death threats, livestreamed men masturbating, and misuse of online platforms to groom adults into other, often illegal, activities.

Unsolicited nudes, catfishing and stalking are horrendous experiences, and my own data, albeit from a small sample, reveal that this disproportionately affects women.

We need protecting from this, and I urge you to view it as a serious online harm from which the Internet giants should be protecting their users. Social media platforms and Internet companies have one primary objective: to please shareholders. Their users are their product, and it is not in their interest to remove them, especially as many of the more obnoxious internet users have multiple accounts and stir up ‘engagement’.

Victims of online harm

The State seeks help with terrorism via this legislation, and children, who already have considerable legislation around their needs, are specifically acknowledged. This is, of course, a good thing.

However, not enough attention is given to the average citizen, for whom identity theft, grooming, sexual abuse and stalking are commonplace abuses that the police struggle to deal with. The Internet giants and new players innovate fast and change regularly.

There is genuine concern that what will be expected of companies will be cast in stone based only on what we know now. (In the appendix of this document is an example of why this is so vital.)

CAAGe is keen to see directors held liable for lack of compliance. Without it, protecting users cannot become a business priority and is likely to get stuck in the legal department, arguing over the letter of the law rather than embracing the spirit of the law.

Personal note [redacted].

Claire Thompson, Lead Campaigner, CAAGe M: 07771 817015 E: AntiAdultGrooming@GMX.com