UK Hasn’t Started Age Verification, but It Has Produced Many Reports
LONDON – If you’ve been keeping an eye on the news about the UK government’s pending enforcement of the age-verification protocols required under the Digital Economy Act, then you’ve heard that enforcement has been delayed again.
While the again-delayed launch date for enforcement hasn’t been announced, this doesn’t mean the UK has been idle in the interim. On the contrary, the government’s bureaucrats appear to have been working overtime, churning out white papers about “online harms”; composing a “Digital Charter” detailing how the government intends for the UK to “lead the world in innovation-friendly regulation that encourages the tech sector and provides stability for businesses”; and cooking up a “Code of Practice for providers of online social media platforms.”
Like most proposals that seek to reduce harm, these documents, codes and policies often sound reasonable enough on their face. But, just like the delayed age-verification protocol, they are also a bit light on the sort of detail that would explain precisely how their stated goals can be achieved in the online environment.
In its Digital Charter, the government states that the internet “is a powerful force for good,” one that “serves humanity, spreads ideas and enhances freedom and opportunity across the world.”
If you feel a “but…” coming, you will not be disappointed.
“As well as opportunities, new technologies have brought new challenges and risks,” the Charter continues. “The internet can be used to spread terrorist material; it can be a tool for abuse and bullying; and it can be used to undermine civil discourse, objective news and intellectual property.”
These things are all true, of course. And I suspect most people who aren’t trolls, terrorists and/or criminals would agree they’re all bad things – except perhaps the undermining of intellectual property, since many consumers of pirated content regard access to that content as one of their inalienable human rights.
“Government must lead the way in tackling these challenges,” the Charter states. “Our starting point is that we will have the same rights and expect the same behaviour online as we do offline.”
The UK government can expect anything it likes, of course; getting the results it seeks is a whole other kettle of very unlikely fish.
“We will take action to ensure that the internet and new technologies are not only safe and secure, but also that they are developed and used responsibly and ethically, with users’ interests at their heart. And we will ensure that businesses can compete on a level playing field and digital markets deliver the best outcomes for consumers.”
Again, the above sounds good. But as I consider the practical application required to reach its professed goals, I begin to wonder if we’re looking at the seeds of another “Great Firewall” like the one employed by the Chinese government, or by other nations that tightly regulate online content.
In a subsequent section of the Charter, the government rather casually references the establishment of a new “independent regulator.”
“To support our ambition for the UK to be the safest place in the world to be online, we will… introduce a new statutory duty of care, which will be overseen and enforced by an independent regulator, as set out in the Online Harms White Paper,” the Charter states. “This regulatory framework will require companies to take reasonable and proportionate action to tackle harmful online content and activity on their services.”
The section of the Online Harms White Paper pertaining to the independent regulator states that a “key element of the regulator’s approach will be the principle of proportionality.”
“Companies will be required to take action proportionate to the severity and scale of the harm in question,” the White Paper continues. “The regulator will be required to assess the action of companies according to their size and resources, and the age of their users.”
While light on specifics, the White Paper also includes the hint of a threat: comply with the “codes of practice” (which are to be established by the new, independent regulator) or else.
“Companies must fulfil their new legal duties,” the White Paper states. “The regulator will set out how to do this in codes of practice. The codes will outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.”
And if no specific code of practice exists for any given issue or situation?
“Companies will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology.”
I’ve only scratched the surface of these new policy documents, which are voluminous to say the least, but already you can discern a pattern here: The UK government proposes a massive, sweeping rethink of how online content is regulated, then kicks the ball to a regulatory body of some sort to iron out the details.
If the stammering progress of the pending age-verification requirements is any indication, we may be in for a long wait before we learn what the new, independent regulator (whoever it may be – some in the UK media think Ofcom will be designated as that regulator) comes up with in terms of more specific requirements and protocols.
In the meantime, I suspect we can count on the UK government to churn out more policy guidance, more wish lists for the behavior it expects from online companies and users – and a great deal more ambiguous policy masquerading as coherent regulatory strategy.
Code of practice image via Gov.uk