An Open Letter re: Article 13

[Header image: “HTTP Error 451: Unavailable for legal reasons”, in a low-resolution bitmapped font styled as a green CRT display]

To all MEPs, but particularly those representing Northwest England*, where I am a constituent:

I write to express my deep concern regarding Article 13 of the EU Copyright Directive, which comes before Parliament tomorrow. As both an experienced computer scientist and a musician, I am directly affected by this legislation and believe I am qualified to comment on it.

The very premise of the Article is flawed, based more on science fiction than reality. Given the sheer volume of communication and content shared across the Internet, the proposed law effectively mandates that automated filtering systems be put in place. However, building accurate, intelligent content filters is a problem that has never been solved, and may never be. Consider that people have been trying to implement obscenity filters for text for decades, with very limited success. Even today, words and phrases with dual meanings, words that look like compounds of swearwords, legitimate quotation, and simple typos all get caught in the net. It is not hard to see the problems this must cause for an individual named Dick Cockburn, or for the tech company Fanny Wang.
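To make the failure mode concrete, below is a minimal sketch of such a filter in Python. The word list, function name, and examples are illustrative only, not taken from any real product:

```python
# A naive substring-based obscenity filter: the approach behind the
# classic "Scunthorpe problem". Blocklist and examples are illustrative.
BLOCKLIST = ["cock", "fanny"]

def is_blocked(text: str) -> bool:
    """Flag the text if any blocklisted string appears anywhere in it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

print(is_blocked("Dick Cockburn"))  # True: a real person's name
print(is_blocked("Fanny Wang"))     # True: a real tech company
print(is_blocked("cockpit crew"))   # True: an innocent compound word
```

The filter has no notion of context; it only ever sees strings of characters, which is precisely why legitimate names and compounds are caught.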

Meanwhile, those who wish to circumvent the filters maliciously merely need to devise alternative spellings and vocabularies, knowing that the technology will always be one step behind them. You only have to look at the many variations on “Jew” that antisemites use to get around the flawed technology. The harder you try to catch these workarounds, the worse the problem becomes, with the implementers caught in the middle: when people type “fukc” instead of the correctly-spelled swearword, your employer may order you to start filtering anagrams, but was the intention really to censor discussion of Cnut the Great?
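The anagram escalation can be sketched just as briefly. Again, the code below is purely illustrative; it exists only to show that the workaround and the collateral damage are inseparable:

```python
# Escalating the filter to catch rearranged spellings: compare the
# sorted letters of each word against sorted blocklist entries.
BLOCKLIST = ["fuck", "cunt"]

def signature(word: str) -> str:
    # Sorting the letters makes any anagram of a blocked word match.
    return "".join(sorted(word.lower()))

BLOCKED_SIGNATURES = {signature(w) for w in BLOCKLIST}

def is_blocked(text: str) -> bool:
    return any(signature(word) in BLOCKED_SIGNATURES for word in text.split())

print(is_blocked("fukc"))            # True: the workaround is caught...
print(is_blocked("Cnut the Great"))  # True: ...and so is an English king
```

Each tightening of the net catches more legitimate speech, while the evasions simply move on (to, say, “fu*k”), which the anagram check misses entirely.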

And text filtering is the gentle introduction. One only needs to watch an automatically subtitled programme on TV or YouTube, or ask Amazon’s Alexa a question in a regional accent she doesn’t recognise, to see how quickly more advanced content detection falls apart. YouTube’s Content ID system can’t even cope with the case where you pay an agent to police your copyrighted content and then also upload it yourself; it requires manual intervention to get your own work unblocked. How is it supposed to identify legitimate, legal uses of my work by a third party? Is any provider really going to cope with manually verifying the legality of every single case where somebody’s upload is blocked incorrectly?
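The root of the problem is that these systems are similarity matchers, not legal reasoners. A toy sketch (far simpler than, and not representative of, how Content ID actually works internally; the strings and threshold are invented for illustration) shows why a match score cannot distinguish infringement from legal quotation:

```python
# Toy threshold-based similarity matching. A real fingerprinting system
# is vastly more sophisticated, but faces the same dilemma: the score
# says "how alike", never "whether the use is lawful".
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Proportion of matching characters between the two strings (0.0-1.0).
    return SequenceMatcher(None, a, b).ratio()

REGISTERED = "never gonna give you up never gonna let you down"
THRESHOLD = 0.6  # arbitrary; tuning it only trades missed copies for false positives

full_copy = "never gonna give you up never gonna let you down"
quotation = "the hook: never gonna give you up, never gonna let you down"

print(similarity(REGISTERED, full_copy) > THRESHOLD)  # True: wholesale copy, flagged
print(similarity(REGISTERED, quotation) > THRESHOLD)  # True: quotation in a review, also flagged
```

Nothing in the score encodes whether the second upload is a lawful quotation; deciding that requires exactly the human judgement the Article implicitly assumes away.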

Placing a legal responsibility on service providers turns this task from a paid-for service, which limits the damage it can cause, into a universal burden. The biggest technology companies in the world, such as Google and Amazon, can’t even do it yet. YouTube’s Content ID produces innumerable false positives, which, as already stated, need to be resolved manually. If providers become liable themselves for copyright violations even when the rights-holder has not paid them to police the content, it is naïve to think that they will continue this manual intervention at a huge loss.

While the Article states that it should be implemented “without prejudice to the possibility for their users to benefit from exceptions or limitations to copyright”, and contains other statements to similar effect, honouring such exceptions automatically is currently a technical impossibility. Providers will have little choice but to make all automated decisions final, or to charge a fee or impose some other onerous requirement to resolve them. As an artist, I find this terrifying. It wouldn’t matter whether I had actually violated another person’s rights; it would be sufficient that an imperfect algorithm had decided that I had.

“Online content sharing service providers shall provide rightholders, at their request, with adequate information on the deployment and functioning of these measures to allow the assessment of their effectiveness, in particular information on the type of measures used” – This dystopian clause creates a two-tier copyright system, in which large media companies can, in the name of protecting their own rights, demand access to the work of programmers (whose output is also covered by copyright), with legally granted leverage to insist on changes to that work.

Worse still, smaller providers will have no chance of implementing even the flawed technology currently used by the giants. They will not have the resources to develop it in-house, may not be able to buy it in without risking a violation of the above clause on providing “adequate information” about a system that is itself protected by copyright, and will not have the infrastructure to run such a computationally intensive process themselves. In short, they will be faced with a choice between risking liability for things outside their control, adding draconian clauses to their terms and conditions in an attempt to shift that liability to their users, or ceasing to serve user-submitted content entirely. This will have a serious chilling effect on freedom of speech, because it will become difficult to find a platform prepared to allow discussion of certain topics, especially when those topics necessitate the legal use of copyrighted material, e.g. quoting a book in a review or academic essay.

There is much more I could say, not only on Article 13 but on other parts of the Directive. However, I understand that you have limited time and so I have concentrated on what worries me the most. I must ask that you oppose at least this Article when it comes before Parliament, before our freedom of speech is taken from us for no gain, due to a fundamental misunderstanding of technology.

Julian Yon – musician and computer scientist
Manchester, Northwest England, UK, EU.


* The MEPs for the Northwest of England include Wajid Khan (@WajidKhanMEP), Jacqueline Foster (@jfostermep), and Sajjad Karim (@SHKMEP), who at the time of writing have not declared how they will vote.