Hello, Fail Whale

Watercolour painting of a beached whale, surrounded by birds. (AI generated image)

It’s been a long time since I’ve posted anything here. Or anywhere, really. Except my private Twitter account, where only a small number of people get to see my small number of tweets. My personal life over the last few years has been such that, for now, keeping a low profile is appropriate. But the select few followers of that locked account have provided me an outlet, when one is required, for the thoughts that refuse to stay in my head. And even when I’m not actively tweeting, the knowledge that they’re there has helped in no small way to keep me sane when I could very easily have fallen apart.

Those days, it seems, are now numbered.

Twitter’s new billionaire owner, the racist, transphobic, inhuman product of every privilege afforded to his rich white family by apartheid, is quickly burning down the house he just bought on a whim. He’s spent the last couple of weeks firing all the people who kept the machine running, without due process and therefore without any of the carefully planned handovers that are required when key personnel leave a technology company. He is no doubt going to lose a lot of costly legal battles over the coming months as the direct victims of his “management” style assert their rights in numerous jurisdictions (he’s in for a shock if he thinks employment law is universal), and I think it can be taken for granted that a great many of those who haven’t yet lost their job (or quit in protest) are now actively looking for a new one anyway. A complex infrastructure that isn’t backed by knowledge and experience is in dire peril, always just one unforeseen incident away from a potentially unrecoverable meltdown.

At the same time, the overgrown toddler has alienated the very people that give the platform its capital value: users and advertisers. Without the users, a social network is nothing. Users provide the content. We choose to provide that content in exchange for access to the platform – an arrangement that is equitable enough for most people. The interactions between users create a community that attracts new members, often without the company having to spend a penny on promotion. Advertisers, in turn, are paying for access to the community. And they demand two important things along with that:

Firstly, they want to know that their brand(s) will be promoted to as many relevant users as possible. Just as you probably wouldn’t waste your time trying to market condoms in the Vatican, companies want to know that the money they’re paying is going to result in actual sales. This is the job of the Algorithm. Designing such algorithms is hard. You don’t just buy a copy of Learn Python in 59 Seconds and come away knowing how to do this stuff, even if you buy a copy of Psychology for Nitwits along with it. Getting to the point where brands can submit an ad and be pretty sure of a decent conversion rate (without being blatantly unethical) has been an evolutionary process, the work of many, many people over a long period. And as anyone who has worked with that sort of organic system knows, when the ones who understand how it works are gone, you poke around inside it at your peril. And when the boss is a playground bully demanding unreasonable changes from too few staff on unrealistic timescales, you’d better believe something is going to break.

But of course it’s worse than that, because as a simple matter of numbers, if your users are fleeing then fewer people are going to see the ads at all. Which means that even if the conversion rate remains static (and it’s not going to improve), advertisers are going to see a drop in sales. The output of any algorithm is only as good as its input. Which leads onto the second demand: Brands must never be allowed to become associated with Bad Stuff. Ensuring this is the job of content moderators, who must enforce carefully balanced rules of behaviour.

Twitter already had a problem in that department, with certain forms of abuse (notably transphobia) being routinely ignored, and disinformation spreading faster than a fart in an elevator. They had started to tackle some of these issues, with some particularly egregious transgressors being banned, and fact-checking mechanisms put in place. Now they are ruled by a dictator who claims to champion “free speech”, but who defines that as having the freedom to be as nasty and dishonest as he wants – as long as nobody uses that same “freedom” to criticise him, of course.

No sooner had the hateful narcissist entered the building than the rate of posting of a well-known racial slur went through the roof, as the scum of the community – previously held in check to some extent by the rules – tested the water to see just how much they could get away with. As the man himself said: let that sink in. Without having to change a single rule, simply by stating his own, twisted viewpoint, he was able to make Twitter a significantly less safe environment for millions of users. The moderators, if there are even any left at this point, can’t effectively enforce policies that are directly contradicted by their boss. This means that the “average” tweet is now much more likely to be toxic, not least to advertisers. And the more of them there are, the more chance that a promoted tweet is going to be juxtaposed, or associated, with one that has the capacity to seriously damage the brand. And that’s before considering the offensive odour generated by the rabid Ondatra zibethicus at the top.

I’m currently seeing almost no ads on my timeline. I don’t know how typical this is yet, but it certainly suggests that the advertisers the Algorithm would normally pick to promote to me have already decided they’d rather spend their money elsewhere (assuming someone’s not already broken it). Many more will follow. Twitter was already unprofitable; I’d be surprised if it can survive a massive drop in income. The emerald eejit’s brilliant idea – replacing account verification with a pseudo-protection racket – will not only fail to offset the lost revenue, but will make the platform even less attractive to both users and advertisers. There’s already been a deluge of new “verified” accounts with the capacity to cause chaos for brand managers everywhere, and to make sorting the truth from the chaff near impossible. The trust is well and truly broken.

If you know my personal politics it might seem odd for me to be talking about things in largely capitalist terms. But the demise of Twitter at the hands of an escaped lab experiment with more money than sense (or ethics) is fundamentally a capitalist phenomenon. It could only happen like this in a society that allows individuals to accrue disproportionate power by material acquisition. And while to the users Twitter is a platform and a community, underneath that it’s your typical, financially underperforming tech business, ever on the edge of bankruptcy, that manages to keep the lights on by maintaining cash flow and promising to make a profit eventually. Take away the cash flow, and the lights go off. Some vestigial version of the platform may remain, albeit without the engaging content or enough money available to continue functioning well, but I can say quite confidently that its spiral into irrelevance cannot be averted at this point.

I referred to the screwed employees earlier as direct victims, because there are also countless indirect victims. They are the community. Or more correctly, communities. Twitter has never been a perfect platform, precisely because of its centralised, power-imbalanced, capitalist nature. But it somehow became a place where minorities, and people with shared interests, would find kindred spirits and become a greater, more supportive whole. While so-called influencers might just have been there to chase clout (though Instagram and TikTok cater to them better these days), many others were there to make real connections with people, to share and spread knowledge, to pursue social justice, and more. For all the loudmouths with millions of followers, the real joy of Twitter was in having a circle of friends you’d probably never have met offline, especially during a global pandemic that has forced us to reëvaluate how socialising works in a suddenly much more dangerous world. Were it not for Twitter, I would not have met my best friend, and my life would be very much the worse for it.

All that has changed now. A great many good people have left. Others have stayed, either out of misplaced hope, lack of a clear alternative, or simply an inability to avert their gaze from the train wreck. But there’s a palpable change in the atmosphere; the air has become noxious and the communities are evaporating. With talk of a paywall, plummeting income, the loss of critical expertise, and active encouragement of toxic behaviour by a thin-skinned, spoiled brat whose genius-level business plan is “do lots of dumb things”, Twitter is now on life support. All good things, it seems, must indeed come to an end.

Along with many others, I’ve decided to create a personal Mastodon account now, before the bird finally falls off its perch. The main reason I hadn’t done so earlier is that I couldn’t take my friends with me; however, that has become moot. Like anyone settling into a new home, I hope to be accepted by the neighbours, but we refugees have a responsibility to be good citizens too. I’ve had more than my share of antisocial jerks living next door to me and, just as I’ve learned to stand up to them in meatspace, I would totally deserve the pushback (or indeed a ban) if I barged into an existing online community and took a huge dump on their virtual carpet.

The Fediverse is not Twitter. And that’s a good thing. It is a multicultural, heterogeneous network; there are many different but interconnected platforms, of which Mastodon is only one. And while you can easily follow and interact with many other users regardless of where they are on that network, every instance/server hosts its own community, with its own identity and social contract. This is something we absolutely must respect. Coming from a mixed space where all discussions have equal priority, content warnings are rare (and frequently pointless), and friendship and hostility can be found in equal measure, there is the risk that we’ll bring with us an attitude of assertiveness that may have been necessary there, but runs counter to the culture of the community we’ve joined. Let’s not do that.

So to begin with, I’m not going to post much. I’ll just get the lie of the land and learn how the locals would like me to behave. For now I’ve picked a place that claims that Nazis and bigots aren’t welcome, and already it’s clear that life is much more peaceful there. If for some reason that particular local community turns out not to be a good fit for me, the beauty of federation is that I can move to another instance and take my connections with me. Either way, I suspect that sooner rather than later I’ll be so used to village life that I’ll stop shuttling back to the city, and I’ll wonder why it took me so long to leave.

A screenshot of a tweet by Elon Musk, which reads: “Please note that Twitter will do lots of dumb things in coming months. We will keep what works & change what doesn't.”

An Open Letter re: Article 13

Text, in a low resolution bitmapped font and styled as though displayed on a green CRT display, reads: “HTTP Error 451: Unavailable for legal reasons”

To all MEPs, but particularly those representing Northwest England*, where I am a constituent:

I write to express my deep concerns regarding Article 13 of the EU Copyright Directive, which comes before Parliament tomorrow. As both an experienced computer scientist and a musician, I am directly affected by this legislation and believe I am qualified to comment on it.

The very premise of the Article is flawed, based more on science fiction than reality. Given the sheer amount of communication and content shared across the Internet, the proposed law effectively mandates that automatic systems be put in place. However, the implementation of accurate, intelligent content filters is a problem that has never been, and possibly will never be, solved. Consider that for decades people have been trying to implement obscenity filters for text, with very limited success. Even today, words and phrases with dual meanings, words that look like compounds of swearwords, legitimate quotation, and simple typos all get caught in the net. It is not hard to see the problems this must cause for an individual named Dick Cockburn, or the tech company Fanny Wang.
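To make that failure mode concrete, here is a purely illustrative sketch of a naive substring-based obscenity filter – not any real platform’s code, and the blocklist is only an example – showing exactly how legitimate names get caught in the net:

```python
# A deliberately naive substring filter, of the kind that produces the
# false positives described above. The blocklist here is illustrative only.
BLOCKLIST = ["cock", "fanny"]

def is_blocked(text: str) -> bool:
    """Flag the text if any blocklisted word appears anywhere inside it."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)

# Perfectly legitimate text trips the filter:
print(is_blocked("Dick Cockburn"))  # True - a real person's surname is censored
print(is_blocked("Fanny Wang"))     # True - a real company's name is censored
print(is_blocked("peacock"))        # True - an innocent compound word is censored
```

Every one of these false positives would, under the proposed regime, require a human to notice and undo it – and that is the easy, text-only case.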

Meanwhile those who wish to maliciously circumvent the filters merely need to devise alternative spellings and vocabularies, knowing that the technology will always be one step behind them. You only have to look at all the variations on “Jew” that anti-Semites use to get around the flawed technology. The harder you try to catch these workarounds, the worse the problem becomes, with the implementors caught in the middle; when people are typing “fukc” instead of the correctly spelled swearword, your employer may order you to start filtering anagrams, but was the intention really to censor the discussion of Cnut the Great?
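The Cnut problem can be demonstrated in a few lines. This is a hypothetical sketch of the “filter anagrams” escalation (the word list is spelled out only for the sake of the demonstration), not a real moderation system:

```python
from collections import Counter

# Illustrative only: the words the hypothetical employer ordered us to catch,
# now matched by letter-multiset so that rearranged spellings can't evade us.
SWEARWORDS = ["fuck", "cunt"]

def is_anagram_of_swearword(word: str) -> bool:
    """Flag a word if its letters exactly rearrange into a blocked word."""
    letters = Counter(word.lower())
    return any(letters == Counter(s) for s in SWEARWORDS)

print(is_anagram_of_swearword("fukc"))  # True - the evasive spelling is caught
print(is_anagram_of_swearword("Cnut"))  # True - and so is an 11th-century king
```

The filter does exactly what it was asked to do, and in doing so censors legitimate historical discussion; no amount of tuning a letter-level rule can distinguish the two cases.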

And text filtering is the gentle introduction. One only needs to watch an automatically subtitled programme on TV or YouTube, or ask Amazon’s Alexa a question in a regional accent she doesn’t recognise, to see how quickly more advanced content detection falls apart. YouTube’s ContentID system can’t even cope with the case where you pay an agent to police your copyrighted content and then also upload it yourself; it requires manual intervention to get your own work unblocked. How is it supposed to identify legitimate, legal uses of my work by a third party? Are they really going to cope with manually verifying the legality of every single case where somebody has their upload blocked incorrectly?

Placing a legal responsibility on service providers turns this task from a paid-for service, which limits the damage it can cause, into a universal burden. The biggest technology companies in the world, such as Google and Amazon, can’t even do it yet. YouTube’s ContentID produces innumerable false positives, which as already stated need to be resolved manually. If they become liable themselves for copyright violations even when the rights-holder has not paid them to police it, it is naïve to think that they will continue to intervene at a huge loss.

While the article states that it should be implemented “without prejudice to the possibility for their users to benefit from exceptions or limitations to copyright”, as well as other statements to similar effect, the truth is that this is currently a technical impossibility. Providers will have little choice but to make all automated decisions final, or to charge a fee or impose some other onerous requirement to resolve them. As an artist, this terrifies me. It wouldn’t matter whether I had actually violated another person’s rights; it would be sufficient that an imperfect algorithm had decided that I did.

“Online content sharing service providers shall provide rightholders, at their request, with adequate information on the deployment and functioning of these measures to allow the assessment of their effectiveness, in particular information on the type of measures used” – This dystopian clause creates a two-tier copyright system, where the large media companies can potentially, in the name of protecting their own rights, demand access to the work of programmers (whose output is also covered by copyright), with legally-granted leverage to insist on changes to that work.

Worse still, smaller providers will have no chance of implementing even the flawed technology currently used by the giants. They will not have the resources to develop it in-house, may not be able to buy it in without risking violating the above clause on providing “adequate information” on a system itself protected by copyright, and they will not have the infrastructure to run such a computationally intensive process themselves. In short, they will be faced with a choice between risking being held liable for things out of their control, adding draconian clauses to their terms and conditions in an attempt to shift that liability to their users, or ceasing to serve user-submitted content entirely. This will have a serious chilling effect on freedom of speech, because it will become difficult to find a platform that is prepared to allow discussion of certain topics, especially when those topics necessitate the legal use of copyrighted material, e.g. quoting a book in a review or academic essay.

There is much more I could say, not only on Article 13 but on other parts of the Directive. However, I understand that you have limited time and so I have concentrated on what worries me the most. I must ask that you oppose at least this Article when it comes before Parliament, before our freedom of speech is taken from us for no gain, due to a fundamental misunderstanding of technology.

Julian Yon – musician and computer scientist
Manchester, Northwest England, UK, EU.

* The MEPs for the Northwest of England include Wajid Khan (@WajidKhanMEP), Jacqueline Foster (@jfostermep), and Sajjad Karim (@SHKMEP), who at the time of writing have not declared how they will vote.


Pianissimo

2017 is not the first year I’ve been absent from church a lot – my fluctuating conditions mean that sometimes it’s just not possible to attend, or at least not a good idea. And this year, my wife has had health challenges of her own too. But I think this is the first time I’ve missed the entirety of Advent and Christmas. So it’s been an unusual one. Continue reading Pianissimo

EQ carving for visual thinkers

If you’ve got a track in your mix that just isn’t coming through, or perhaps there’s a section of a song which is really crowded and it’s hard to make out any individual instruments, a fairly standard suggestion will be to “make some space” or “carve an EQ hole” for the parts that are getting lost. Which is great, if you know what that means. If, on the other hand, it has you reaching for a hammer and chisel, then perhaps you need a different metaphor. Continue reading EQ carving for visual thinkers

Single released!

So, yesterday was my daughter’s first day at high school. Wow. And as promised, I’ve just released my first ever single, See You Again. It’s a love song to her, which I wrote earlier in the summer, to mark this milestone. It’s a very personal song which I’d love you all to hear. It’s available now on Google Play, and should appear via other channels (iTunes, Amazon etc) once they have processed it. Continue reading Single released!