Resist The Algorithms

You don’t have free will. One of the grand questions of philosophy is being answered today in laboratories as we come to better understand genes and the workings of the brain, and it’s becoming very clear that we actually do not have free will.

Sure, we get to make choices, but they’re incredibly constrained. It’s like an RPG video game where an onscreen prompt lets you pick from three different options: yes, the choice is yours, but is this really free will? In real life those choices are dictated by things like genetics (my love of sweets was likely handed down over the generations), time and place of birth (all of your woke beliefs wouldn’t exist if you had been born in Alabama in 1835, for instance), your biochemistry (people infected with Toxoplasma gondii, a parasite spread by cats, show more risk-taking behavior and die in car accidents more often), and your upbringing. Yes, you get to pick from three options, but the entire world of options is never, ever available to you. That’s before we even get to physical, legal, and economic constraints.

(You don’t even want to get into the brain research showing that our brains make decisions before we’re consciously aware of the decision being made, which suggests that the function of consciousness is not to choose or decide but rather to retroactively create a narrative that explains WHY we do the things we do.)

But now even the small number of choices we do have is essentially under attack, all in the service of making our lives better. What if there were something that knew you so well it could tell you which movies you would like, so you never wasted time watching bad ones? What if there were something that could choose your food so that you loved every meal? What if it could tell you the job you should have, or the person you should marry? Something that, given enough data, could make all your choices for you, and maybe they would even be the right choices?

Enter algorithms, the step-by-step mathematical procedures computers run to make recommendations and, increasingly, decisions. More and more we are handing over our decision-making to algorithms, trusting the computers to crunch the endless streams of data and tell us what we should do next.

It looks so good on the surface. Every week Spotify serves me up a playlist of recommendations and I gotta tell you, sometimes it’s uncanny. There are deep cuts I forgot I love, and there are songs that are new to me but that become instant favorites. It feels like sorcery, but it’s just data science – Spotify is part of the modern surveillance state and so it has access to all this data about me, from who knows where, and it uses this information to craft these recommendations that I absolutely love. 
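To make the “it’s just data science” point concrete, here’s a minimal sketch of one way a recommendation can fall out of nothing but listening data. Everything in it (the users, the songs, the scoring) is invented for illustration; it assumes nothing about Spotify’s actual system, which is vastly more sophisticated.

```python
# A toy illustration only: recommend unheard songs weighted by how much
# other listeners' libraries overlap with mine. All names and data here
# are made up; this is not Spotify's actual system.
from collections import Counter

libraries = {
    "me":    {"Song A", "Song B", "Song C"},
    "user1": {"Song A", "Song B", "Song D"},
    "user2": {"Song B", "Song C", "Song D", "Song E"},
    "user3": {"Song F"},
}

def recommend(user, libraries, top_n=3):
    mine = libraries[user]
    scores = Counter()
    for other, theirs in libraries.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # crude measure of shared taste
        if overlap == 0:
            continue
        for song in theirs - mine:     # songs I haven't heard yet
            scores[song] += overlap    # weighted by that shared taste
    return [song for song, _ in scores.most_common(top_n)]

print(recommend("me", libraries))  # ['Song D', 'Song E']
```

The real systems are far more elaborate, but the principle is the same: the recommendation is nothing but my past, averaged against other people’s pasts.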

But as is the case with all our technological advances, it doesn’t stop there. Algorithms have infiltrated all corners of our world, and they have brought a tremendous number of problems with them. An algorithm, for instance, may make the decision about your home loan, and it may make it using information and equations that are opaque to you; it’ll never be clear why you were turned down. Worst of all, the reason may be racist; we look at algorithms as some kind of godlike super-intelligence, but they’re really just complicated bits of math, and the math itself has been decided on by people.

So the things an algorithm takes into account are chosen by humans, and those humans are operating with implicit biases. By deciding that people from certain economic backgrounds, or with certain blips in their credit history, are ineligible for loans, the algorithm fails to take into account the complex nature of white supremacy in this country and the obstacles that black people face from the very beginning of their lives. This isn’t a consciously racist thing – it’s worse than that. It’s an ingrained bias that cements inequality into the system, unwittingly enforcing the inequality that began as explicit bias.
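Here’s a hypothetical sketch of how that works: the weights, features, and zip codes below are invented for illustration and stand in for no real lender’s model. Nothing in it mentions race, but the inputs people chose (zip code, past credit blips) already carry the history of redlining and unequal access, so the “neutral” math reproduces it.

```python
# A hypothetical loan-scoring sketch: weights, features, and zip codes are
# made up for illustration and do not represent any real lender's model.
HIGH_RISK_ZIPS = {"60621", "48205"}  # a list chosen by people, built from historical data

def loan_score(applicant):
    score = applicant["income"] / 1000           # rewards existing wealth
    score -= 20 * applicant["credit_blips"]      # punishes past hardship
    if applicant["zip_code"] in HIGH_RISK_ZIPS:  # stands in for segregated neighborhoods
        score -= 30
    return score

applicant = {"income": 45000, "credit_blips": 2, "zip_code": "60621"}
print(loan_score(applicant), loan_score(applicant) >= 50)  # -25.0 False: denied
```

The applicant is never told which of those lines sank them, and none of those lines ever needed to mention race to produce a racially skewed result.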

This isn’t even the darkest element of the algorithm. Algorithms, devoid of conscience or ethics, exist simply to serve up the results of their equations, and so a quick visit to YouTube somehow ends up leading down a rabbit hole of racist conspiracy. The radicalization of youth has been happening at the hands of algorithms, which serve up white supremacist and QAnon videos to people who just wanted to watch video game streams. The algorithm reinforces the racist and right-wing beliefs of social media users because it feeds them information that makes them happy and that they already agree with. The algorithm has no morality.

The algorithm removes the human element and reduces the world to data. It’s the ultimate materialist view of reality, and it serves as a great example of why that view comes up short. Even within our limited number of options, within our free-ish will, there are opportunities to surprise and confound, but the algorithm removes them.

The algorithm imprisons us in the past, never allowing us to properly grow and change. The racist lending and the YouTube radicalization are the most extreme and dangerous examples, but there’s also a low-level, constantly radioactive quality to algorithmic control. It diminishes us because it reduces the possibility of discovery or surprise. More than that, it slowly erodes the possibility of disappointment, which I think is just as important an experience as enjoyment.

It’s only by experiencing what we don’t like that we can understand what it is we DO like. And I think it’s vital to bring mindfulness to the things we enjoy – why do I like this? What impact is it having on me? Why do I crave this music/movie/book right now? I believe we can’t understand what we like without seeing what we don’t like, without having that dynamic range.

More than that, though, is the capacity to be surprised and have our perceptions shifted. In a world where we are always being served up stuff we are guaranteed to enjoy, we never take any chances, and our interests and tastes end up defined only by what has come before. An algorithm that got to know me in my late teens would never have served me classic country music, because I would have hated it at the time. But in my 20s I fell for a woman who loved Gram Parsons, so I immersed myself in his songs, and from there I found myself slowly discovering the joys of country music. This was not a linear progression of my tastes, but I think I am better off today loving Hank Williams than I would be if I still dismissed an entire genre of music.

The human touch is vital. This month the New Beverly is showing a truly eclectic range of movies, many of them themed to Quentin Tarantino’s Once Upon a Time… In Hollywood, and they are some out-there choices. I saw another film programmer on Twitter say that this was the first rep calendar in living memory to include 22 movies he had never seen before. This comes from human curation, from the unique and strange perspective that Tarantino himself brings to the programming. A computer could have easily programmed a month of double features, and they might have been great, but they wouldn’t have been surprising and they wouldn’t have offered the opportunity for real discovery.

Real discovery comes only when we take a leap based not on our taste but on someone else’s. This is where criticism comes in; the critic shouldn’t be telling you what you’ll like but rather what they like. It isn’t about saying “You like Marvel movies and romcoms, so therefore you’ll love Movie X,” it’s about saying “This is what I like and why I like it, perhaps you’d enjoy looking at something from this perspective.”

Surprise is also important. This is why we love stories about underdogs, people who shouldn’t have succeeded but did. Whether or not such outcomes are statistically likely, as living beings we need the reassurance that surprisingly good things can happen. But maybe more than that, we need the reassurance that success isn’t the only measure we should use. I think a lot about the movie (and true story) Eddie the Eagle, about a guy who wasn’t very good at ski jumping but went to the Olympics as a ski jumper anyway. Eddie the Eagle is the enemy of the algorithm because he does not meet the criteria of a successful ski jumper, and yet his effort has been incredibly meaningful and inspirational to many. The point isn’t that the math says he would be a good jumper; the point is that he tried, and that has meaning.

These small spaces are where we must most assert our humanity. We must jealously guard these spaces where we can have discovery or surprise, where we can be unexpected and can move beyond our past tastes and actions. We have such a small space of free will, such a small area not constrained by genetics and environment and upbringing, and we have to stake a claim to it. These are the spaces where our human-ness resides, and we should not cede them to math. This is the stand we have to take against a rampantly materialist view of the world – we have to assert there is something more to us than data, that there is an element of our existence that cannot be fully quantified and gamed out. It’s the argument that Captain Kirk made to any number of computers on the original Star Trek. It’s why we have to resist the algorithms.