#064: Algorithmic Racism
Social media lives on feeds. Go to Facebook, and the first thing you’ll see is the News Feed, pre-sorted for your convenience. Open Twitter, and you’ll get a feed of recent tweets that Twitter thinks are interesting. On Instagram, you get, you guessed it, a feed of photos, also sorted the way Instagram thinks you’d like to see them. YouTube, too, shows you videos it thinks you’d be interested in watching.
No human could ever hope to sort through all the material that is uploaded every day. YouTube alone gets more than 80,000 hours of new video every day. Even if you restricted yourself to updates from people you follow, once you have more than a few in your feed, going through everything becomes a slog.
So programmers turned to machine learning. It promised to curate our feeds for us and to recommend only the posts, videos, or images we’d probably be interested in seeing. It sounds like a sweet solution: the computer pre-sorts all the new content for you, and you get to enjoy the fruits of its labor: only interesting things.
Of course, it is too good to be true. Feeds are not always well suited to the task at hand, and instead put extra cognitive load on your brain. Worse, their recommendations are rarely all that good. Even companies with practically unlimited resources, such as Facebook or Google (which owns YouTube), have not managed to create good recommendations despite collecting basically everything you do online: We are all trapped in the “Feed”. The pressure to appeal to the lowest common denominator, to always have something, anything, new to put into the feed, and the desire of these companies to keep you on their sites as long as possible, so they can sell more ads, all put the machine-curated feed at odds with you, the user. At worst, these algorithms present harmful content to vulnerable people, like small children.
Amongst programmers and computer scientists, there’s a well-known acronym: GIGO. It stands for Garbage In, Garbage Out. It serves as a reminder that no matter how good your algorithm is, no matter how carefully designed your app is, if you only feed it garbage, you’ll only get garbage out of it.
And so it is with machine learning, only now the algorithms are being fed popularity contests, prejudices, and highly emotional content. Unsurprisingly, what we get out of them are popular posts, not good ones; posts that reflect the prejudices of the society we live in, targeted to elicit an emotional response, not a rational one: Algorithmen: Programmierter Rassismus (“Algorithms: Programmed Racism”, in German).
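The GIGO dynamic can be boiled down to a toy sketch. The posts, click counts, and quality scores below are entirely made up for illustration; the point is only that a feed ranked purely on engagement leads with whatever got clicked most, regardless of how good it actually is.

```python
posts = [
    # (title, clicks, editorial quality on a 0-10 scale) -- invented numbers
    ("Calm, well-researched explainer", 120, 9),
    ("Outrage-bait hot take",           900, 2),
    ("Cute animal compilation",         450, 5),
]

def rank_by_engagement(items):
    """Rank purely on clicks -- the kind of raw engagement signal
    many feeds optimize for."""
    return sorted(items, key=lambda post: post[1], reverse=True)

feed = rank_by_engagement(posts)
# The feed leads with the most-clicked post, which here is also
# the lowest-quality one.
print(feed[0][0])
```

Feed garbage in (clicks as a proxy for quality), and you get garbage out (the most provocative post on top).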
None of this is good, but Facebook et al. can only do so much damage, right? While I doubt we really know how much damage social media can do, you also have to realize that other organizations use the same algorithms to do their business, with similar problems. “Flash crashes” resulting from algorithmic high frequency trading, for example, have become accepted events in the financial community, even though no one really understands why exactly they happen.
Algorithms are also in charge of recommending who to hire, who to fire, or who is likely to reoffend. They do not care whether a connection is correlation or causation, only that it exists. Anywhere data is generated and computers are involved, machine learning is, or soon will be, making predictions.
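To see how a learner can mistake coincidence for signal, here is a minimal sketch with an invented hiring dataset. In this made-up data, everyone who happened to attend “University A” was hired, so a naive model that looks only at per-value hire rates treats the university as a perfect predictor, even though the pattern is pure coincidence.

```python
from collections import defaultdict

candidates = [
    # (university, years_experience, hired) -- invented data
    ("A", 1, True),
    ("A", 2, True),
    ("B", 8, False),
    ("B", 9, False),
]

def hire_rate_by_value(data, feature_index):
    """Compute the hire rate for each value of one feature --
    the crude 'signal strength' a naive learner might rank features by."""
    counts = defaultdict(lambda: [0, 0])  # value -> [hired, total]
    for row in data:
        counts[row[feature_index]][1] += 1
        if row[-1]:
            counts[row[feature_index]][0] += 1
    return {value: hired / total for value, (hired, total) in counts.items()}

rates = hire_rate_by_value(candidates, 0)
# rates == {"A": 1.0, "B": 0.0}: a "perfect" predictor that is
# correlation, not causation.
print(rates)
```

The model has no way of knowing whether University A causes good hires or merely coincided with them in this sample; it will happily discriminate on the feature either way.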
These algorithms are black boxes. None of them can really give you an explanation for why it acted the way it did. It still falls to us humans to watch over them, to correct them, and ensure that their recommendations are not blindly followed.
Other interesting links from around the web:
- How long would you have before you ran into trouble if you were given a golf ball that doubled in density once an hour?
- The Future of TV Is About Couch Shows vs. Phone Shows
- It’s Complicated: Twitter Ruined My Relationship
📖 Weekly Longread 📚
Carbanak’s suspected ringleader is under arrest, but $1.2 billion remains missing, and his malware attacks live on: The Biggest Digital Heist in History Isn’t Over Yet