“Algorithm 01” by Dimitris Ladopoulos is licensed under CC BY-NC-ND 4.0
The majority of my students use social media in some fashion. Some are, to some extent, aware of how algorithms collect data and use it for advertising purposes. What they often don’t realize, however, is the trap of “assumed objectivity” that algorithms exude. A large part of this is understanding how algorithms work or, in the very basic sense, what they are. We often shortcut something vastly complicated like the internet into simpler metaphors like the cloud. As James Bridle notes in his New Dark Age,
The cloud was a way of reducing complexity: it allowed one to focus on the near at hand, and not worry about what was happening over there. Over time, as networks grew larger and more interconnected, the cloud became more and more important. Smaller systems were defined by their relation to the cloud…(6)
A similar “chunking,” or simplifying of something very complex into a small, easy-to-think-about (but devoid of full context) form, happens when we address algorithms. Add to that the fact that many algorithmic processes do not involve ONE algorithm but many. Algorithmic processes have become so large and complicated that no one person on a development team knows how the whole thing works, and yet to the everyday internet user, Google search (for instance) seems rather straightforward. To help my students start asking different questions about the internet technologies they use and rely on every day, I adapted what I have learned from technology, cultural, media, and surveillance studies scholars into three principles. In this post, I have tied each algorithmic principle to a corresponding “in class” activity. Many will probably find these principles far too limiting, but they are somewhere to start before delving into Noble’s Algorithms of Oppression, O’Neil’s Weapons of Math Destruction, or Zuboff’s Surveillance Capitalism (from whose work these principles partially derive).
Algorithmic Principle #1: Algorithms Are Solutions to Problems
The very first thing I want students to understand is perhaps the simplest definition of an algorithm I can give: algorithms are first and foremost solutions to problems. They do follow specific steps to arrive at their answers, but at the end of the day they are producing an answer to a problem defined by their creators. To help students understand this, I have them play a game I took from Marcus du Sautoy’s short BBC documentary, “Algorithms – The Secret Rules of Modern Living” (1:55 to 4:15):
Setup: Take a clear bowl and fill it with 13 pieces of candy (I find that wintergreen lifesavers are a safe bet) and one undesirable thing (a red hot chili pepper like in the documentary or a k-cup of coffee works just fine too!).
Instructions: Ask for a volunteer. Explain that you are going to play a game. You will each take turns taking candy from the bowl. Players can take either 1, 2, or 3 candies (no more, no less). When the bowl is emptied, the next player has to take the pepper/k-cup (they lose). The only other rule is that you (the teacher) go first.
The Trick: On your first turn you must take 1 candy. From then on, you are following a simple algorithm: take 4 minus whatever number your student takes. If your student takes 1 candy, you take three. If your student takes 2 candies, you take two. If your student takes 3 candies, you take one. This ensures that the last of the 12 candies remaining after your first turn will always be taken by you (and that your student will always have to take the pepper). I usually let up to 3 students try.
The Explanation: After 2-3 students have tried, I ask them if anyone knows how the trick works. I explain that I am following an algorithm based solely on their behavior. On the first turn, I always take one. From then on, I base my decisions on an algorithm that responds to how many candies they take (our combined number each round always being 4). The algorithm I have deployed has nothing to do with not ending up with the pepper; it only ever considers 4 minus the number the other participant took.
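For readers who want to see the game’s logic written out, here is a minimal sketch of the strategy described above. The function and variable names are my own; the only rule it encodes is the documentary’s “take 1 first, then 4 minus the opponent’s take.”

```python
def teacher_move(opponent_last_take):
    """Return how many candies the teacher takes this turn."""
    if opponent_last_take is None:    # first turn: always take 1
        return 1
    return 4 - opponent_last_take     # keep each round's combined total at 4

# Simulate one game: 13 candies; the student here always takes 2,
# but any choice of 1-3 each turn loses the same way.
candies = 13
last = None
while candies > 0:
    candies -= teacher_move(last)     # teacher's turn
    if candies == 0:
        break                         # bowl is empty: student takes the pepper
    last = 2                          # student's turn (always 2 in this run)
    candies -= last

print("Candies left:", candies)       # prints "Candies left: 0"
```

Notice that the code never mentions the pepper at all, which is exactly the point of the principle: the algorithm only ever solves the narrow problem its creator defined.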
Algorithmic Principle #2: We Build Assumptions Into Algorithms
The next principle is a big one, and one I find a little harder for students to wrap their heads around. I got the idea for this one from my readings of Noble’s Algorithms of Oppression and O’Neil’s Weapons of Math Destruction. When I talk to students about assumptions, they often don’t see how an assumption would influence technology made for a specific purpose. How can a technology be “biased” if it is humans who are biased? This second activity is meant to help students understand that even seemingly innocuous norms come from specific cultural understandings of how things work. In my time in student affairs working with student staff, we did a lot of icebreakers (I mean A LOT). After reading about algorithms, I realized that the overdone “Line Up By Birthday Without Talking” game might have something to lend here.
Setup: Ask for up to ten student volunteers (you could do an entire class of 20 if you wanted to, but I know not every student is up for hands-on stuff). Explain that they need to put themselves in the order of their birthdays (that’s it; no restrictions on communication).
The Trick and The Explanation: You gave very simple instructions, but in keeping them simple, the students have made a lot of assumptions about what this line-up will look like. I usually ask where the first person is and then go down the line as expected (they usually get themselves in order, no problem). I then ask the class as a whole the following questions:
- Why did you line up left to right? Why not right to left? Why not a circle?
- Why did you start at January and not August?
- Why didn’t you factor in year of birth?
Whether we consider them or not, even something as simple as organizing ourselves by birthday has very specific assumptions built into it. We assume that this list should be a line, that it should run from left to right, etc., because we have normed these things (whether that is tied to how we read Western texts left-to-right is something my students talked about the last time I did this activity). The trick with norms is that we DON’T think about them (that is the power of a norm: it isn’t thought about; it is second nature and therefore assumed). In whatever process they followed to organize by birthday (an algorithm in its own right), they have already structured assumptions into that process. Algorithm designers, with their much more complicated problems, certainly do as well.
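The same point can be made in code: even a one-line “order by birthday” sort encodes the class’s unstated answers to every question above. The names and birthdays here are invented for illustration.

```python
# Invented example data: (name, birth month, birth day, birth year)
students = [
    ("Ana", 8, 14, 2003),
    ("Ben", 1, 30, 2001),
    ("Chi", 1, 3, 2004),
]

# "Order by birthday" already builds in several unstated assumptions:
#  - sort ascending (January first, not August)
#  - ignore year of birth entirely
#  - produce a flat line (a list), not a circle
line_up = sorted(students, key=lambda s: (s[1], s[2]))

print([name for name, *_ in line_up])  # prints ['Chi', 'Ben', 'Ana']
```

None of those choices are wrong, but none of them were in the instructions either; the programmer supplied them the same way the students did.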
Algorithmic Principle #3: Algorithms Do Not Care If The Data Is Objective
The third principle is an interesting one because it challenges the assumed objectivity not of the algorithm but rather of what the algorithm draws from: data. When we start to look at where oppression is perpetuated in algorithmic technologies, a big part of the problem is that the data they draw from is massively flawed and biased to begin with. I think of the controversial stop-and-frisk policies that have generated vast amounts of arrest data, which have then been organized into crime statistics. There are a lot of problems with these policies and the ways they lead to over-policing the most vulnerable communities (the podcast Reply All did a really great two-part episode on this). If you over-police one community but under-police another, do the resulting crime stats remotely reflect an “objective,” larger picture of crime in the whole city? No, not really. But this data is fed into an algorithm all the same, one that “does not care” whether the data is objective (it just uses it). O’Neil has a helpful clip for connecting these dots when we think about how algorithms are used to help judges with sentencing decisions:
The activity I tied to this one is another from my time in student affairs (from a presentation I attended by Dr. Maura Cullen). It is, therefore, another activity that requires some movement (unless you are in a big space, you might have to take this one outside).
The Setup: Shuffle a standard deck of cards (you might remove the face cards to make this easier). Explain that you are going to go around the room and give each student a card facing away from them. THEY MUST NOT LOOK AT THEIR CARD but rather hold it out in front of them for others to see. Once each student has a card, tell them they must walk around the room holding their card out in front of them, greeting each other as they pass (i.e., a hello or a wave). They must treat the people they pass according ONLY to the card they see in front of them. If they see someone with a high card (a King or Ace, for instance), they should rush over to greet them. If they see someone with a low card (like a 2), they should avoid that person at all costs. If the card is somewhere in between, they should adjust their greeting as appropriate. After they spend 5-10 minutes roaming around the room, debrief.
The Trick: They don’t know what card they have, but based on other people’s behavior towards them, they can probably guess what range their card is in. They are also focusing so much on reacting to other people’s cards that they are not paying attention to the reactions other people are getting (someone with a high card might not understand how badly another person is being treated).
The Explanation: As they played the game, they only reacted to the number, not the context for why a person might have been assigned that number (in this case they were “dealt” their card, but things like credit scores carry far more circumstances than that). Algorithms function similarly: the data they deal with is numerical in nature and inevitably has most of its sociocultural context removed. As a class we discuss different circumstances that might be erased. We also watch O’Neil’s short video from above and discuss how it relates to the activity (depending on the class and time constraints, you can also watch the short clip as a class instead of doing this activity; you can also have them listen to the 99% Invisible podcast episode that interviews O’Neil about her work).
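For the programmatically inclined, the principle can be sketched in a few lines. The neighborhoods and arrest counts below are entirely invented; the point is that the ranking function has no way to ask whether the numbers it is handed were collected fairly.

```python
# Invented data: arrests recorded per neighborhood. Suppose Neighborhood A
# is heavily policed and Neighborhood B is barely policed at all, so the
# counts reflect policing intensity as much as actual crime.
arrests = {"Neighborhood A": 120, "Neighborhood B": 8}

def rank_by_recorded_arrests(counts):
    """Rank areas by recorded arrests, highest first.

    The function never sees WHY the numbers differ; it just uses them.
    """
    return sorted(counts, key=counts.get, reverse=True)

print(rank_by_recorded_arrests(arrests))
# prints ['Neighborhood A', 'Neighborhood B']
```

Like the students reacting only to the card in front of them, the code reacts only to the number in front of it; the biased collection process is invisible from inside the algorithm.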