The majority of my students use social media in some fashion. Some are, to some extent, aware of how algorithms collect data and use it for advertising purposes. What they often don’t realize, however, is the trap of “assumed objectivity” that algorithms project. A large part of avoiding that trap is understanding how algorithms work or, in the most basic sense, what they are. We often shortcut something vastly complicated like the internet into simpler metaphors like the cloud. As James Bridle notes in New Dark Age,
The cloud was a way of reducing complexity: it allowed one to focus on the near at hand, and not worry about what was happening over there. Over time, as networks grew larger and more interconnected, the cloud became more and more important. Smaller systems were defined by their relation to the cloud…(6)
A similar “chunking,” or simplifying of something very complex into a small, easy-to-think-about (but devoid of full context) form, happens when we address algorithms. Compounding this, many algorithmic processes involve not one algorithm but many. Algorithmic processes have become so large and complicated that no one person on a development team knows how the whole thing works, and yet to the everyday internet user, Google search (for instance) appears rather straightforward. To help my students start asking different questions about the internet technologies they use and rely on every day, I adapted what I have learned from technology, cultural, media, and surveillance studies scholars into three principles. In this post, I have tied each algorithmic principle to a corresponding “in class” activity. Many will probably find these principles far too limiting, but they are somewhere to start before delving into Noble’s Algorithms of Oppression, O’Neil’s Weapons of Math Destruction, or Zuboff’s Surveillance Capitalism (from whose work these principles partially derive).
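To make the “not one algorithm but many” point concrete, here is a toy sketch (my own hypothetical names and logic, not anything resembling Google’s actual system): what a user experiences as a single “search” can be modeled as a chain of separate algorithms, each with its own assumptions and its own opportunities for bias.

```python
def normalize(query):
    # Stage 1: rewrite the query (hypothetical, trivially simplified --
    # a real engine might spell-correct, expand synonyms, etc.).
    return query.strip().lower()

def retrieve(query, index):
    # Stage 2: pull candidate pages matching the query terms.
    return [page for page in index if query in page.lower()]

def rank(candidates):
    # Stage 3: order the results -- here simply by title length,
    # standing in for the opaque scoring a real engine would apply.
    return sorted(candidates, key=len)

def search(query, index):
    # The "one algorithm" the user sees is really three chained together.
    return rank(retrieve(normalize(query), index))

index = ["Algorithms of Oppression", "Weapons of Math Destruction",
         "Surveillance Capitalism", "New Dark Age"]
print(search("  Of ", index))
# prints ['Algorithms of Oppression', 'Weapons of Math Destruction']
```

Even in this toy version, each stage embeds a design decision (how queries are rewritten, what counts as a match, what “ranks higher”), and no single stage reveals the behavior of the whole pipeline.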