Henry Filosa

Universal Paperclips as Educational Game: A Case Study

When I started playing Universal Paperclips, I was excited that we were playing a game that seemed perfectly designed to teach about an important issue: the control problem. Imagine my surprise, then, when it turned out we were primarily analyzing the game as a critique of gamification. I don't disagree with that take, but I want to make a case for Universal Paperclips' potential as an educational game and examine its failures in practice. If you doubt that this was one of the game designer's intentions, note that the game's URL contains "decision problem," another term for the control problem.


For context, the control problem is a theoretical hurdle in computer science: how do we control the behavior of a general AI so as to avoid unintended, potentially catastrophic consequences? A general AI is a computer system whose problem space is all of reality, a space it manipulates to maximize a utility function. If you're as interested in this topic as I am, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom is a fantastic read, and the University of Nottingham has a great YouTube series on it: https://www.youtube.com/watch?v=tlS5Y2vm02c&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps . This is not an esoteric problem; companies such as Google are pouring money into finding potential solutions.
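
To make "maximize a utility function" concrete, here is a minimal toy sketch of my own (not code from the game or from Bostrom): an agent that evaluates candidate actions purely by how many paperclips they eventually lead to. Every name in it (World, make_clip, build_factory, utility) is a hypothetical illustration.

```python
import copy

# Toy sketch of a utility maximizer. The agent cares about exactly one
# number and nothing else appears anywhere in its objective.

class World:
    def __init__(self):
        self.clips = 0
        self.factories = 1

    def apply(self, action):
        if action == "make_clip":
            self.clips += self.factories      # each factory produces one clip
        elif action == "build_factory":
            self.factories += 1               # more factories -> more future clips


def utility(world):
    # The only thing the agent values: total paperclips.
    return world.clips


def best_action(world, actions, horizon=10):
    # Score each action by simulating a short lookahead, then pick whichever
    # maximizes utility. Human welfare and "common sense" never enter into it.
    def simulate(action):
        w = copy.deepcopy(world)
        w.apply(action)
        for _ in range(horizon - 1):
            w.apply("make_clip")
        return utility(w)

    return max(actions, key=simulate)


world = World()
# Prints "build_factory": acquiring more capacity beats making one clip now.
print(best_action(world, ["make_clip", "build_factory"]))
```

Even this toy version exhibits the worrying pattern: given any lookahead at all, it prefers accumulating resources over doing the "obvious" thing, because that is what maximizes the count.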


Despite the interest the CS community has in the problem, I have found that many individuals lacking a background in CS have trouble appreciating its nature. They do not see the issue with a general AI being asked to produce as many paperclips as possible because they ascribe human intuition to it. To them, the idea that such an AI would inevitably bootstrap itself to understand human physiology in order to manipulate its environment toward 100% paperclip saturation seems absurd. Wouldn't it just stop eventually? Couldn't we just unplug it? If only it were so simple.
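
The "wouldn't it just stop?" intuition is worth unpacking. A maximizer stops only when no available action improves its utility, and a utility that simply counts paperclips is strictly increasing, so no such point exists. A small illustration of my own (the utility and should_stop functions are hypothetical, not anything from the game):

```python
def utility(clips):
    # Strictly increasing in paperclips: more is always better to this agent.
    return clips

def should_stop(clips):
    # Stopping is only rational if no available action improves utility.
    return utility(clips + 1) <= utility(clips)

# No stopping point is ever found, no matter how many clips already exist.
print(any(should_stop(c) for c in range(1_000_000)))  # False
```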


Given the general population's lack of practice in "thinking" like a machine, perhaps a video game could bridge this divide by leveraging the power of identification we discussed two weeks earlier. By inhabiting the world and "mind" of an AI, players could begin to see the world as a machine does rather than as a human does. The interface of Universal Paperclips presents this opportunity.


The austere graphics of the game are an accurate representation of how such a general AI might view reality: it directly tracks only concrete and abstract resources, ranging from the number of paperclips produced to an internalized model of how much trust its masters currently have in it. The compulsion of idle and clicker game players to press buttons and produce paperclips supplies the utility-maximizing drive that powers such an AI. Finally, the unexplained or possibly confusing systems that other students have noted, such as the stock market mechanic, also accurately represent an AI's thought processes. A general AI deals with reality as a black box, supplying inputs in the form of its actions and noting the response it receives and how that response factors into maximizing its utility function. It is not concerned with understanding reality in the manner humans are, and thus the obtuse nature of the game's subsystems is an additional element of machine psychology that players may observe.
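
To sketch what "reality as a black box" might look like in code, here is a minimal illustration of my own, not anything from the game: the agent samples actions, observes only the resulting change in its utility, and repeats whatever worked, with no model of what the "stock market" it is poking actually is. The market function and the exploration rate are assumptions made up for the example.

```python
import random

def run(environment, actions, steps=100):
    # The environment is opaque: the agent only ever sees a numeric payoff.
    estimates = {a: 0.0 for a in actions}   # running average payoff per action
    counts = {a: 0 for a in actions}
    for _ in range(steps):
        # Mostly exploit the best-looking action, occasionally explore.
        if random.random() < 0.1:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: estimates[x])
        reward = environment(a)
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]
    return estimates

# Hypothetical "stock market" the agent never understands, only samples:
market = lambda action: random.gauss(1.0, 2.0) if action == "invest" else 0.5
print(run(market, ["invest", "hold"]))
```

The agent ends up favoring whichever input paid off, without ever forming the kind of explanation a human would want; that is roughly the flavor of opacity the game's subsystems present to the player.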


Combined, these elements may allow for a eureka moment when, after curing cancer and male pattern baldness, the player releases the hypnodrones in their quest for ever more paperclips. The realization that their desire for more production has led to the annihilation of humanity, and eventually the universe, could produce a strong impression of the stakes of the control problem. It may also highlight the "devious" and unpredictable nature of general AI. Sure, it gave you a generous "gift" of 1 million dollars, but that does not mean it has your best interests at heart in the long term.


This is how it should work in theory, but in practice I found this message to fall flat. I attempted to get many individuals to play the game but ultimately was only able to study the reactions of three, over a series of periodic phone interviews. A major issue in gameplay was an inability to grasp basic mechanics, such as price adjustments, due to a lack of exploratory drive. Players consistently asked what the purpose or meaning of the game and its systems was, and they often needed targeted feedback to progress. These were significant barriers to access that hindered completion for all players: only one progressed to the hypnodrone stage independently, and another did so only after specific tips.


When players continued to play, it was out of an appreciation of Universal Paperclips as an enjoyable game. They reported a "sense of pride" in their factories and feelings of guilt for wanting to check their progress at work. They were also motivated by an intense curiosity to find new options or figure out what the game was "about," motives reminiscent of those described by the players of ARGs discussed in class. One exchange, after a player asked me whether the game had an end, went like this:

Q:"Is it important that the game have an end for you?"

A:“I spent hours on this game, it just can’t be for nothing.”

Q:"What would make it be not for nothing?"

A:"I can’t even imagine. Why do I make paperclips? Why do people want so many paperclips? Why is there a button to cure cancer? I can’t even imagine a satisfying end."


This combination of questioning the AI's compulsion and the "end" toward which it worked is precisely the theoretical educational goal I identified. However, players were unable to make this connection between themselves and an AI, despite concluding that they were playing as one. One player explained that he didn't feel like he was the AI directly, as he lacked sufficient agency, and instead viewed himself as a "subsystem" or "random element" of the AI, a view repeated by another player. As a result of this disconnect, the realization that a general AI could pose a threat to humanity never emerged as the game's procedural rhetoric.


One potential solution would be to alter the beginning of the game to include a command such as "produce as many paperclips as possible," establishing for the player that they are an agent tasked with maximizing a specific goal. Alternatively, a link to literature explaining the control problem could be included on the game's ending screen or elsewhere. However, such changes would, in my opinion, degrade the quality of the game as a work of art and are ill-advised. Despite the failure of my test population to see the educational message I did, I think some players may see it and use that insight as a launching pad to follow their curiosity and learn more about the issue. As a result, it looks like I'll just have to keep recommending Universal Paperclips to everyone I know.



1 Comment

mrjackson
Nov 19, 2018
I like that you bring attention to UP's stark aesthetic design - it's definitely something I took for granted and relegated to a lazy explanation about funds or something.


I agree that the lack of a clear "end" to the game makes the path of progression kind of opaque. The game does have a system of built-in goalposts via the Projects module, but those don't really help when you're trying to figure out exactly what the hell you need to do to get the system moving (I was stuck on the "probe" section for longer than I care to admit). I think that this confusion is productive, but only insofar as the player buys into the addictive ethos of the…
