As in life, so also in my research, I am consistently drawn to objects that produce feelings of estrangement and uncanniness. As these feelings engender curiosity, I experience them as both motivating and liberatory. And so, in relation to this week’s focus on algorithmic inequalities, I am most drawn to practices that engage algorithms as collaborators possessed of as potent a form of agency as their human counterparts. Particularly, I am interested in forms of practice wherein there is bidirectional constraint between the human and its inhuman partner. That is, I am interested in forms of practice where there is a back and forth in which both parties attempt to navigate and anticipate the other, but ultimately fail, and through failing create a strange object.
As mentioned in my group’s presentation, predictive text poetry and flash fiction are examples of this form of practice. Here the human is constrained to crafting the start phrase, choosing each subsequent word from three possibilities provided by the algorithm, and deciding when to stop. The algorithm in turn is constrained by the choices the human has made and by the weights within its dictionary - weights that are continually changing based on the human’s ongoing use of the device beyond this specific instance of writing. For me, what’s most interesting about predictive text writing is that as one builds a text this way, the recurrence of certain words and phrases suggested by the algorithm creates an odd approximation of oneself, one that blurs the distinction between self and machine. That is, as I begin by attempting to craft sense out of the algorithm’s output - as I begin by attempting to acclimate to and predict it - I quickly shift to attending to how it attempts to acclimate to and predict me. Across this shift, what I am doing - selecting words from the suggested list - stays the same. What changes is any sense that the algorithm is distinguishable from myself. Its life is its execution, and that execution is always a relation between its materiality, my own, and my habits. And so, when interacting with a predictive text algorithm while writing, I find myself constrained by a transmutation of myself that I recognize as myself even as it is deeply alien.
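To make the division of labor concrete, here is a minimal sketch of that constraint structure in Python. It uses a toy bigram model trained on an invented stand-in "typing history" (actual keyboard engines use far richer models and adapt continuously); the point is only the shape of the exchange: the algorithm offers up to three words, and the human’s only moves are the start word and an index into each offer.

```python
from collections import Counter, defaultdict

# Hypothetical corpus standing in for a user's accumulated typing history.
corpus = ("i am drawn to strange objects i am drawn to "
          "the machine the machine is drawn to me").split()

# Bigram counts: for each word, how often each other word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(word, k=3):
    """Return up to k most frequent next words - the keyboard's three slots."""
    return [w for w, _ in bigrams[word].most_common(k)]

def compose(start, picks):
    """The human's only agency: a start word plus an index (0-2) at each step."""
    text = [start]
    for i in picks:
        options = suggest(text[-1])
        if not options:  # the model has nothing to offer; the text ends
            break
        text.append(options[min(i, len(options) - 1)])
    return " ".join(text)

print(suggest("to"))                # the algorithm's offers after "to"
print(compose("i", [0, 0, 0, 0]))   # a human who always takes the first slot
```

Even in this toy, the "odd approximation of oneself" is visible: every suggestion is just the writer’s own prior habits fed back as a constraint.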
Constrained writing is of course not at all limited to its computational instantiation. (Arguably all writing is constrained once one accounts for genre, versification, syntax, audience, etc.) With respect to writing that foregrounds constraint as its raison d’être, Oulipo comes to mind as one of the more prominent and productive constrained writing movements of the last century. Their exercises heavily limit a writer’s agency in order to produce strange juxtapositions and novel patterns that might otherwise not surface from beneath said writer’s idiolect and style. (Effectively, Oulipo pursued surprise much as modern neural nets do, though with vastly different techniques.) The shock of distance and oddness produced by their constraints comes from being forced to translate one’s thoughts into a newly estranged language, and through that translation to confront one’s inherent being as a relation between one’s body and a nonhuman other. In the case of Oulipo, that nonhuman other is natural language.
To end, Ross Goodwin, an artist working with AI (which he considers a collaborator), has a project called word.camera, ‘an image-to-text narrator […] that automatically generates poetry from photographs using machine intelligence.’ This project began with a physical camera that Goodwin would carry out into the world, but it has since expanded to an online app that anyone can use, provided their device has a camera. I took a few shots earlier tonight, and, after matter-of-factly labeling a few of the items it could see in my room, the app proceeded to produce associational poetry, in which my image is as much a function of my face and my home as it is of this instance of machine vision and its strange involutions of natural language.
Works Cited
Goodwin, Ross. word.camera. Future of Storytelling. https://futureofstorytelling.org/project/word-camera
The word.camera project is really interesting. However, I wonder how much of this program is influenced by the emotions displayed in the photo, or maybe even by affect. For example, in one of the photos you used you are not smiling, and one of the lines states ‘lightly freckled with anger’. I’m also really interested in the focus on whiteness. In both of your photos, the program picked up on the white objects in your backgrounds more than any other color. Is that because the program just picks up on the white in an image better, or is there more associational poetry related to white objects? Really interesting program, though.
This is so rad! And it ties into something else I noticed this week about predictive text (a feature which I assume is from the latest iOS update)--a friend messaged me to "send photos," at which point my predictive text generated the option to "Choose Photos" (see attached screenshot). Predictive "text" seems to be moving into the world of "predictive actions/choices"... or maybe these were always the same thing in different forms? I feel like there's more to be said on this point, but word.camera as an active object fits nicely into this blurring between text/image/action.
Super interesting! I particularly love how the app continues to "see" the person as it generates its text, superimposing the text as it is written character by character on the live camera feed from the computer/device. I'd be curious to see how the output changes depending upon the environment, or if the program prefers domestic settings (cats, curtains, beds).