Artificial intelligence is everywhere, in every discipline, in our daily lives—motion capture, robotics, AR, VR, 3D, skeleton-detection algorithms. So we ask: what shape does technology, and specifically AI, take in dance, the most corporeal of art forms? Most recently, cutting-edge technologies have blown open choreographic possibilities while raising pressing questions: What happens to the corporeal body in AI? Can a robot that learns choreography make the choreographer redundant, or inspire her to greater work? What might be the political implications of, and the biases inherent in, the technology? Who holds the power: humans or machines? Are we entering the realm of the post-human in dance?
Yet dancing with the machine is not new. In “Homemade” (1966), Trisha Brown danced with a projector strapped to her back; the filmed dance was projected onto the walls, floor, and ceiling in synchronization with the live dancers. In Charles Atlas’s video experimentation with Merce Cunningham, “Blue Studio” (1975), images of landscapes, filtered through a blue-screen backdrop (courtesy of the then-new technology “chroma key”), were superimposed over the dancer. For digital artist Paul Kaiser’s 1999 work “Ghostcatching,” motion capture sensors attached to Bill T. Jones’s body let a computer create virtual renderings of his movements.
More recently, though, choreographers such as Pontus Lidberg, Wayne McGregor, William Forsythe, Jacob Jonas, Katherine Helen Fisher, and Mimi Yin have made inroads in the technological landscape with AI algorithms. By using the technology as an exploratory creative tool and as an integral part of the choreography, these artists, and others, aim to expand audiences’ perceptual faculties and help us literally “see” dance in new ways. Forsythe, for instance, uses 3D computer animation to explore the structures of a dance and how those structures can be expressed visually. Lidberg, on the other hand, is less interested in creating movement from AI than in exploring what happens when human beings (dancers, the audience) interact with AI and how that interaction may change them. Lidberg wonders: “Can the dancers provide a body to an AI? Can the AI give instruction and provide a discursive landscape for the dance? Are emotions bound to the body, or can they be conditioned?”
Enter Rashaun Mitchell + Silas Riener, who are interested in all these questions. Alumni of the Merce Cunningham Dance Company, Mitchell and Riener broke new ground in their 2017 dance work “Tesseract,” performed at the Brooklyn Academy of Music. A visually opulent work playing with the notion of dimensionality, “Tesseract” opened with a 3D dance film (directed by Charles Atlas), followed by six dancers, in diaphanous dress, moving in front of a large scrim. Their dancing was captured by mobile cameras that projected the dancers back onto the scrim, where they appeared as wondrous,
spectral presences. For “Open Machine,” Mitchell and Riener have created an intricate, multi-layered work that incorporates a range of technological modalities, from AI image generation and algorithms to electronic music and video. Using large language models, “Open Machine” employs an eight-channel live speech-to-text transcription, projected onto LED walls mounted above the stage. Dancers, computerized images, and text are in constant real-time play with one another. As Silas Riener said of the work, “It is structured and organized around the relationships between the dancer and the dancer’s own body; the dancers’ relationship to others; and the dancers in relation to what’s happening in the music and on the screens.” Ultimately, “Open Machine” promises to be a rich meditation on the merging of human and machine intelligence and the kinds of social and spatial structures that govern our bodies on and off the stage.
Julie Malnig (Professor, NYU Gallatin) is a cultural historian of theater and dance performance.