
Golan Levin and Collaborators

Interviews and Dialogues

Interview by Marco Mancuso for Direct Digital exhibition catalogue

Interview by Marco Mancuso for the exhibition catalogue of "Direct Digital", Modena.


Please tell us about your most recent body of work, your "Eye Contact Systems". 

Since 2005 or so I have been exploring the possibilities of artworks which can know something about how they are being observed, and which can modify themselves in response. I don't think the day is too far off when such a thing is commonplace in visual culture, though we will probably see this form of behavior become a standard feature of advertisements before it is widely adopted in the arts. Already, companies like Xuuk are developing enhanced public displays which can detect when someone is looking at them, count those glances, and even switch to a different image if their eye-contact count is too low. This brings a very literal quality to the old adage that the purpose of advertising is to “sell eyeballs”.

The special thing about eyes, to use a hardware analogy, is that they are both input sensors and output displays. The eyes deliver visual impressions to the brain, but they also announce the subject of our attention and reveal something about our inner feelings. For this reason, eye contact is a spectacularly interesting mode of communication, and thus a promising frontier for human-computer interaction and interactive art. There are so many ways to think about “the act of looking”, whether social, philosophical, artistic, or technological. We know from psychology experiments, for example, that men and women look at images in different ways – they focus on different features. And when we look at something which can respond to our gaze, or can even look back at us, then suddenly we become involved in a tight loop of non-verbal communication. This mode of communication, called oculesics, is the realm of our encounters with animal intelligences and of the “male gaze” as it is discussed in media studies and art history; it is also the root of the ancient notion that the eye is the “window onto our soul.” The most extreme perspective comes from reader-response critical theory, which posits that the gaze is the sole force by which artworks are constituted in the mind of the observer. I think this has interesting existential implications when those artworks are not just static texts, but intelligent agents in their own right.

For me, the main subject of my work is interaction itself. A quote I like very much is from Myron Krueger, one of the first computer artists, who wrote in 1974 that “response is the medium”. The core idea is that the true “content” of an interactive artwork is not its surface appearance, but the effect it has in the world through its interactions. For this reason I try to explore many different kinds of interactions, hopefully developing them into something provocative and moving. Most recently, my projects have used “eye tracking” and “gaze tracking” technologies in order to develop interactions based on the act of looking. “Eye tracking” refers to camera-based computer techniques for determining the location of a person's eyes, and “gaze tracking”, to the more difficult technical challenge of determining the direction in which a person is looking (or even, perhaps, what they are looking at). These technologies have actually been around for several decades, though they have always been extremely expensive and therefore limited to highly specialized purposes. Until recently, one could only encounter a gaze-tracking system in the cockpit of a military fighter plane, to help the pilot launch missiles; in the wheelchair of a seriously paralyzed person, to help them type words or move a cursor on a computer screen; or in some psychology research laboratories, which use gaze trackers to study aspects of human attention.
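As a rough illustration of the first, easier half of this problem (locating a viewer's eyes with a camera), here is a minimal sketch using OpenCV's stock Haar-cascade detectors. It is only an assumed wiring of off-the-shelf parts, not code from any of the projects discussed here; the cascade file paths and detection parameters are generic placeholders.

    #include <opencv2/objdetect.hpp>
    #include <opencv2/videoio.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>
    #include <vector>

    int main() {
        // Stock Haar cascades that ship with OpenCV; the file paths here are
        // assumptions, so point them at wherever the XML files live on your system.
        cv::CascadeClassifier faceCascade("haarcascade_frontalface_default.xml");
        cv::CascadeClassifier eyeCascade("haarcascade_eye.xml");
        cv::VideoCapture camera(0);                       // default webcam
        if (faceCascade.empty() || eyeCascade.empty() || !camera.isOpened()) return 1;

        cv::Mat frame, gray;
        while (camera.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::equalizeHist(gray, gray);

            // First find faces, then look for eyes only inside each face region.
            std::vector<cv::Rect> faces;
            faceCascade.detectMultiScale(gray, faces, 1.1, 4);
            for (const cv::Rect& face : faces) {
                std::vector<cv::Rect> eyes;
                eyeCascade.detectMultiScale(gray(face), eyes, 1.1, 4);
                for (const cv::Rect& eye : eyes) {
                    // Eye rectangles are relative to the face region; shift them
                    // back into full-frame coordinates before drawing.
                    cv::rectangle(frame, eye + face.tl(), cv::Scalar(0, 255, 0), 2);
                }
            }

            cv::imshow("eye locations", frame);
            if (cv::waitKey(1) == 27) break;              // Esc quits
        }
        return 0;
    }

Locating the eyes in this way says nothing about where those eyes are pointed; that is the much harder gaze-tracking problem described next.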

Gaze tracking systems usually require their user to be seated and, often, to wear a special helmet with a head-mounted camera system. The best systems also demand that each user work through a lengthy calibration procedure, in order to tune the system to their specific eyes. By limiting the range of the user's head movements, placing cameras very close to the user's eyes, and putting the user through a calibration routine, these systems can provide very accurate information about where someone is looking. But obviously this is a terrible configuration for appreciating art. Nobody wants to put on such a helmet, with cables sticking out everywhere, and then endure a ten-minute calibration procedure before finally being able to look at a piece of art – it is an absurd proposition. Thus I have been attempting to develop a gaze tracking system which can be used in the same way that one commonly addresses a painting: by approaching it casually, without having to wear special headgear; by walking around it freely, at a distance of perhaps a meter or two; and without any calibration time beforehand. This problem is called “unconstrained, calibration-free gaze tracking” and it is very difficult. Perhaps it will be considered solved in five or ten years. But for now, it is an active area of research at my university and in many other computer vision laboratories around the world.

The interactive project Eyecode was my initial attempt to explore the conceptual and aesthetic realm of eye-based interactions. It presents an image wholly constructed from its own history of being viewed. When the user approaches the display, she sees a grid of small videos of eyes. As she looks at these eyes, her own eyes are recorded, and in this way she necessarily contributes a new video of her own. Thus each person “looks at the looking” of the person before them, in an endless recursion. The arrangement of the eye videos resembles a kind of typography, wherein each eye is a “character”, a connection to my longtime interest in writing systems.
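Purely as an illustration of that record-and-redisplay loop (and not Eyecode's actual code: the real piece records short video clips, whereas this toy keeps a single still thumbnail per viewing), a simplified version built on OpenCV might look something like this:

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <chrono>
    #include <vector>

    int main() {
        cv::VideoCapture camera(0);
        cv::CascadeClassifier eyeCascade("haarcascade_eye.xml");   // stock OpenCV cascade
        if (!camera.isOpened() || eyeCascade.empty()) return 1;

        std::vector<cv::Mat> history;       // one eye thumbnail per past viewing
        const int cell = 120, cols = 8;
        auto lastCapture = std::chrono::steady_clock::now() - std::chrono::seconds(10);

        cv::Mat frame, gray;
        while (camera.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> eyes;
            eyeCascade.detectMultiScale(gray, eyes, 1.1, 6);

            // When a viewer's eye is found (and not too soon after the last capture),
            // crop it, shrink it to a grid cell, and append it to the history.
            auto now = std::chrono::steady_clock::now();
            if (!eyes.empty() && now - lastCapture > std::chrono::seconds(3)) {
                cv::Mat thumb;
                cv::resize(frame(eyes[0]), thumb, cv::Size(cell, cell));
                history.push_back(thumb);
                lastCapture = now;
            }

            // Re-compose and show the grid of every eye recorded so far: the image
            // each new viewer sees is built entirely from previous acts of viewing.
            int rows = std::max(1, (int)((history.size() + cols - 1) / cols));
            cv::Mat grid = cv::Mat::zeros(rows * cell, cols * cell, CV_8UC3);
            for (size_t i = 0; i < history.size(); ++i) {
                cv::Rect slot((int)(i % cols) * cell, (int)(i / cols) * cell, cell, cell);
                history[i].copyTo(grid(slot));
            }
            cv::imshow("grid of recorded eyes (illustrative)", grid);
            if (cv::waitKey(30) == 27) break;   // Esc quits
        }
        return 0;
    }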

Whereas Eyecode is essentially a feedback display for recordings of observation, the Opto-Isolator is an exploration of what it might mean for an artwork to respond to its observer with an eye of its own. For this artwork I sought to reduce eye contact to its simplest possible form. The artwork consists of a robot with a solitary mechatronic eye, which engages in a variety of familiar and uncanny forms of eye contact with its human viewers. Most of the time, this eye simply tracks its viewer. If it is observed for a long while, the robot will grow “agitated” and dart its gaze around the face of its viewer. Eventually it attempts to look away, as if shy. In addition to these behaviors, the robot also blinks exactly one second after its human visitor does. My objective with this piece is to turn spectatorship on its head, and to call into question who is really the observer and who the observed in this intimate and interactive situation.
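A hypothetical sketch of how such a behavior loop might be organized appears below. The states, the timing thresholds, and the simulated input in main() are all invented for illustration; the piece's real control software (described next) is not reproduced here.

    #include <cstdio>

    enum class Mode { Tracking, Agitated, LookingAway };

    struct EyeBehavior {
        Mode mode = Mode::Tracking;
        double observedFor = 0.0;        // seconds the viewer has been looking at the robot
        double sinceViewerBlink = -1.0;  // seconds since the viewer last blinked (-1 = none pending)

        // Called once per frame with the elapsed time and what the vision system reports.
        void update(double dt, bool viewerLooking, bool viewerBlinked) {
            if (viewerBlinked) sinceViewerBlink = 0.0;
            if (sinceViewerBlink >= 0.0) {
                sinceViewerBlink += dt;
                if (sinceViewerBlink >= 1.0) {           // blink exactly one second after the visitor
                    std::puts("robot blinks");
                    sinceViewerBlink = -1.0;
                }
            }

            observedFor = viewerLooking ? observedFor + dt : 0.0;
            if (observedFor > 15.0)      mode = Mode::LookingAway;  // avert the gaze, as if shy
            else if (observedFor > 8.0)  mode = Mode::Agitated;     // dart the gaze around the viewer's face
            else                         mode = Mode::Tracking;     // simply follow the viewer
        }
    };

    int main() {
        EyeBehavior robot;
        // Simulate twenty seconds of a visitor staring at the robot, blinking once at t = 2 s.
        for (int frame = 0; frame < 20 * 30; ++frame) {
            robot.update(1.0 / 30.0, /*viewerLooking=*/true, /*viewerBlinked=*/(frame == 2 * 30));
        }
        std::printf("final mode: %d\n", static_cast<int>(robot.mode));
        return 0;
    }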

The mechanics of Opto-Isolator were designed and built by Greg Baltus of Standard Robot Company in Pittsburgh. Greg is an absolutely amazing engineer, equally comfortable working on a Mars rover or an artwork. About ten years ago he was a member of the hacktivist group Institute for Applied Autonomy; he built their famous GraffitiWriter robot, which caused such a stir at Ars Electronica 2000. As for the software governing Opto-Isolator, it is my own, written with the open-source openFrameworks environment and the free OpenCV toolkit for computer vision.