Printed from www.flong.com
Contents © 2014 Golan Levin and Collaborators
JJ (Empathic Network Visualization)
2002 | Golan Levin and RSG (Radical Software Group)
Synopsis: JJ is a software agent that uses facial expressions to visualize the emotional content of network traffic. Serving as both a network surveillance tool and an empathic information visualization, JJ is implemented as a client for Carnivore, an open-source framework for network surveillance applications.
While many visualizations rely on charts or graphs to convey numeric data, other visualization research has leveraged certain affordances of human cognition in order to represent information in a more qualitatively readable way. One important example of this is the work of Herman Chernoff, who pioneered the use of cartoon faces as a tool for portraying high-dimensional multivariate data. Chernoff's research demonstrated that our intuitive and highly sensitive ability to interpret facial expressions could be incorporated into unusually legible visualizations of complex information.
JJ is an autonomous software agent who displays facial expressions appropriate to the emotional content of the words that are presented to him. Implemented as a Carnivore Client, JJ literally "puts a face" on the information transmitted through his host network, in order to provide a data visualization of the network's "emotional content." JJ operates according to a mapping established between two well-known psychological databases: (A) Ekman and Friesen's set of "universal facial expressions" — the set of face photographs which have been shown to embody basic cross-cultural human emotions (namely: anger, fear, surprise, disgust, sadness and pleasure) — and (B) the Linguistic Inquiry and Word Count (LIWC) dictionary by Pennebaker, Francis, & Booth, which categorizes the "emotional associations" of several thousand common English words, and provides an efficient and effective method for evaluating the various affective components present in verbal and written speech samples.
JJ scans his host network for text packets, reading each packet one word at a time. When JJ finds a word that matches a term in the LIWC dictionary, his emotional state (represented as an array of affective activation levels) is updated in response to that word's emotional associations. JJ then displays a (morphed) mixture of facial expressions, weighted according to the current intensity of his different emotions. Considered cumulatively, JJ's expressions reflect the overall "mood" of his information environment in an extremely simple, yet direct and unmistakable way.
At present, JJ's emotional responses conform to those of Pennebaker's statistical "everyman": for example, if JJ sees a word commonly associated with disgust, then he will present a "disgust" face. An alternate version of JJ could permit his user to modify these associations, and thus modify JJ's apparent personality (so, for example, a "perverted" JJ might appear happy when he hears a 'disgusting' word, while a "repressed" JJ might appear angry).
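The update-and-blend cycle described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the lexicon entries, weights, decay constant, and function names are invented stand-ins, since the real LIWC dictionary is licensed and JJ's actual source is not reproduced on this page.

```python
# Hypothetical sketch of JJ's core mechanism, not the actual source code.

EMOTIONS = ["anger", "fear", "surprise", "disgust", "sadness", "pleasure"]

# Toy stand-in for the LIWC dictionary: word -> emotional associations.
TOY_LEXICON = {
    "hate":  {"anger": 1.0},
    "yuck":  {"disgust": 1.0},
    "wow":   {"surprise": 0.8, "pleasure": 0.4},
    "alone": {"sadness": 0.9},
}

DECAY = 0.95  # activations fade as each new word arrives

def update_state(state, word, lexicon=TOY_LEXICON):
    """Decay all activations, then add the word's emotional associations."""
    state = {e: a * DECAY for e, a in state.items()}
    for emotion, weight in lexicon.get(word.lower(), {}).items():
        state[emotion] = state.get(emotion, 0.0) + weight
    return state

def expression_weights(state):
    """Normalize activations into blend weights for the morphed face."""
    total = sum(state.values())
    if total == 0:
        return {e: 0.0 for e in EMOTIONS}  # neutral face
    return {e: state.get(e, 0.0) / total for e in EMOTIONS}

def remap(lexicon, mapping):
    """Re-wire word->emotion associations, e.g. a 'perverted' JJ that
    responds to disgusting words with pleasure."""
    return {w: {mapping.get(e, e): v for e, v in assoc.items()}
            for w, assoc in lexicon.items()}

# Feed JJ a few words and read off the blend weights for his face.
state = {e: 0.0 for e in EMOTIONS}
for word in "wow i hate this yuck".split():
    state = update_state(state, word)
weights = expression_weights(state)  # "disgust" dominates here
```

The `remap` helper corresponds to the "alternate version" described above: passing `{"disgust": "pleasure"}` produces a lexicon whose disgusting words now elicit a happy face.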
Carnivore, created by RSG, is a surveillance tool for data networks. At the heart of the project is CarnivorePE, a software application that listens to the Internet traffic (email, web surfing, etc.) on a given local network. CarnivorePE serves this datastream over the net to a variety of interfaces called "clients." These clients are each designed to animate, diagnose, or interpret the network traffic in various ways. Carnivore clients have been produced by a number of computational artists and designers from around the world.
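As a sketch of the client side of this architecture: the code below assumes, hypothetically, that the server delivers packet payloads as newline-delimited text, and shows how a client like JJ might tokenize such a feed into words. The wire format and connection details here are assumptions for illustration, not the documented Carnivore protocol.

```python
import io
import re

def words_from_feed(feed):
    """Yield lowercase words from a line-oriented packet feed.

    Each line is assumed to be one packet payload rendered as text;
    the real CarnivorePE wire format may differ (hypothetical sketch).
    """
    for line in feed:
        for word in re.findall(r"[A-Za-z']+", line):
            yield word.lower()

# A real client would read from a socket connected to CarnivorePE;
# here an in-memory buffer stands in for the network feed.
feed = io.StringIO("GET /index.html HTTP/1.1\nI hate Mondays\n")
words = list(words_from_feed(feed))  # ['get', 'index', 'html', ...]
```

A word stream like this is what would drive the emotional-state updates described earlier.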
Carnivore and CarnivorePE were developed by Alex Galloway and the Radical Software Group (RSG) in 2001-2002. Many thanks are owed to Alex Galloway and Mark Daggett for their generous encouragement and support of JJ. The facial images used in JJ were scanned from Ekman & Friesen's 1972 book and from Young (1997); JJ gets his name from the subject photographed in the 1972 study. The LIWC dictionary used in the JJ software was purchased from Lawrence Erlbaum Associates.
Bates, J. (1994). The Role of Emotion in Believable Agents. Technical Report CMU-CS-94-136, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Chernoff, H. (1973). Using faces to represent points in k-dimensional space graphically. Journal of the American Statistical Association, 68, 361-368.
Ekman, P., Friesen, W.V., & Ellsworth, P. (1972). Emotion in the Human Face. Cambridge University Press.
Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). Sussex, U.K.: John Wiley & Sons, Ltd.
Ippolito, J. (2002). Carnivore. Artforum, June 2002.
Pennebaker, J.W. (2002). What our words can say about us: Toward a broader language psychology. Psychological Science Agenda, 15, 8-9.
Young, A.W., Rowland, D., Calder, A.J., Etcoff, N.L., Seth, A., & Perrett, D.I. (1997). Facial expression megamix: Tests of dimensional and category accounts of emotion recognition. Cognition, 63, 271-313.