General Description

WHATAREYOULOOKINGAT is an audience-involved, interactive meta-instrument for data sonification. The overarching goal of the performance is to extract data from the audience in a manner that emulates government-sanctioned privacy infringement, manipulate and share that data among the three primary performers, and present it as surround-sound audio and interactive, three-channel video projection.



Program Notes


Performs strong and soft selection of a target's real-time activity: indexes every e-mail address seen in a session by both username and domain, logs every file seen in a session by both filename and extension, scans client-side HTTP traffic, collects every phone number and associated content from digital cell phone calls, and aggregates chat activity to include username, buddylist, and machine-specific cookies.
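The dual indexing described above — every e-mail address keyed by both its username and its domain — can be sketched in a few lines of Python. The session text and function name here are illustrative assumptions, not part of the piece or of any actual collection system:

```python
import re
from collections import defaultdict

def index_addresses(session_text):
    """Index every e-mail address in a session under both its
    username and its domain ('strong and soft' selection)."""
    by_username = defaultdict(set)
    by_domain = defaultdict(set)
    for addr in re.findall(r"[\w.+-]+@[\w.-]+\.\w+", session_text):
        username, domain = addr.split("@", 1)
        by_username[username].add(addr)
        by_domain[domain].add(addr)
    return by_username, by_domain

# Invented sample capture for illustration only.
session = "From: alice@example.com To: bob@example.org, alice@example.org"
users, domains = index_addresses(session)
```

Indexing each address twice is what lets a "soft" selector (a whole domain) and a "strong" selector (a specific account) both resolve to the same captured traffic.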


An exploitation technique that takes advantage of web-based protocols and a man-in-the-middle position. It influences real-time communications between client and server and can quietly redirect web browsers to FOXACID malware servers for individual client exploitation. This allows mass exploitation of clients passing through network choke points, but is configurable to provide surgical target selection as well.


Broad real-time monitoring of online activity, including YouTube video views, URLs "liked" on Facebook, visits to Blogspot and Blogger, and other social media activity.

Technical Description

A variety of technologies are employed to create the immersive, data-gathering environment that forms the centerpiece of WHATAREYOULOOKINGAT. One of the primary sources of personal information is the Facebook API. Using a publicly available URL standard, WHATAREYOULOOKINGAT retrieves a wealth of information uploaded to a person's Facebook page, including their profile picture, birthdate, city of residence, profession, and assorted tastes in music, movies, and books. The text from this rich source of data is presented via Apple's text-to-speech tool, modified in Max for streamlined generation and vocal variation.
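A minimal sketch of this retrieval pipeline, in Python: the field list, API version, and placeholder token below are assumptions for illustration (the piece's exact Graph API query is not documented here), and a real request would require a valid access token.

```python
import subprocess
from urllib.parse import urlencode

# Hypothetical field list; the piece's actual query may differ.
FIELDS = "name,birthday,hometown,work,music,movies,books,picture"

def graph_url(user_id, fields=FIELDS, token="ACCESS_TOKEN"):
    """Build a Facebook Graph API request URL for a user's public fields."""
    query = urlencode({"fields": fields, "access_token": token})
    return f"https://graph.facebook.com/v2.0/{user_id}?{query}"

def speak(text, voice="Alex"):
    """Hand retrieved profile text to the macOS `say` text-to-speech tool
    (the command-line face of Apple's speech synthesizer)."""
    subprocess.run(["say", "-v", voice, text])

url = graph_url("1234567890")
```

In the performance the returned text is not spoken directly from the shell but routed through Max, where its rate and voice are varied per audience member.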

The visual elements relied on a custom-designed interactive lighting system. A projector mounted to the lighting grid was reflected 90º down to the floor by a first-surface mirror. An Xbox 360 Kinect, mounted alongside the projector, was used for its inexpensive infrared camera. To illuminate the darkened performance space, a medium-sized infrared LED lamp was also mounted near the Kinect. The system was routed into Max 6 and made use of Jean-Marc Pelletier's Computer Vision for Jitter and the OpenKinect project's libfreenect library. The culminating three-channel projection again used the Facebook API to retrieve publicly accessible images of both audience members and strangers in real time and display them as an immersive "sea of faces."
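The core of the Kinect tracking stage is depth thresholding: keep only pixels whose measured distance falls inside a band where audience bodies are expected. The sketch below illustrates the idea in plain Python on an invented miniature depth frame; the distance band and function names are assumptions, and the actual piece performs this with Computer Vision for Jitter inside Max rather than in Python.

```python
# Assumed detection band, in millimeters, for bodies on the floor.
NEAR_MM, FAR_MM = 500, 2500

def presence_mask(depth_frame):
    """Mark pixels whose depth (mm) falls inside the detection band."""
    return [[1 if NEAR_MM <= d <= FAR_MM else 0 for d in row]
            for row in depth_frame]

def blob_area(mask):
    """Crude activity measure: count of in-band pixels."""
    return sum(sum(row) for row in mask)

# Tiny invented depth frame standing in for libfreenect output.
frame = [[3000, 1200, 1300],
         [3000, 1100,  400],
         [3000, 3000, 3000]]
mask = presence_mask(frame)
```

The resulting mask is what drives the floor projection: regions where audience members stand light up, and the total in-band area can modulate the sonification.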

Related Work

Sol Invitus

fixed 5-channel projection and 2-channel audio

Double Helix

interactive projection and generative 2-channel audio

INSTRUMENT | One Antarctic Night

interactive VR installation