
The Making of Surveillance Forest

designed & created by: Vamsi Grandhi & Shiao-ya (Maggie) Huang



Introduction

56.8% of the world's population uses social media apps such as WhatsApp, Facebook, and Telegram. In a highly developed country like Singapore, that figure climbs to 88.5%. It is astonishing to be so well connected and able to talk to anyone. But are we compromising too much of our privacy in exchange? Most users focus on the usefulness of these services rather than on the erosion of their privacy. Yet how would you feel if someone were always watching you?


When we are stalked or followed in the physical world, we feel uncomfortable and scared. When the same behavior happens online, however, we treat it as less serious and even normalize it, becoming desensitized. Most of us know about this issue, yet we still hand over our data and surrender our privacy to a few tech giants. We may care about privacy, but most of us don't care enough to leave the platforms we use. These companies probably know more about us than our best friends do: they know when to show us an ad for a winter coat, suggest music we will like, or serve up videos we can't resist watching. We take in whatever they feed us; their algorithms are shaping some of our thoughts, and arguably controlling them.


This robotic art installation embodies a social media user being followed, virtually, in real life: an immersive setup that watches visitors as they move around the space. It consists of eyeballs mounted on branches placed in the woods, representing social media and the media giants infiltrating our natural habitats while we pay little attention. As a visitor moves through the woods, the eyeballs turn and follow them, thereby bringing to life the tracking behavior of social media applications and creating a physical analogue of the online experience we have come to normalize.




Components Used

  • 5 x Raspberry Pi

  • 5 x Raspberry Pi Camera V2 NoIR

  • 5 x TowerPro SG90 servo (180°)

  • Jumper Wires

  • 5 x Birch wooden base

  • 5 x Birch wooden sticks

  • 5 x Acrylic balls

  • Resin

  • Spray paint

  • Acrylic paint

  • Printed iris design on paper

  • Others (tape, cable ties, screws, adhesives)



Ideation

To start with, we had several ideas: for example, a robot car that follows people around and serves them cookies, dramatizing the behavior of online cookies, or a sculpture that comes to life, its eyes following the viewer and blurring the line between who is watching whom. We were also interested in creating animatronics. Combining the follow-bot and animatronic ideas, we settled on eyeballs that follow humans. As a result, Surveillance Forest was born.




Design & Building Process


Skeleton

Our hardware design consists of a wooden base, a wooden stick, an eyeball, a servo motor, a Raspberry Pi board, and a Pi camera. First, we made a hole in the wooden base so the wooden stick could fit inside, then secured the two with epoxy glue. We repeated this for four more bases and sticks, creating skeletons for five robots of varying heights.



3D Printing

Originally, we wanted to 3D print the eyeballs in customized sizes. We designed them in SolidWorks and managed to print one working eyeball, which took around three hours. However, repeated attempts to print more copies from the same STL led to failed prints, probably because of an unclean print surface or insufficient supports. Generating more supports, though, would have made it hard to keep the eyeball surface smooth.



Eyeballs

Alternatively, we decided to use pre-made acrylic ornament balls and drill in the necessary openings, such as the camera opening. To keep the acrylic from cracking, we ran the drill in reverse and let the heat from friction melt through to create the opening instead. Next, we spray painted the inside of the acrylic balls white, which preserved the glossy look on the outside. We then airbrushed a thin mixture of pink acrylic paint onto each eyeball, stripped some red yarn into strands (blood veins) and glued them to the back half of each ball, and glued the social media iris, previously designed in Photoshop, onto the front half. Finally, we poured a layer of resin over all of them, giving the eyeballs a thick glossy finish. After the resin dried, we had to drill the camera hole again; next time we would cover the hole first with a removable material like clay or Blu Tack to avoid damaging the cured resin with the drill. We also created a hole for the servo flap, which was glued into the eyeball with epoxy.



Assembling

Next, we created a slit on top of each branch to fit the servo motor and secured it with epoxy. We wrapped each servo in masking tape before attaching it, as the servo's original blue color did not match the branches. The camera sits inside the iris of the eyeball and is responsible for tracking a person based on the color of their clothing. A one-meter Pi camera ribbon cable runs from the camera, out of the eyeball, to the Raspberry Pi, which is attached to the branch with screws. We also soldered extension wires so the servo leads could reach the Raspberry Pi.


As an improvement, we also created a gap so the ribbon cable could exit the eyeball without being pinched; we suspected that pressure on the cable might have caused the camera failures.



Software

Initially, the idea was to run a pretrained neural network for pedestrian detection so that anyone in the audience could take part in the immersive setup. However, due to time constraints and the SD card's limited storage, it would have been difficult to squeeze in all the necessary libraries. We then turned to face tracking, which worked quite robustly on a laptop camera using the Python library “face_recognition”. On the Raspberry Pi we had to switch to pure OpenCV, since the face_recognition package depends on deep learning libraries that take up too much storage. However, the results were poor when tracking faces with the Rpi V2.1 NoIR camera and OpenCV's Haar cascade filters (haarcascade_fullbody.xml, haarcascade_frontalface_default.xml, and others), especially with the camera mounted on a moving servo motor.
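For reference, a minimal sketch of the Haar-cascade approach we tried looks like the following; the capture source and detector parameters here are illustrative, not our exact setup:

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (path via cv2.data).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # laptop webcam here; the Pi camera feed differs
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces; these are typical default parameters, not tuned values.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```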


So, we decided to track the color of a person's accessories to give the audience the experience of being continuously watched. A continuous video stream is taken as input from the Pi camera. Using OpenCV, each frame is converted from BGR to HSV color space, where color detection is easier, and a color mask is defined to detect red. We tried several other colors, such as black and white, in various settings, but tracking red gave us the best results.
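Conceptually, the masking step looks like the OpenCV sketch below. Note that red wraps around hue 0 in HSV, so two hue ranges are combined; the threshold values are illustrative, not our exact numbers:

```python
import cv2
import numpy as np

def red_mask(frame_bgr):
    # Convert from OpenCV's default BGR to HSV for easier color thresholding.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red spans both ends of the hue axis, so mask two ranges and combine.
    lower = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    return cv2.bitwise_or(lower, upper)
```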

After detecting the color, a rectangular contour is drawn around it to find its center. We initialize the servo at 90°, corresponding to the center of the frame. If the center of the detected color deviates from the center of the frame, the servo is commanded to move left or right. To reduce servo jitter, we used gpiozero's pin factory setting to generate the PWM signal.
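Putting it together, a simplified version of the tracking loop might look like the sketch below. The GPIO pin, dead band, step size, and turn directions are illustrative, and we assume gpiozero's AngularServo with the pigpio pin factory for hardware-timed PWM:

```python
import cv2
from gpiozero import AngularServo
from gpiozero.pins.pigpio import PiGPIOFactory

# pigpio's hardware-timed PWM gives a much steadier pulse than software PWM,
# which is what reduces the servo jitter. GPIO 17 is an assumed pin choice.
servo = AngularServo(17, min_angle=0, max_angle=180,
                     pin_factory=PiGPIOFactory())
servo.angle = 90  # 90 deg corresponds to the center of the frame

DEAD_BAND = 30    # pixels of tolerance before the servo reacts (illustrative)
STEP = 2          # degrees moved per correction (illustrative)

def track(mask, frame_width):
    # Find the largest red blob and its bounding-box center.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cx = x + w // 2
    error = cx - frame_width // 2
    # Nudge the servo toward the target; direction depends on how the
    # eyeball is mounted relative to the camera.
    if error > DEAD_BAND:
        servo.angle = max(0, servo.angle - STEP)
    elif error < -DEAD_BAND:
        servo.angle = min(180, servo.angle + STEP)
```

Stepping by a small fixed angle each frame, rather than jumping straight to the target, also keeps the eyeball's motion smooth and lifelike.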


Check out final code here!

Video

Originally, we wanted to shoot in the lalang fields at Jurong Lakeside Gardens. Due to time constraints, however, we decided to shoot at school instead. We borrowed a Blackmagic Pocket Cinema Camera 6K and initially planned to use the green screen studio, but our eyeballs contained green and blue colors, which would have made compositing difficult. So we decided to shoot in one of the forests at NTU.


On the shooting day itself, we ran into several hardware and software problems: two Pi cameras weren't working, the virtual desktop for one Raspberry Pi stopped working, and so on. By the time everything was set up and confirmed working, the sun was already setting. We carried everything into the forest and began setting up. The ground wasn't flat, so our robots had a hard time balancing; our biggest one even fell and broke before we could shoot. And with the sun gone, the camera couldn't detect the colors well.


In the end, we decided to build a virtual forest in the lab, further incorporating technology into the immersive experience. In post-production, we emphasized the relationship between nature, humans, and technology by subtly layering technological sound effects under the atmospheric forest sounds.



Get Immersed!


