This video was generated by a generative adversarial network-based object detection model, a machine-learning technique that produces plausible synthetic instances. Self-shot images depicting a transparent figure's daily life were added to the training database, and the model failed to recognize and regenerate the transparency. The resulting video is ambiguous, aiming to liberate the figure from symbolic representation and surveillance. The character, whose identity is unrecognizable and who retains only a sign of existence, has retreated into the surrounding environment and the anonymity of the masses. Through Amazon Mechanical Turk, anonymous online workers were hired to answer questions about the video. Lisha Nie designed the sound, which corresponds to the workers' answers.