At the beginning of this project, I explored how to use p5.js and
face-tracking to create a symbolic selfie, using my face as data and
turning it into different visual expressions. I followed tutorials to
understand how FaceMesh works, and then I started changing the
layers and drawing styles to see how identity could be shown through
movement and interaction.
I also experimented with 3D scanning to explore a more
physical way of representing my identity. I scanned different objects
that were meaningful to me, such as a cable pouch I always carry, a
wooden yoyo from my childhood, and a pinecone from my neighborhood.
Some scans worked well, but others were difficult because thin or
complex shapes did not scan cleanly. I imported the better scans into
Cinema 4D, cleaned up the meshes, and tested simple renders.
After comparing the two directions, I found that
face-tracking gave me more room to experiment with identity,
emotion, and self-representation in real time. Because of this, I
chose the p5 symbolic selfie as my final project path.
At first, the face appears blurred. As the process continues, the
image breaks into small moving fragments. At this stage, the visuals
express how personal identity begins to lose clarity when it is
analyzed and divided by digital systems. In the final stage, the face
becomes garbled text: visual detail is replaced by simple
symbols. This change shows how a person can be reduced to readable
data rather than seen as a human presence.
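The final garbled-text stage can be sketched as a brightness-to-character lookup: each sampled pixel picks a character from a density ramp, so dark areas read as dense symbols and bright areas as sparse ones. This is a minimal, framework-free sketch; the ramp string and the grid shape are illustrative assumptions, not the project's actual values.

```javascript
// Map a brightness value (0-255) to a character from a density ramp.
// Darker pixels pick denser characters; lighter pixels pick sparser ones.
const RAMP = "@#%*+=-:. "; // dense -> sparse (assumed ramp)

function brightnessToChar(b) {
  // Clamp, then scale 0..255 onto the ramp's index range.
  const clamped = Math.min(255, Math.max(0, b));
  const i = Math.floor((clamped / 255) * (RAMP.length - 1));
  return RAMP[i];
}

// Convert a grayscale grid (rows of brightness values) into lines of text.
function gridToText(grid) {
  return grid.map(row => row.map(brightnessToChar).join("")).join("\n");
}
```

In a p5 sketch, the grid would come from sampling the video's pixels at a fixed cell size, with each character drawn back onto the canvas with `text()`.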
The process used ml5.js FaceMesh to track facial keypoints in real
time. I then isolated the face using a bounding box, manipulated the
pixel data, mapped brightness values to characters, and combined
multiple visual layers as textures in 3D space.
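The bounding-box step can be sketched as a min/max pass over the keypoints FaceMesh reports for a face (each with pixel `x` and `y` coordinates). The padding factor here is an assumption added for illustration; newer ml5.js versions also expose a box on each detected face directly.

```javascript
// Compute a padded bounding box around an array of {x, y} keypoints,
// such as the per-face keypoints returned by ml5.js FaceMesh.
function faceBoundingBox(keypoints, pad = 0.1) {
  let minX = Infinity, minY = Infinity;
  let maxX = -Infinity, maxY = -Infinity;
  for (const kp of keypoints) {
    minX = Math.min(minX, kp.x);
    minY = Math.min(minY, kp.y);
    maxX = Math.max(maxX, kp.x);
    maxY = Math.max(maxY, kp.y);
  }
  // Expand the box slightly so the whole face fits inside it.
  const w = maxX - minX;
  const h = maxY - minY;
  return {
    x: minX - w * pad,
    y: minY - h * pad,
    w: w * (1 + 2 * pad),
    h: h * (1 + 2 * pad),
  };
}
```

In p5, the resulting box can be handed to `video.get(x, y, w, h)` to copy just the face region before manipulating its pixels.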