Module 3 Formstorming

Weekly Activity Template

Zixin Zhang


Project 3


Module 3

At the beginning of this project, I explored how to use p5 and face-tracking to create a symbolic selfie, using my face as data and turning it into different visual expressions. I followed tutorials to understand how the facemesh works, and then I started changing the layers and drawing styles to see how identity could be shown through movement and interaction.

I also experimented with 3D scanning to explore a more physical way of representing my identity. I scanned different objects that were meaningful to me, such as a cable pouch I always carry, a wooden yoyo from my childhood, and a pinecone from my neighborhood. Some scans worked well, but others were difficult because thin or complex shapes did not scan cleanly. I imported the better scans into Cinema 4D, cleaned up the meshes, and tested simple renders.

After comparing the two directions, I found that face-tracking gave me more space to experiment with identity, emotion, and self-representation in real time. Because of this, I chose the p5 symbolic selfie as my final project path.

Activity 1

I watched Harold’s 2D tutorial to understand how the p5 facemesh example works. I started modifying the facemesh tracking code and exploring how changing certain parameters affects the tracking output. I tried changing the elements to create different identities, and I experimented with how reordering the layers could produce different visual expressions.

I used the template from Giorgia Lupi’s data portrait ideas to explore how personal information can be visualized. I studied the layout and tried to understand how colours, lines, and shapes work together to represent identity. Following that template, I created my first data portrait sketch in Adobe Illustrator; this version represents who I am right now. Next, I designed a second data portrait to express who I want to be. This sketch focuses on my “dream identity,” using a different visual language to contrast with the first portrait. Finally, I created a refined version of both data portraits. These final iterations show two distinct visual identities, my present self and my aspirational self, with different compositions.

I put my data portrait elements into the p5 sketch and used facial recognition to play with different angles. I collected data about my design process: large yellow shapes form the background to represent the messy thoughts in my mind, and the colourful dots show ideas appearing from it.
However, after testing, I realized that the rectangular layout and large size were not ideal for facial recognition, so my next step is to reduce the size and add more empty space to make the visual work better with face tracking.

I also collected data about my hobbies. I drew simple shapes and icons that represent the different activities I enjoy or have tried before; this design uses illustration to visualize my everyday identity. Finally, I collected data about the questions that inspire me. I used my face as data and placed reflective questions onto different geometric shapes. Each shape carries a question, encouraging people to think about their own identity as the geometry shifts with the face tracking. For each of these designs, I put the visual elements into the p5 sketch and used facial recognition to play with different angles.
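Playing with different angles in the p5 sketch comes down to remapping a tracked face position onto a transform for the portrait elements. A minimal sketch of that idea, assuming a p5-style map helper and a hypothetical noseX value coming from the face tracker (names and ranges are illustrative, not the actual project code):

```javascript
// Remap a value from one range to another, like p5.js map().
function remap(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Hypothetical example: turn a tracked nose x-position (0..640 video
// pixels) into a rotation angle (-45..45 degrees) for a portrait element.
function noseToRotation(noseX, videoWidth = 640) {
  return remap(noseX, 0, videoWidth, -45, 45);
}

console.log(noseToRotation(320)); // nose centered → 0 degrees
console.log(noseToRotation(0));   // nose at left edge → -45 degrees
```

Inside a real p5 draw loop, the returned angle would feed rotate() before drawing the element, so the portrait leans as the face moves.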

Activity 2

My classmate and I tried scanning an object for the first time during class. We placed the model on the paper board and experimented with how to move the camera around it. Even though the first scan turned out very poorly, we reviewed what went wrong and adjusted our scanning method: we slowed down our movement and captured more angles around the object. I took a screenshot during the scanning process and noticed that even when the stripes disappear, it doesn’t always mean the scan is complete; the top or bottom can still be missing. Our test object was a plush toy my classmate chose, and the result came out surprisingly clear, with good detail from the side, front, and back views.

After the initial experiments, I moved on to scanning the object that I chose, a cable pouch. It’s something I always carry whenever I bring my backpack, so it feels like an essential everyday item. When I scanned it, the details were captured very clearly; the texture and shape came through with good clarity from the top, front, and side views.

I then scanned a hair tie with a bow, which I chose because it symbolizes my identity as a female. The final scan quality was not very good. The bow shape was captured, but because the band is so narrow, the scanner picked up surrounding textures and merged them incorrectly. Some details on the band were missing, and the edges turned out blurry and unclear.
I chose a wooden yoyo because it was one of my favorite toys when I was a child, and it represents a part of my early entertainment and memories. When I scanned it, the wooden surface came out clear; the wood grain and the painted zig-zag patterns were captured well. However, the string had the same problem as my earlier scans: it is too thin, so the scanner blended its texture with the table underneath.

The pinecone is something I picked up from my neighborhood, so I chose it to represent the environment I live in. However, even after scanning it many times, the results were not very good. The structure of a pinecone is made of many small, repeated layers that sit very close together, and the scanner had trouble separating them. A lot of the details blended into each other and created a messy model. This type of object is not very suitable for 3D scanning because of its complex and tightly packed structure. I also tested scanning my face, and I noticed that hair was much harder to capture; the scan made the hair look blocky and unnatural.

To clean up the yoyo mesh, I exported it as an OBJ file and opened it in Cinema 4D. When I switched to Polygon Mode, I noticed that the surface was covered with a very dense mesh. To improve the model without losing too much quality, I used Polygon Reduction and tested different values until I found a balance between fewer polygons and a clean surface. After that, I fixed the detailed areas: I deleted the bottom plane that came from the scanning surface, removed the messy geometry around the string where the textures had merged with the table, and filled the gaps at the bottom of the yoyo to make the shape more complete. Once the cleanup was done, I rendered a quick preview to check the final quality.

Reflexive Workshop 1 & 2

For this selfie, our team explored the idea of exclusion in design. In the scene, one person reaches out for a handshake, while the other person cannot complete the gesture because he does not have an arm. This moment creates a visible gap between them, which becomes a metaphor for how certain groups, especially people with disabilities, are often left out of design decisions. Another selfie shows a student with ADHD and the challenges she faces when using digital tools. We staged an app screen full of notifications to show how easily she can feel overwhelmed. While creating this setup, I realized how often designers forget about people with attention difficulties, simply because their needs aren’t always visible. The last selfie focuses on a design student with mild visual impairment. I created a scene where she is looking at her phone and trying to read text that is too small, and I added simple illustrations like glasses to show her need for clearer visuals. Through this process, I found that small design choices, such as text size, can affect someone’s comfort and ability to focus.



Final Project 3 Design

Link to Final Design

At first, the face is blurred. As the process continues, the image breaks into small moving fragments. The visual at this stage expresses how personal identity begins to lose clarity when it is analyzed and divided by digital systems. In the final stage, the face becomes garbled text, where visual details are replaced by simple symbols. This change shows how a person can be reduced to readable data rather than seen as a human presence.

The process used ml5.js FaceMesh to track facial keypoints in real time. I then isolated the face using a bounding box, manipulated the pixel data, mapped brightness values to characters, and combined multiple visual layers as textures in 3D space.
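Two of these steps can be sketched as plain functions: deriving a bounding box from the tracked keypoints, and turning pixel brightness into a character. This is a minimal, hypothetical version of the logic, not the project code itself; the keypoint shape follows the ml5 convention of objects with x and y in pixels, and the character ramp is an assumption:

```javascript
// Compute a bounding box around the tracked face from ml5-style keypoints,
// where each keypoint is an object with x and y in pixels.
function faceBoundingBox(keypoints) {
  const xs = keypoints.map(p => p.x);
  const ys = keypoints.map(p => p.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  return { x: minX, y: minY, w: Math.max(...xs) - minX, h: Math.max(...ys) - minY };
}

// Map a brightness value (0..255) to a character, dark to light:
// denser glyphs stand in for darker pixels.
const RAMP = "@#%*+=-:. ";
function brightnessToChar(b) {
  const i = Math.min(RAMP.length - 1, Math.floor((b / 256) * RAMP.length));
  return RAMP[i];
}

console.log(brightnessToChar(0));   // "@" for a black pixel
console.log(brightnessToChar(255)); // " " for a white pixel
```

In the real sketch, brightnessToChar would run over every sampled pixel inside the bounding box, producing the garbled-text stage of the face.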
