Module 3 Formstorming

Emily Revell


Project 3


Assignment 1

This week we explored the abstraction of data and how we can create portraits using our own data. There were two quizzes from a TED talk article that we could start with. The quizzes asked this-or-that questions and showed us how to visualize the answers. I found it fun to self-reflect and share more about myself, even though for some questions I would pick an in-between answer rather than commit. The hardest part of this activity was making sure that every part of the design saved properly and fit inside the viewport. Initially I did not save the pieces of the design correctly, so everything became squished into the middle until I added a no-stroke, no-fill rectangle to every part of the design. My next issue was that all of the pieces were so big that, with my face a normal distance from the webcam, you could not see the full design unless you stood very far away. To fix this, instead of a 1000px by 1000px artboard I used a 300px by 300px one. In the end my favourite design was the more square one, because when you moved your face the elements would move in a satisfying circular motion (especially with a low liveliness value). In general, this activity helped us think about reflexivity and how to translate personal data into an abstract representation using code.
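For reference, the artboard fix above is just a uniform scale. An alternative to re-exporting at 300px would have been to scale the exported parts in code; a minimal JavaScript sketch of that idea (the `scalePart` helper and the example part sizes are hypothetical, not taken from the actual template):

```javascript
// Hypothetical helper: resize a part exported from a large artboard so the
// whole design behaves as if it were drawn on a smaller artboard.
function scalePart(part, fromArtboard, toArtboard) {
  const s = toArtboard / fromArtboard; // e.g. 300 / 1000 = 0.3
  return { w: part.w * s, h: part.h * s };
}

const eye = { w: 400, h: 200 };           // part exported from a 1000px artboard
const scaled = scalePart(eye, 1000, 300); // equivalent to a 300px artboard
console.log(scaled);                      // roughly { w: 120, h: 60 }
```

Shrinking the artboard in Illustrator does the same thing but keeps the code untouched, which is why I went that route.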

• My answers to the BlazeFace quiz, a sketch of those answers, and the recreation of the BlazeFace sketch made in Adobe Illustrator.
• First attempt in the code: after saving the parts from Illustrator, everything was centered. I asked Lucy what I did wrong, and she showed me that each part has to be saved with a no-stroke, no-fill rectangle. This ensures that the parts of the sketch remain in the spot you originally intended.
• After fixing the Illustrator file and putting the pieces back into the coder, the design was far too big: I had to go almost halfway across the room for all of the pieces to fit in the window. This shows how the second attempt reacted with the face at a normal distance from the webcam.
• To fix this I made the artboard smaller. I started with a 1000px by 1000px board and ended up with a 300px by 300px one so the design would fit in the window.
• A properly working BlazeFace demonstration with the face at an appropriate distance from the webcam.
• Video of the BlazeFace illustration with a higher liveliness value, plus an image of the code so that I can remember the values that were changed. With a higher liveliness it takes much less head movement for the illustration to move, and it often moves very sporadically.
• Video of the BlazeFace illustration with a much lower liveliness value (the original value was 0.75), plus an image of the code. With a lower liveliness it takes much more head movement for the illustration to move, and all pieces move very smoothly (almost an ease-in-and-out effect).
• My answers to the TED talk article about data portraits, a sketch of those answers, and the recreation of the TED talk sketch made in Adobe Illustrator.
• The pieces of the TED talk sketch inside the BlazeFace code, showing the order all the pieces are in.
• Video of the TED talk design with a very high liveliness value, plus an image of the code. With a very high liveliness, many of the elements overlap with one another in a circular motion.
• Video of the TED talk design with a very low liveliness value, plus an image of the code.
• I retook the BlazeFace questionnaire, this time picking one answer or the other rather than what I perceive myself to be. For example, my first design made the bottom section purple because I picked both sides on whether I ignore or follow the rules; for the new design I was more honest, since I really do follow the rules about 90% of the time.
• Sketch of the new BlazeFace design, its recreation in Adobe Illustrator, and a video of the second BlazeFace sketch in the coder.
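From the behaviour described above, liveliness acts like the blend factor of a per-frame linear interpolation toward the tracked face position: high values snap quickly (sporadic), low values ease in and out. A small sketch of that idea (the function names are my own; the actual template's internals may differ):

```javascript
// One smoothing step: move the drawn position a fraction (liveliness) of the
// way toward the tracked face position each frame.
function smoothStep(pos, target, liveliness) {
  return pos + (target - pos) * liveliness;
}

// Simulate a sudden head movement from x=0 to x=100 over a few frames.
function simulate(liveliness, frames) {
  let pos = 0;
  for (let i = 0; i < frames; i++) pos = smoothStep(pos, 100, liveliness);
  return pos;
}

console.log(simulate(0.75, 3)); // high liveliness: nearly there after 3 frames (98.4375)
console.log(simulate(0.1, 3));  // low liveliness: still easing in (about 27.1)
```

This matches what I saw: at 0.75 the pieces reach the face almost immediately, while a low value gives the smooth circular drift I liked.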

Assignment 2

This week we worked with lidar and photogrammetry to 3D scan objects and then edit them inside a 3D model editor like Cinema 4D or Meshmixer. Unfortunately, I forgot to save the scans that used different lighting, such as plain ring lights and RGB lights. When I did scan, you could see the different hues of light on the object as you looked at it from different angles. Additionally, you could see the hard shadows made on and around the object when using a strong light. My initial difficulty was deciding what data I wanted to use to represent myself and how to go from representational data to abstract. Many of the fun trinkets I would want to use are unfortunately back home in Windsor, and I did not want to try to teach my 56-year-old dad how to 3D scan objects. Therefore, the two ideas I came up with were either a theme of stuffed animals or the hand gestures that I use on a daily basis. I also wanted to explore how these elements could go together sculpturally and abstractly. Finally, the hardest part of this assignment was getting consistent scans of the objects and remembering the setting in Meshmixer that would pixelate (cube) the 3D model. For example, the pre-render showed the whole hand minus some fingers, while the render had just the thumb or would have inconsistent textures (it was much easier scanning stuffed animals).

• The 3D model of Baby Yoda (Grogu) in Meshmixer after using Scaniverse.
• Scaniverse scan of my emotional support doll, named Special Baby. It was difficult to properly scan all the features of the doll's face. A photo of the untextured model of the doll inside Meshmixer.
• The 3D model of a tiny stuffed dragon in Meshmixer after using Scaniverse. Thinking about cuteness aggression, I had Lucy squeeze the stuffed dragon; this is the 3D model of that in Meshmixer, which I tried to edit with the sculpting tools.
• The 3D model of a stuffed capybara in Meshmixer after using Scaniverse. The selection tool inside Meshmixer, used to select and delete excess material, and a photo preview of what reducing the number of polygons would look like on the capybara.
• Initial Scaniverse scan of a stuffed strawberry before editing, and the 3D model of the strawberry in Meshmixer afterward.
• Initial Scaniverse scan of a stuffed Dumbo before editing, and the 3D model of Dumbo in Meshmixer afterward.
• Notes on the kinds of hand gestures that I have used or use on a daily basis.
• Pre-rendered scan of the ASL sign for love in Scaniverse, plus rendered scans of the palm and the back of the hand making the sign.
• Pre-rendered and rendered scans of a hand making the thumbs-up sign in Scaniverse. Unfortunately, the app had difficulty rendering fingers.
• Rendered scans of the palm and the back of a hand making the Vulcan salute in Scaniverse, and a second pair of rendered scans of the same salute.
• The 3D model of a hand making the Vulcan salute in Meshmixer after using Scaniverse. I slightly edited the webbed fingers to make the silhouette more recognizable.

Project 3


Final Design

I explored the relationship between self-awareness and identity through the combination of, and connection between, traditional huadian makeup and drag.
  • I engaged with the lecture content by using the lecture theme of self-awareness and its link to identity for my data points. At first I had a difficult time deciding what kind of data points about myself I wanted to pursue, as I did not want the topic to be too serious, like sense of self and surveillance. I thought about showing the positive and negative thoughts I had, or pursuing patterns that confuse facial recognition software. After much more thought, and three due dates later, I settled on my identity and how in some ways I feel very detached from (afraid of, but not necessarily ashamed of) certain aspects of it. I wanted to explore aspects such as my heritage, my sexual identity, and how I feel I am not fun enough to be around. I am Chinese but in some ways don't feel Asian enough; I am part of the LGBTQ+ community but feel I don't express myself enough; I am funny but afraid of judgement. Therefore, with this mask I wanted to put these aspects of myself front and center to express, as well as celebrate, my identity. This video summarizes my work and goes beyond Assignments 1 and 2 by using a different kind of code, as well as more sentimental data points rather than surface-level things such as being a night owl or not. The code I used is by The Coding Train and tracks 468 different data points on the face. For this project, due to difficulties with my computer, I did not want to risk 3D modeling, so I only experimented with some ideas before deciding to do the digital mirror. Finally, I wanted to explore the relationship between traditional huadian makeup and drag performance to see how it applied to my self-identity, rather than link certain aspects of myself to material culture.
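Face-tracking code of the kind mentioned above reports the 468 landmarks as points with x and y coordinates, so placing a mask element comes down to deriving a transform from a pair of landmarks. A minimal sketch of that idea (the `maskTransform` helper, the landmark values, and the 100px reference eye span are my own assumptions, not code from the actual sketch):

```javascript
// Hypothetical placement helper: given two eye landmarks from a face mesh
// (each {x, y}), compute where and how to draw a mask element on the face.
function maskTransform(leftEye, rightEye) {
  const dx = rightEye.x - leftEye.x;
  const dy = rightEye.y - leftEye.y;
  return {
    cx: (leftEye.x + rightEye.x) / 2, // anchor point between the eyes
    cy: (leftEye.y + rightEye.y) / 2,
    angle: Math.atan2(dy, dx),        // head tilt, in radians
    scale: Math.hypot(dx, dy) / 100,  // size relative to a 100px reference span
  };
}

const t = maskTransform({ x: 200, y: 300 }, { x: 300, y: 300 });
console.log(t); // anchored at (250, 300), no tilt, scale 1
```

Because the transform is recomputed every frame, the huadian elements stay attached to the face as it moves, which is what makes the digital mirror feel like makeup rather than an overlay.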

Video of the mask demo (the embedded video did not load).
