Designing the Absurd
Project 2: Chindogu
Why did you work on this project?
  • To help people communicate more efficiently.
  • Facial expressions carry a lot of information; people understand each other better when they can see them.
  • Expressions and small muscle movements signal a person's emotional state and situation.
  • Body language does the same. With a mask on, all of these emotional and situational cues are covered up, and we want to let people express those cues again.
  • The problem with speech is that only one person can talk at a time. When you talk to someone face to face, you can read their expression as you speak, so you get feedback without interrupting; without nonverbal cues that becomes very hard.
  • It's also why talking in person is easier than sending a Slack message: over text, you don't know how someone will react until you've sent the whole thing.
How did we build this?

parts

  • a screen
  • an IR sensor (receiver)
  • a 3D-printed part that attaches the screen to the mask and holds the other parts together
  • a 9-volt battery inside the mask
  • an Arduino that drives the screen

software

  • The sensor is an IR light receiver, and we use a library that decodes signals from standard remotes.
  • The library takes input from this kind of light sensor; when a standard remote sends it a signal, the library decodes that input into a number we can read.
  • We used an off-the-shelf remote rather than building our own. It sends a different signal for each button, so we pressed each button to see what code came out, then assigned those codes to different actions.
  • When the user presses a button, we read the decoded code, and each code maps to a different facial-expression image. We have a fixed set of expression images.
  • We also added a sticker layer on top of the remote so you can see which facial expression each button triggers.
  • One challenge was creating the images for all the expressions: there are many of them, and each one is a lot of pixels to draw. So we built a JavaScript-based pixel image editor that outputs directly in the format our code expects.
problems
  • Using a remote to change your expression can be distracting: you have to look down at the controller to figure out which button makes which face. If the buttons were easier to tell apart by feel, it would be much less disruptive.
  • People have no way to add their own expressions. Part of what makes an expression meaningful is that it's your own; here you can only use the expressions we chose.
  • It's heavy to wear.
  • The display is restricted by the screen's colors and pixel count, but even a larger screen wouldn't fully solve this: people might still be confused about how the image on the screen maps to an expression on your face. It will never be exactly one-to-one short of putting a camera inside the mask, and at that point you might as well wear a transparent mask. I actually like that the screen is low-resolution, because people don't fixate on details. If someone has, say, a big pimple on their lip, I can't help noticing it while they smile; the screen lets people convey the expression they want without that.
successes
  • It was pretty robust.
  • Most importantly, when we demoed it, people immediately understood what we were trying to do: what it's for and why someone would use it.
  • And after wearing masks for so long, it was nice for people to actually see how someone felt for once.