After finishing my initial capstone prototype I wasn’t too happy with it: even though it worked well, it was too boring for my taste. So I decided to spin it off into something crazier.
A game that will live in your hand through projection mapping.
Calibrating the hand has two main stages: aggregating the right number of points, and then shifting the camera. To aggregate the points needed to successfully map your hand with the projector, we save the 3D point of the Leap Motion’s index finger together with the 2D point of where the mouse thinks that index finger is. In the image below you can see the calibration process.
This image shows what happens when you don’t aggregate the points correctly: the image will be offset from your hand.
In contrast, this image shows how it looks in openFrameworks when you have successfully calibrated the hand projection.
US Open Women’s Twitter Popularity – WebGL/Three.js
Live App: http://womenstennis.fusion-sky.com/
I am a big tennis fan and decided to see if the tweets a player gets during a match are reflected in her performance. In other words, I wanted to see if the fans were tweeting players’ hopes up (or down) and predicting the outcome. Many interesting patterns were found, which made me very happy! 🙂
I initially tried to use Tamboo but couldn’t, because the Twitter API only lets you get tweets up to 30 days old. Given that the US Open had been a couple of months back, I wrote a parser for “Topsy” instead. After finishing it, I had to run it multiple times to get all (or almost all) the tweets a player received during the day of her match. It took a long time…
After aggregating the data (around ~15,000 tweets in total), I cleaned it up using some text-analysis libraries. Once everything was done, or as done as it was going to get given the lack of time, I started visualizing it.
I decided to use WebGL and Three.js because I really wanted to learn them.
I made a bracket-style visualization with the winners on the right in shades of green, and the losers on the left in shades of red. The size of each circle represents the number of tweets that player got for that match. If you hover over a player you see more information: the orange circle represents the number of positive tweets, the purple one the number of negative tweets, and the rest are neutral. On top of the extra information, all of that player’s matches glow so the user can see how far she went.
You can still navigate the interactive visualization with the arrow keys and the mouse (zoom, etc.).
Try on your outfit without having to change your clothes, and share it with others for feedback!
My capstone proposal consists of creating a magic mirror that helps the user try on their favorite clothes in different colors in real time. This way, you can see which shirt or sweater looks better on you. The user will also be able to “grab” the image in the mirror and “put” it on their phone to share it with friends. The world is reactive if we let it be magical.
Mobile phones bring computing power to immobile objects. With this in mind, I decided to incorporate our phones into the experience to bring the mirror even more alive. I was inspired by the video below and really wanted to do something similar. From it I got the idea of grabbing the image in the mirror and “putting” it on the phone to share with others:
There are also some people trying to bring mirrors to life. For example, Neiman Marcus’ Digital Mirror records what you are currently wearing so you can later compare it with a new outfit. Another example is Rebecca Minkoff’s first interactive store: users select their clothes on a giant touchscreen mirror, the clothes are taken to a fitting room, and the user gets a text message when it is ready. Inside the fitting room the user again has access to the touchscreen mirror, where they can ask for other sizes, etc., and share their findings on social networks.
Even though these two ideas are very good, I wanted to bring a more magical experience to life. Magic Mirror will use these two examples as case studies and expand on what works and what doesn’t. It will be real-time, and it will let the user share with others through the phone gesture.
The project will be broken into 4 main stages:
- Stage 1:
- Successfully change the color of clothing in real time with the use of the Kinect.
- Stage 2:
- Successfully “grab” the current image and send it to your phone to share it.
- Stage 3:
- Make the magic mirror a more interesting object by using a 3D mesh and projection mapping.
- Stage 4:
“A color for the weather”
I wanted to get experience using Node.js, so I decided to build my bot with it. After looking at many bots out there and following a couple of tutorials on how to make one, I decided to create my own!
Every 30 minutes, Space Nietzsche Bot selects a city and grabs its current weather information. Once I have this information, I use the temperature, precipitation, and wind bearing to calculate a color representation. I create an image with that color and then post a new tweet.
Originally the tweet had a static format: “The current color weather of city <bla> is <img>”.
I realized this was very boring, so I added some adjective randomness to make the tweet more interesting! Now it is: “The <adjective and connector> color of city is <adjective and connector>”.
When adding the randomness, I also added a random variable that sometimes appends a random hashtag to the tweet.
The bot can be found here:
Cubepix Demo Test – by Xavi’s Lab
I am interested in doing something with projection mapping mixed with hardware like Arduino. I really liked this project because it is a very good combination of the two. I don’t like the fact that the installation is super loud. I know that is due to the motors, but if I end up using both projection mapping and Arduinos, I will try to minimize the number of motors to keep the volume as low as possible. Apart from that, I really like the effect the boxes create when they rotate! It looks very cool. I also liked how they added the Kinect as an interactive input; I would definitely add one too. I really think that in experiences like this the Kinect adds a very special touch.
FLOW 1 | KINECT PROJECTOR DANCE
This project is more focused on the Kinect and projection mapping. As mentioned earlier, I feel that when it comes to tracking, the Kinect’s depth camera makes it easier to create interesting things. I really like all the effects the dancers create; it looks like they are the ones disturbing the environment, and I really like how that looks. I also liked the minimal color palette. I felt the high contrast makes a stronger connection between the dancers and the disruption they are creating in the background and the mapping. I have seen many similar effects, so I would definitely try to innovate on the canvas, maybe by adding a real static object, or by combining it with LEDs or some type of sensor that would alter the canvas and make it less flat.
I have always liked image manipulation but had never had the chance to really experiment with it. I took this assignment as a chance to see what I could create by extracting information from an image. After looking at the openFrameworks addons, I decided to use ofxUI, ofxCv, ofxTriangleMesh, and ofxColorQuantizer.
I used ofxUI to give the user (me) control to manipulate the image. ofxCv provides all the main information about the image, mostly the contours. ofxColorQuantizer and ofxTriangleMesh are both used to add effects to the image.
So Creative Canvas breaks an image into its main color palette and finds the contours based on those colors. You can switch between contours by pressing ‘c’ to see what gets selected.
In the UI there are 5 main features that can be applied to the image: “draw triangle”, “draw mesh”, “draw outline”, “draw particles”, and “random”. These features decide how you draw over the current contour. If you select random, it randomly picks one of these effects.
You can loop through the image for as long as you want, and very cool effects come out (most of the time).
Many iterations were applied to the color selection and to how the mesh and particles should look and move. Also the colors: initially I was thinking of using a separate color palette for all the virtual objects (triangles, particles, etc.), but it looked so busy that I decided to keep the same palette as the image.
The code can be found here: https://github.com/mariale888/Creative_Canvas
I was looking at some very cool stuff done with Voronoi cells and decided to give them a try.
I wanted my object to be as close to the user as possible, so I decided to let the user create it by drawing a shape with the mouse. After the shape is drawn, a Voronoi cell pattern simulates the shape the user drew.
To be able to manipulate it, I added three main constraints: width, height, and number of subdivisions. The interface has a UI that lets the user easily change these parameters to construct their shape.
You can reset it at any point and redraw another line, or keep changing the parameters of the object. I called it Voro Snake because I realized that most of the shapes end up looking like a very abstract snake 🙂
The code can be found here: https://github.com/mariale888/AbstractParametric