Beat Rush Hour App

app_1

We developed an Android App, “BeatRush”, as our solution to the problem of Rush Hour.

The BeatRush App System has the following components:

  1. Database
  2. A* Engine
  3. Android Application

The system uses the following input:

  1. Pittsburgh city OSM XML data
  2. User input – destination location, current location (obtained from the user’s phone GPS), times of arrival and departure, and time windows around these times.

app_2.png

Rush Hour – in depth

Algorithm

We use the A* algorithm, together with backtracking, to search for the best path from source to destination – the path that takes the minimum time.

  • Let f(x) be the time function that we want to minimize using A*.

f(x) = g(x) + h(x), where g(x) = distance(A, B) / TrafficSpeed(A, B) and the heuristic

h(x) = distance(B, GoalNode) / SpeedLimit

distance(A, B) = the great-circle distance between point A and the neighbor point B,

TrafficSpeed(A, B) = the speed of traffic between points A and B, obtained from the HERE Traffic API.

SpeedLimit is in km/h, and is obtained from the OSM database.

The great-circle distance is the shortest distance between two points on the surface of a sphere.

It is given by the following formula:

d = r · Δσ, with Δσ = 2 · arcsin( √( sin²(Δφ/2) + cos(φ1) · cos(φ2) · sin²(Δλ/2) ) )

where φ1, λ1 and φ2, λ2 are the latitude and longitude of points 1 and 2, Δφ and Δλ are their absolute differences, Δσ is the central angle between them, and r is the radius of the sphere.
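For illustration, here is a minimal Python sketch of this great-circle computation, written in the numerically friendlier haversine form; the function name and the Earth-radius constant are choices made for this example, not part of the app’s code.

    import math

    EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed constant for this sketch)

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (latitude, longitude) points, in km."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        d_phi = math.radians(lat2 - lat1)      # Δφ
        d_lambda = math.radians(lon2 - lon1)   # Δλ
        # Haversine form of the central angle Δσ
        a = math.sin(d_phi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2
        d_sigma = 2 * math.asin(math.sqrt(a))
        return EARTH_RADIUS_KM * d_sigma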

  • We start searching for the path at the start of the departure window. This is important because we need the approximate traffic data at that particular time, which we request from the API. We keep searching for a path that starts within the user’s departure window and ends within the user’s arrival window. We maintain a lookup table with traffic information for the whole day at one-minute intervals, which we use to get the traffic information at different points in time.
  • h(x) is admissible because the great-circle distance (analogous to the Euclidean distance) is never larger than the actual path distance, and the speed limit is never smaller than the actual speed of the vehicle.
  • As mentioned above, we sometimes ran into dead ends while searching for the best path. In such cases, we backtrack along the path searched so far until we can offer an alternate path to search on. Sometimes a dead end lies far along a way, and backtracking wastes a lot of time. Case in point: by mistake the search got onto a railway track, and the dead end was encountered a long way from the beginning of that track, where we could have taken an alternate route. Backtracking took a lot of time that we could have saved. In such cases, we tag the node at the beginning of the railway track; while searching for the path, we check for this tag and do not go there if it is present. A minimal sketch of this time-aware search appears below.
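To make the pieces above concrete, here is a minimal Python sketch of such a time-aware A* search. The data structures (the graph adjacency map, the per-minute traffic lookup, the blocked set used for the dead-end tag) are placeholders invented for illustration rather than the app’s actual code, and the backtracking described above corresponds here to the priority queue abandoning a dead-end branch and expanding the next-best node.

    import heapq
    import itertools

    def a_star(graph, start, goal, depart_minute, traffic, speed_limit, dist, blocked=frozenset()):
        """Time-aware A* search over a road graph (illustrative sketch).

        graph[node]        -> iterable of neighbor nodes
        traffic[(a, b)][t] -> traffic speed on edge (a, b) at minute t of the day (km/h)
        dist(a, b)         -> great-circle distance between nodes, in km
        speed_limit        -> maximum speed limit (km/h), used by the heuristic
        blocked            -> nodes tagged as dead ends (e.g. the railway-track case)
        Returns (path, travel_minutes), or (None, inf) if no path exists.
        """
        counter = itertools.count()  # tie-breaker so the heap never compares nodes
        frontier = [(0.0, next(counter), 0.0, start, [start])]
        best_g = {start: 0.0}
        while frontier:
            f, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nbr in graph[node]:
                if nbr in blocked:
                    continue  # dead-end tag: never enter this way at all
                minute = int(depart_minute + g) % 1440        # traffic lookup is per minute
                speed = traffic[(node, nbr)][minute]
                new_g = g + 60.0 * dist(node, nbr) / speed    # g(x): minutes spent so far
                h = 60.0 * dist(nbr, goal) / speed_limit      # h(x): admissible estimate
                if new_g < best_g.get(nbr, float("inf")):
                    best_g[nbr] = new_g
                    heapq.heappush(frontier, (new_g + h, next(counter), new_g, nbr, path + [nbr]))
        return None, float("inf")

The search can then be re-run for departure times across the user’s departure window, keeping the first result whose arrival falls inside the arrival window.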

Fighting Traffic

traffic.png

What if we could fight the terrible traffic we all have to encounter during rush hour? What if there were a better way to move and travel that could make our time more efficient?

Traffic is an issue that has been impacting society for a long time now. Rush hour is a part of the day during which traffic congestion on roads and crowding on public transport is at its highest. Rush hour is a phenomenon that usually happens twice a day – once in the morning and once in the evening, the times during which people commute the most. We propose a solution to the problem of Rush Hour by suggesting the most time-efficient route for the user to take between two locations.

Artificially intelligent agents in the direct-control class usually focus on estimating traffic in a particular road network to control traffic lights, analyzing the traffic demand for a particular route, and making qualitative predictions of route demand (rush hour) and of bottlenecks that could form. Agents in the indirect-control class, on the other hand, usually focus on climate predictions and on using the direct-control agents’ information to alert drivers.

Palmistry Ball

Palmistry Ball is a maze palm reader that lives in your hand through projection mapping. In every level you need to roll the ball onto the white finger. Only one finger is white each turn, while the others control your four main hand lines: “heart”, “head”, “life”, and “fate”. These lines act as walls in your quest to take the ball to the white finger. Succeed in getting it there and you proceed to the next level of your Palmistry reading. You need to be careful with your hand movements: if the ball falls off your hand, you will have to restart that level.

palmistryball

After the show I also made a second way of visualizing the game, removing the hand and showing only the white finger, the ball, and the lines. After play-testing both types of visualization, I learned that both are very interesting to play with and each creates a different experience.

clear_hand1

I was inspired to do this project by Golan, who initially pitched me the idea of making a game on the hand. After seeing how cool projection mapping on the hand looked, I designed three different games and chose the one best suited for the project. I learned a lot about mapping. I had always wanted to do something with projection mapping but never had the chance; this project was perfect for getting my hands into mapping, and I realized how fun it is and how many possibilities it has in entertainment.

Hand Projection Mapping

After finishing my initial capstone prototype I wasn’t too happy with it: even though it worked very nicely, it was too boring for my taste. So I decided to spin it off into something crazier.

A game that will live in your hand through projection mapping.


good_calibration_1


Calibrating the hand has two main stages: aggregating the right number of points, and then shifting the camera. To aggregate the points needed to successfully map your hand with the projector, we save the 3D point of the Leap Motion’s index finger together with the 2D point of where the mouse thinks that index finger is. In the image below you can see the calibration process.

calibration_process
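As one way to use those aggregated correspondences, here is a rough Python/NumPy sketch that fits a 3D-to-2D projection matrix by least squares (a direct linear transform), so that any Leap Motion point can afterwards be mapped into projector coordinates. The function names and the use of NumPy are assumptions made for this illustration; the actual calibration was done in openFrameworks.

    import numpy as np

    def fit_projection(points_3d, points_2d):
        """Fit a 3x4 matrix P so that [u, v, 1] ~ P @ [x, y, z, 1].

        points_3d: list of (x, y, z) Leap Motion fingertip positions
        points_2d: list of matching (u, v) projector/screen positions
        Needs at least 6 well-spread correspondences.
        """
        rows = []
        for (x, y, z), (u, v) in zip(points_3d, points_2d):
            rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
            rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
        A = np.asarray(rows, dtype=float)
        # The smallest right singular vector of A is the flattened projection matrix.
        _, _, vt = np.linalg.svd(A)
        return vt[-1].reshape(3, 4)

    def project(P, point_3d):
        """Map a 3D Leap Motion point into 2D screen coordinates."""
        u, v, w = P @ np.array([*point_3d, 1.0])
        return u / w, v / w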

This image shows what happens when you don’t aggregate the points correctly. The image will be offset from your hand.

bad_calibration


In contrast, these images show how it looks in OF when you have successfully calibrated the hand projection.

good_calibration_2 good_calibration_3

Women’s USOPEN – visualization

USOPEN Women’s Twitter Popularity – webGL/Three.js

vis_update3

vis_update2
Live App: http://womenstennis.fusion-sky.com/

I am a big tennis fan and decided to see whether the tweets a player gets during a game are reflected in her performance. In other words, I wanted to see if the fans were tweeting players’ hopes up (or down) and predicting the outcome. Many interesting patterns were found, which made me very happy! 🙂

I initially tried to use Tamboo but couldn’t, because the Twitter API only allows you to get tweets that are at most 30 days old. Given that the USOPEN had been a couple of months back, I wrote a parser for “Topsy”. After finishing my parser, I had to run it multiple times to get all (or almost all) the tweets a player got during the day of her game. It took a long time…

After aggregating the data (around ~15,000 tweets in total), I cleaned it up using some text analysis libraries. Once everything was done, or as done as it was going to get given the lack of time, I started visualizing it.
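The exact text-analysis step isn’t spelled out above, so here is a hedged Python sketch of how each tweet could be sorted into the positive / negative / neutral buckets used later in the visualization, with TextBlob standing in as one possible sentiment library (an assumption, not necessarily what was actually used).

    from textblob import TextBlob  # one possible sentiment library (assumption)

    def classify_tweets(tweets, threshold=0.1):
        """Count positive, negative and neutral tweets for one player."""
        counts = {"positive": 0, "negative": 0, "neutral": 0}
        for text in tweets:
            polarity = TextBlob(text).sentiment.polarity  # ranges from -1.0 to 1.0
            if polarity > threshold:
                counts["positive"] += 1
            elif polarity < -threshold:
                counts["negative"] += 1
            else:
                counts["neutral"] += 1
        return counts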

I decided to use webGL and Three.JS because I really wanted to learn it.

I made a bracket-style visualization over time, with the winners on the right in shades of green and the losers on the left in shades of red. The size of each circle represents the number of tweets that player got in that game. If you hover over a player you see more information: the orange circle represents the number of positive tweets and the purple the number of negative tweets; the rest are neutral. Apart from the extra information, all the games of that player light up, letting the user see that player’s run through the tournament.

You can still navigate the interactive visualization with the arrow keys and the mouse (zoom, etc.)

Capstone Proposal

Magic Mirror:

Try on an outfit without having to change your clothes, and share it with others for feedback!

My capstone proposal consists of creating a magic mirror that helps the user try on their favorite clothes in different colors in real time. This way, you can see which shirt or sweater looks better on you. The user will also be able to “grab” the image in the mirror and “put” it on their phone to share it with friends. The world is reactive if we let it be magical.

Mobile phones bring computing power to immobile objects. With this in mind, I decided to incorporate our phones into the experience to bring the mirror even more alive. I was inspired by the video below and really wanted to do something similar; this is where I got the idea of grabbing the image in the mirror and “putting” it on the phone to share with others:

Research: 

There are others also trying to bring mirrors to life. For example, Neiman Marcus’ Digital Mirror records what you are currently wearing so you can later compare it with a new outfit. Another example is Rebecca Minkoff’s first interactive store: users select clothes on a giant touchscreen mirror, the clothes are then taken to a fitting room, and the user gets a text message when the room is ready. Inside the fitting room the user again has access to the touchscreen mirror, where they can ask for other sizes and share their findings through social networks.

Even though these two ideas are very good, I wanted to create a more magical experience. Magic Mirror will use these two examples as case studies and expand on what works and what doesn’t. It will run in real time, and it will allow the user to share the result with others through the phone gesture.

The project will be broken into 4 main stages:

  • Stage 1:
    • Successfully change the color of clothing in real time using the Kinect (see the sketch after this list).
  • Stage 2:
    • Successfully “grab” the current image and send it to your phone to share it.
  • Stage 3:
    • Make the magic mirror a more interesting object by using a 3D mesh and projection mapping.
  • Stage 4:
    • Polish
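For Stage 1, here is a minimal sketch of the color-changing idea, assuming a clothing mask is already available (for example from the Kinect’s body segmentation plus some color thresholding). The mask source, function name, and use of OpenCV are assumptions made for this sketch, not part of the proposal itself.

    import cv2
    import numpy as np

    def recolor_clothing(frame_bgr, clothing_mask, hue_shift=30):
        """Rotate the hue of masked pixels (e.g. a shirt) while keeping texture.

        frame_bgr:     color frame from the camera
        clothing_mask: uint8 mask, 255 where the garment is (assumed given)
        hue_shift:     how far to rotate the hue channel (OpenCV hue is 0-179)
        """
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        h_shifted = ((h.astype(np.int16) + hue_shift) % 180).astype(np.uint8)
        recolored = cv2.cvtColor(cv2.merge([h_shifted, s, v]), cv2.COLOR_HSV2BGR)
        out = frame_bgr.copy()
        mask = clothing_mask.astype(bool)
        out[mask] = recolored[mask]   # only replace pixels inside the clothing mask
        return out

Because saturation and brightness are left untouched, the garment keeps its texture and shading while only its color changes, which is the effect Stage 1 is after.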


References:

https://www.youtube.com/watch?v=eYveEdhTgBs

http://www.digitalsignagetoday.com/videos/rebecca-minkoff-debuts-first-interactive-store/

http://www.engadget.com/2015/01/13/neiman-marcus-memory-mirror/