Rush Hour – in depth

Algorithm

We use the A* algorithm along with backtracking to search for the best path from source to destination, where "best" means the path that takes the minimum time.

  • Let f(x) be the time function that we want to minimize using A*.

f(x) = g(x) + h(x), where g(x) = distance(A, B) / TrafficSpeed(A, B) and the heuristic

h(x) = distance(B, GoalNode) / SpeedLimit

distance(A,B) = the great circle distance between point A and the neighbor point B,

TrafficSpeed(A, B) = the speed of traffic between points A and B, obtained from the HERE Traffic API.

SpeedLimit is in km/h, and is obtained from the OSM database.

The great circle distance is the shortest distance between two points on the surface of a sphere.

It is given by the following formula:

Δσ = arccos( sin Φ1 · sin Φ2 + cos Φ1 · cos Φ2 · cos Δλ )

distance(A, B) = r · Δσ

where Φ1, λ1 and Φ2, λ2 are the latitude and longitude of points 1 and 2, ΔΦ and Δλ are their absolute differences, Δσ is the central angle between them, and r is the radius of the sphere.
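To make the cost function concrete, here is a minimal Python sketch. The function names, the (lat, lon) tuple convention, and the 6371 km Earth radius are our own illustrative choices; the traffic speed and speed limit are assumed to arrive from the HERE Traffic API and OSM lookups described above.

    import math

    EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an illustrative choice

    def great_circle_km(lat1, lon1, lat2, lon2):
        # Spherical law of cosines, matching the symbols above:
        # phi = latitude, lambda = longitude, delta-sigma = central angle.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(abs(lon2 - lon1))
        cos_angle = (math.sin(p1) * math.sin(p2) +
                     math.cos(p1) * math.cos(p2) * math.cos(dlon))
        central_angle = math.acos(max(-1.0, min(1.0, cos_angle)))  # clamp float error
        return EARTH_RADIUS_KM * central_angle

    def f_cost(a, b, goal, traffic_speed_kmh, speed_limit_kmh):
        # f(x) = g(x) + h(x), both in hours. The two speeds are assumed to
        # come from the HERE Traffic API and the OSM database respectively.
        g = great_circle_km(*a, *b) / traffic_speed_kmh    # edge cost A -> B
        h = great_circle_km(*b, *goal) / speed_limit_kmh   # optimistic remainder
        return g + h

Both terms are in hours here; any time unit works as long as g(x) and h(x) agree.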

  • We start searching for the path at the start of the departure window. This is important because we need the approximate traffic data at that particular time, which we request from the API. We keep searching for a path that starts in the user’s departure window and ends within the user’s arrival window. We maintain a lookup table with traffic information for the whole day, at one-minute intervals, which we use to get the traffic information at different points in time.
  • h(x) is admissible because the great circle distance (analogous to Euclidean distance) is always at most the actual path distance, and the speed limit is always at least the actual speed of the vehicle, so h(x) never overestimates the remaining travel time.
  • As mentioned above, we sometimes ran into dead ends while searching for the best path. In such cases, we backtrack along the path searched so far until we can branch onto an alternate path. Sometimes a dead end lies far along a way, and backtracking wastes a lot of time. Case in point: by mistake the search got onto a railway track, and the dead end was encountered a long way from the beginning of the track, where we could have taken an alternate route. Backtracking took a lot of time, time we could have saved. In such cases, we tag the beginning node of the railway track; while searching for the path, we check for this tag and avoid the node if the tag is present (see the sketch after this list).
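Here is the sketch of the search loop referenced above: a standard A* priority queue adapted to this setup, with edge costs read from a per-minute traffic lookup table and tagged dead-end nodes skipped. All of the data structures (graph, traffic_table, dead_end_tags) are hypothetical stand-ins for the project's actual schema.

    import heapq

    def route(graph, start, goal, depart_minute, traffic_table, dead_end_tags, heuristic):
        # graph[node]           -> list of (neighbor, distance_km) edges
        # traffic_table[(a, b)] -> list of traffic speeds (km/h), one per minute of the day
        # dead_end_tags         -> set of nodes tagged as leading only to dead ends
        # heuristic(node)       -> optimistic remaining travel time in minutes
        frontier = [(heuristic(start), 0.0, start, [start])]  # (f, g, node, path)
        best_g = {start: 0.0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, depart_minute + g  # path and arrival time (minute of day)
            for nbr, dist_km in graph[node]:
                if nbr in dead_end_tags:
                    continue  # never commit to a known dead-end way
                minute = int(depart_minute + g)  # traffic at the time we would be there
                speed_kmh = traffic_table[(node, nbr)][minute]
                g2 = g + 60.0 * dist_km / speed_kmh  # edge travel time in minutes
                if g2 < best_g.get(nbr, float("inf")):  # stale queue entries tolerated
                    best_g[nbr] = g2
                    heapq.heappush(frontier, (g2 + heuristic(nbr), g2, nbr, path + [nbr]))
        return None, None  # no route found within the search space

With an explicit frontier, the backtracking described above falls out naturally: when a branch dead-ends, the search resumes from the next-best node still on the queue, and the tag check keeps it from committing to a known dead-end way (like the railway track) in the first place.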

Fighting Traffic


What if we could fight the terrible traffic we all have to encounter during rush hour? What if there was a better way to move and travel that can make our time more efficient?

Traffic is an issue that has affected society for a long time. Rush hour is the part of the day during which traffic congestion on roads and crowding on public transport are at their highest. It usually happens twice a day – once in the morning and once in the evening, the times when people commute the most. We propose a solution to the problem of rush hour by suggesting the most time-efficient route for the user to take between two locations.

Artificially intelligent agents in the direct-control class usually focus on estimating traffic in a particular road network to control traffic lights, analyzing the traffic demand for a particular route, and making qualitative predictions of route demand (rush hour) and of bottlenecks that could form. Agents in the indirect-control class, on the other hand, usually focus on climate predictions and on using the direct-control agents’ information to alert drivers.

Final Project Research

Cubepix Demo Test – by Xavi’s Lab

I am interested in doing something with projection mapping mixed with hardware like Arduino. I really liked this project because it is a very good combination of hardware and projection mapping. I don’t like the fact that the installation is super loud. I know that is because of the motors, but if I end up using both projection mapping and Arduinos, I would try to minimize the number of motors to keep the volume as low as possible. Apart from that, I really like the effect the boxes create when they rotate! It looks very cool. I also liked how they added the Kinect as an interactive input. I would definitely add one too. I really think that for experiences like this, the Kinect adds a very special touch.

FLOW 1 | KINECT PROJECTOR DANCE

This project is more focused on the Kinect and projection mapping. As mentioned earlier, I feel that when it comes to tracking, the Kinect makes it easier to create interesting things with its depth camera. I really like all the effects the dancers create. It looks like they are the ones disturbing the environment, and I really like how that looks. I also liked the minimal color palette. I felt the high contrast makes a stronger connection between the dancers and the disruption they are creating in the background and the mapping. I have seen many effects similar to this one, so I would definitely try to innovate on the canvas, maybe by adding a real static object that adds to the canvas, or combining it with LEDs or some type of sensor that would alter the canvas, making it less flat.

Motion Capture & VR

I was doing some research for my ETC semester project, which involves motion capture, when I came across the two projects I am going to discuss.

Map Visibility Estimation for Large-Scale Dynamic 3D Reconstruction:

This project tracks movement and then dynamically generates the movement paths of the tracked objects. Markers are attached to the objects to be tracked so the motion capture cameras can see them, and human joints are tracked automatically, similar to how the Microsoft Kinect does it. It is a research project here at CMU whose main focus is creating more accurate motion detection through optimal camera selection; in other words, selecting the right cameras for each point (in a very small nutshell). It was done by Hanbyul Joo, Hyun Soo Park, and Yaser Sheikh. I found this project very inspiring because all the raw movement data creates beautiful color patterns and shapes, and because it solves almost all the issues the Kinect encounters when tracking humans. I feel that a very cool installation could be created with this type of technology because the entire human body is being tracked in a 3D cube/space. This would allow for a completely immersive tracking experience!

After finding this project, I decided to see what else like it is out there, and I found this:

NuFormer – Virtual Reality-Video Projection:

This project tries to combine virtual reality with motion capture to fully engage the user in the experience. Apart from this, it generates a projection of the user in the virtual space to show the audience what he/she is seeing and experiencing. These types of experiences are being explored to see how far we can push VR and fool our brains. This one was made by NuFormer. I really liked the concept of combining virtual reality with motion capture; this is probably what we are going to end up doing in my ETC project, but I feel this project was just a proof of concept. I didn’t find the experience that engaging; yes, the art is nice, but with so much power, something more creative would have been better. Something that would really keep the user on the edge!