One of the more instinctive ways of judging whether something is ‘real’ is to see whether, and how, it interacts with our reality. By this heuristic, we can tell that cartoon characters, for example, are not real: they neither affect nor are affected by our everyday physical surroundings. That remains fundamentally true, even though mediated reality and the plethora of digital environments we now live in have shifted our perceptions of physical reality considerably. But a new research project from MIT aims to challenge those boundaries yet again: with an imaging technique dubbed ‘Interactive Dynamic Video’ (IDV), we will be able to “reach in and touch videos,” letting virtual and real objects interact with each other.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have found a way to create lifelike interactive environments without resorting to the 3-D modelling commonly used to make background footage interact with virtual characters. The IDV, which the school announced just yesterday, lets users virtually interact with objects in a video and see how they respond, with an algorithm simulating the expected reaction.
This is how it works. First, a conventional camera captures video of how physical objects in a scene actually move. The IDV then analyzes the clip and studies the objects’ tiny, nearly invisible vibrations. Even five seconds of footage is enough for the IDV to detect distinct ‘vibration modes’ at different frequencies, each representing one way an object can move. The end result is a dynamics model that can be used to animate an image and make objects in it ‘react’ to new external forces in a way that looks natural. This way you could play Pokémon GO and see Pikachu actually interact with your environment, jumping out of bushes and virtually moving things around. What makes the IDV quite innovative is that, unlike other algorithms that track and magnify motion in a video, it can also simulate how objects would move in response to forces that never appear in the footage.
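The pipeline described above can be sketched as a toy modal simulation: treat each recovered vibration mode as an independent damped harmonic oscillator, drive the modes with a new force, and superpose them into a per-pixel displacement field that warps the image. Everything here (the function name, the impulse-style forcing, the damping constant, the random mode shapes) is a hypothetical illustration of the general idea, not CSAIL's actual code:

```python
import numpy as np

def simulate_response(mode_shapes, freqs_hz, force, dt=1/30, steps=90, damping=0.05):
    """Toy modal simulation. Each vibration mode recovered from video is
    treated as a damped harmonic oscillator, driven by the projection of
    a user-applied force onto that mode's shape.

    mode_shapes: (M, H, W, 2) per-pixel 2-D displacement field per mode
    freqs_hz:    (M,) natural frequency of each mode (from the video)
    force:       (H, W, 2) external force field applied as an impulse at t=0
    Returns a list of (H, W, 2) displacement fields, one per frame.
    """
    omega = 2 * np.pi * np.asarray(freqs_hz, dtype=float)  # angular frequencies
    # Project the force onto each mode to get per-mode forcing amplitudes.
    f_modal = np.array([np.sum(shape * force) for shape in mode_shapes])
    q = np.zeros(len(omega))   # modal coordinate: how excited each mode is
    v = f_modal.copy()         # the impulse sets the initial modal velocity
    frames = []
    for _ in range(steps):
        # Semi-implicit Euler step of q'' = -omega^2 q - 2*damping*omega*q'
        v += dt * (-omega**2 * q - 2 * damping * omega * v)
        q += dt * v
        # Superpose modes into one displacement field for this frame.
        frames.append(np.tensordot(q, mode_shapes, axes=1))  # (H, W, 2)
    return frames

# Tiny example: two modes on a 4x4 pixel grid, "poked" at one pixel.
rng = np.random.default_rng(0)
shapes = rng.standard_normal((2, 4, 4, 2)) * 0.01
poke = np.zeros((4, 4, 2)); poke[2, 2, 0] = 1.0
frames = simulate_response(shapes, freqs_hz=[2.0, 5.0], force=poke)
```

Because the modes are damped, the motion rings and then dies away, which is what makes the simulated reaction look like a natural physical response rather than a looped animation.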
“If you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones,” says Abe Davis, the CSAIL PhD student behind the IDV, who will soon publish the project as part of his doctoral dissertation.
Davis says there are many possible applications for the technique, the most obvious in the entertainment industry, but also in engineering and architecture. For example, the IDV and its further refinement could reduce the cost of producing interactive virtual reality experiences and visual effects in film. It could also find use in structural health monitoring, letting engineers simulate how a building or a bridge would respond to wind, earthquakes and other sources of stress. Ultimately, the IDV could lead to new developments in any field where computer-generated graphics and live-action footage mix. One thing is certain: this technology represents an important and necessary step in taking virtual reality forward.