Sunday, October 2, 2011

Kinect Project Merges Real and Virtual Worlds

New software turns the Kinect into a cheap 3-D scanner—opening up applications ranging from crime fighting to interior design.



Microsoft's Kinect Xbox controller, which lets gamers control on-screen action with their body movements, has been adapted in hundreds of interesting, useful, and occasionally bizarre ways since its release in November 2010. It's been used for robotic vision and automated home lighting. It's helped wheelchair users with their shopping. Yet these uses could look like child's play compared to the new 3-D modeling capabilities Microsoft has developed for the Kinect.
KinectFusion, a research project that lets users generate high-quality 3-D models in real time using a standard $100 Kinect, was the star of the show at Microsoft Research's 20th anniversary event held this week at its European headquarters in Cambridge, U.K. KinectFusion also includes a physics engine that allows scanned objects to be manipulated realistically.
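The technical details are in the papers cited below, but the broad idea behind this kind of real-time scanning is volumetric depth fusion: every incoming depth frame nudges a voxel grid toward the observed surface, so sensor noise averages away over a few seconds of scanning. The minimal Python sketch below illustrates that general idea only; the grid size, truncation band, and function names are illustrative assumptions, not Microsoft's code.

    import numpy as np

    # Illustrative truncated signed-distance fusion (an assumption for
    # exposition, not Microsoft's implementation): each depth reading is
    # averaged into a voxel grid, so noise cancels out over many frames.
    GRID = 128      # voxels per side (hypothetical resolution)
    TRUNC = 0.06    # truncation band around the surface, in meters

    tsdf = np.ones((GRID, GRID, GRID), dtype=np.float32)      # signed distance
    weight = np.zeros((GRID, GRID, GRID), dtype=np.float32)   # observation count

    def fuse_depth(measured_depth, voxel_depth, ix, iy, iz):
        """Fold one depth reading into the voxel at (ix, iy, iz).

        measured_depth: distance the sensor reported along this pixel's ray.
        voxel_depth: distance from the sensor to this voxel's center.
        """
        sdf = measured_depth - voxel_depth   # positive = in front of surface
        if sdf < -TRUNC:
            return                           # voxel hidden behind the surface
        d = min(1.0, sdf / TRUNC)            # truncate and normalize
        w = weight[ix, iy, iz]
        tsdf[ix, iy, iz] = (tsdf[ix, iy, iz] * w + d) / (w + 1.0)  # running mean
        weight[ix, iy, iz] = w + 1.0

    # e.g. a voxel 2 cm in front of a surface seen at 1.5 m:
    fuse_depth(1.50, 1.48, 64, 64, 64)
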
The technology allows objects, people, and entire rooms to be scanned in 3-D at a fraction of the normal cost. Imagine true-to-life avatars and objects being imported into virtual environments. Or a crime scene that can be re-created within seconds. Visualizing a new sofa in your living room and other virtual interior design tricks could become remarkably simple.
"KinectFusion is a platform that allows us to rethink the ways that computers see the world," says project leader Shahram Izadi. "We have outlined some ways it could be used, but I expect there are a whole host of future applications waiting to be discovered."
3-D scanners already exist, but none of them approach KinectFusion in ease of use and speed, and even desktop versions cost around $3,000.
"In the same way that products like Microsoft Office democratized the creation of 2-D documents, with KinectFusion anyone can create 3-D content just by picking up a Kinect and scanning something in," says team member Steve Hodges.
The first public unveiling of KinectFusion at the SIGGRAPH conference in Vancouver in August triggered huge excitement. Details of how it works will be revealed in papers presented next month at the UIST Symposium in Santa Barbara, California, and ISMAR in Basel, Switzerland.
GALLERY:
KinectFusion generated this 3-D model of research team member David Kim of Newcastle University. The surface was rendered using a technique called Phong shading.
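Phong shading computes a per-pixel brightness from the surface normal, the light direction, and the view direction, as the sum of ambient, diffuse, and specular terms. A minimal Python sketch of the classic model (the coefficient values here are arbitrary examples, not the team's settings):

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def phong_intensity(normal, light_dir, view_dir,
                        ka=0.1, kd=0.7, ks=0.4, shininess=32.0):
        """Classic Phong reflection model: ambient + diffuse + specular.
        ka/kd/ks and shininess are arbitrary example coefficients."""
        n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
        diffuse = max(np.dot(n, l), 0.0)
        r = 2.0 * np.dot(n, l) * n - l        # light mirrored about the normal
        specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0.0 else 0.0
        return ka + kd * diffuse + ks * specular

    # e.g. light and viewer both facing the surface head-on:
    print(phong_intensity(np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0])))   # ~1.2 (fully lit)
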



A virtual metallic sphere that both creates shade and reflects light realistically has been added to a 3-D scene using data from Kinect's color camera.





Objects moving in the foreground can be segmented and reconstructed separately from the background.
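The papers describe how KinectFusion actually does this; as a simple illustration of the underlying idea, a moving foreground can be isolated by flagging pixels where the live depth frame disagrees with a previously reconstructed background (the 5 cm tolerance below is an arbitrary example value):

    import numpy as np

    def segment_foreground(live_depth, background_depth, tol=0.05):
        """Boolean mask of pixels whose live depth departs from the stable
        background model by more than `tol` meters. Inputs are HxW arrays
        of depths in meters, with 0 meaning no sensor reading."""
        valid = (live_depth > 0) & (background_depth > 0)
        return valid & (np.abs(live_depth - background_depth) > tol)
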





This image was generated after geometrically realistic surfaces were calculated from raw point cloud data.
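A common first step in turning raw point-cloud data into geometrically plausible surfaces is estimating a normal at every point, for instance by crossing difference vectors to neighboring points in the depth grid. A minimal sketch of that standard approach (not necessarily the team's exact method):

    import numpy as np

    def normals_from_point_grid(points):
        """Per-pixel surface normals for an HxWx3 grid of 3-D points,
        such as a depth map back-projected through the camera model.
        The cross product of the right- and down-neighbor differences
        is perpendicular to the local surface patch."""
        dx = points[:, 1:, :] - points[:, :-1, :]   # right-neighbor step
        dy = points[1:, :, :] - points[:-1, :, :]   # down-neighbor step
        n = np.cross(dx[:-1], dy[:, :-1])           # shapes align: (H-1, W-1, 3)
        length = np.linalg.norm(n, axis=2, keepdims=True)
        return n / np.maximum(length, 1e-9)         # unit normals
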





The original scene, shown at bottom left, was texture-mapped in 3-D to produce the image at top left. The lighting was then adjusted to produce the image at right. The Kinect packaging is visible in the scene.



