A system developed by digital imaging student Samuel Cox uses Microsoft's Kinect motion sensor to let users choose a gigapixel image projected onto a large screen, then scroll, zoom and navigate through it using only hand gestures. A selected area of the photograph can even be printed afterwards.
Samuel Cox, a master's student in digital imaging at Lincoln University, said that all he wanted to do was get people immersed in images. When we go to a photo gallery, he said, we get a passive experience, as we never really interact with our surroundings. While the original concept is intended as an art installation, the technology could prove highly beneficial for navigating detailed astronomical images and microscope slides from biomedical research.
Current techniques for creating gigapixel images, those with one billion (10⁹) pixels or more, generally demand stitching a mosaic from a large number of high-resolution digital photographs. Cox used a 16-megapixel DSLR with a long zoom lens mounted on a robotic tripod rig that accepts grid-coordinate inputs. He shot about 250-350 images per scene, overlapping each by roughly 30 per cent, a process that takes nearly 45 minutes. He then merged the images in post-production software into one large image.
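To get a feel for the arithmetic behind such a mosaic, here is a minimal back-of-envelope sketch (not Cox's actual workflow) that estimates the capture grid for a one-gigapixel target. The 4928 × 3264 sensor size is an assumed resolution for a 16-megapixel DSLR, and the function name is illustrative; the result is a rough lower bound, since a real shoot like Cox's 250-350 frames also depends on the lens, scene coverage and safety margins.

```python
import math

def mosaic_grid(target_pixels, frame_w, frame_h, overlap):
    """Estimate (cols, rows, total_shots) to tile a gigapixel mosaic.

    With a fractional overlap between neighbouring frames, each new
    frame only contributes a (1 - overlap) strip of fresh pixels
    along each axis.
    """
    eff_w = frame_w * (1 - overlap)   # fresh pixels gained per column step
    eff_h = frame_h * (1 - overlap)   # fresh pixels gained per row step
    # Assume the final mosaic keeps the single frame's aspect ratio.
    aspect = frame_w / frame_h
    mosaic_h = math.sqrt(target_pixels / aspect)
    mosaic_w = mosaic_h * aspect
    cols = math.ceil(mosaic_w / eff_w)
    rows = math.ceil(mosaic_h / eff_h)
    return cols, rows, cols * rows

# Assumed 16 MP sensor (4928 x 3264), 30% overlap, 1-gigapixel target:
cols, rows, shots = mosaic_grid(1_000_000_000, 4928, 3264, 0.30)
print(cols, rows, shots)  # -> 12 12 144
```

Even this idealised grid needs well over a hundred frames, which makes the 45-minute capture time per scene easy to believe.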
Although many such images are available online in panoramic form, they offer only a basic interface navigated with the mouse, which Cox says does not reveal the full information within the image. He suggested that the motion-capture abilities of Microsoft's Kinect, introduced to give people a better gaming experience, be put to pragmatic, real-world use.
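The article does not describe Cox's code, but the gesture-driven navigation it describes could be sketched as follows. This is a hypothetical mapping from tracked hand positions to pan and zoom commands; a real build would read skeleton data from a Kinect SDK, whereas here the hand positions are plain normalized tuples and all names are illustrative.

```python
import math

def gesture_to_view(hand_l, hand_r, prev_spread, deadzone=0.05):
    """Map two tracked hands to viewer commands (assumed scheme).

    hand_l, hand_r: (x, y) positions normalized to [-1, 1].
    Pan follows the right hand (with a deadzone to ignore jitter);
    zoom follows the change in distance ("spread") between the hands.
    """
    pan_x = hand_r[0] if abs(hand_r[0]) > deadzone else 0.0
    pan_y = hand_r[1] if abs(hand_r[1]) > deadzone else 0.0
    spread = math.dist(hand_l, hand_r)
    # Spreading the hands apart zooms in; bringing them together zooms out.
    zoom = spread / prev_spread if prev_spread > 0 else 1.0
    return (pan_x, pan_y), zoom, spread

# Hands moving apart from a previous spread of 0.8 zooms in by 1.25x:
pan, zoom, spread = gesture_to_view((-0.5, 0.0), (0.5, 0.0), prev_spread=0.8)
```

Calling this once per tracking frame, with the returned spread fed back in as `prev_spread`, yields the continuous pan-and-pinch navigation the installation demonstrates.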
Cox notes that gigapixel imaging is still out of reach for hobbyists, but many cameras now have panorama features that merge several images together, offering a taste of this navigational approach. One day gigapixel images will be entirely normal, he says; people will choose cameras rated in gigapixels, not megapixels. We are waiting for that day.