Kinect represents one of the first commercial forays into marker-less visual motion tracking, and its technology could become as ubiquitous as the mouse. Whereas a mouse senses motion in only two dimensions, Kinect combines a camera with an infra-red depth sensor to capture 2-dimensional pixel data along with depth information for each pixel, effectively yielding 3-dimensional co-ordinates for every pixel. Unlike the mouse, motion tracking is only one application of Kinect, and Kinect is not confined to a small interaction space. Depending on the sophistication of the algorithms that consume the sensor data, Kinect has the potential to turn anything it “sees” into an interaction device.
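To make the per-pixel depth idea concrete, here is a minimal sketch of how a depth image becomes 3-dimensional co-ordinates under the standard pinhole camera model. The focal lengths and principal point below are illustrative values in the rough ballpark of the Kinect v1 depth camera, not calibrated parameters; a real application would use intrinsics obtained from calibration.

```python
import numpy as np

# Illustrative (uncalibrated) intrinsics for a 640x480 depth camera,
# loosely modeled on Kinect v1 -- assumptions, not official values.
FX, FY = 594.2, 591.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point, taken as the image center

def depth_to_points(depth_m):
    """Back-project an HxW depth map (meters) to an HxWx3 array of
    (X, Y, Z) co-ordinates in the camera frame, one point per pixel."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX   # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * depth_m / FY
    return np.stack([x, y, depth_m], axis=-1)

# Example: a flat wall 2 m in front of the sensor.
wall = np.full((480, 640), 2.0)
pts = depth_to_points(wall)
# The pixel at the principal point projects straight ahead to (0, 0, 2).
```

This per-pixel 3D point cloud is the raw material that higher-level algorithms (skeleton tracking, gesture recognition, object segmentation) operate on.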
The acquisition of PrimeSense by Apple, the Kickstarter campaign for the Structure Sensor, Google’s Project Tango, Leap Motion, and Microsoft’s HoloLens represent continued interest in 3D sensing technologies. 2D interaction devices such as the mouse, trackpad, or joystick work well when the interface itself is 2D, but reach their limits in virtual reality (VR) and augmented reality (AR) applications. 3D sensing technologies, however, are not limited to 3D interfaces and interactions; they can also be used to interact with 2D interfaces. Sooner or later, your laptop’s webcam and your smartphone’s camera will be replaced with a Kinect-like 3D visual sensor, opening up a huge space for natural interactions afforded by the human body and the surrounding objects.
Anticipating this future, this project explores the use of Kinect, a 3D visual sensor, to turn the human body into a pointing device for a 2D interface.