It’s a 3-D world

Iago Fernández Pereira
In September 2010, Microsoft's low-cost 3D sensor, the Kinect, made its appearance on the market. Although it was born as a controller-free gaming and entertainment device for the Xbox console, its potential in fields such as health, education and marketing was soon noticed.
The 3D images generated by the Kinect make it easy to detect and track the user's body movements. This allows the user not only to control the console without a handheld controller, but also to view and manipulate medical images without physical contact, or to detect falls of elderly patients in a hospital. And that is not all: the advantages of this sensor also apply without analyzing the user's gestures, in projects such as obstacle detection for blind people or object scanning for 3D printing.
Building on the success of this device, new 3D sensors based on the same technology have emerged. They create three-dimensional point clouds of the scene and measure the distance between the camera and the objects in its field of view. Among the advantages of this technology are its robustness to lighting changes, operating even in complete darkness, and the ability to obtain actual physical measurements of scene objects without prior calibration.
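As a rough illustration of how such a sensor turns a depth image into physical measurements, the sketch below back-projects a depth frame to a 3D point cloud using the pinhole camera model. The intrinsic parameters (FX, FY, CX, CY) and the 640x480, millimetre-valued frame are illustrative assumptions for a Kinect-class camera, not values from any specific device or from our projects.

```python
import numpy as np

# Hypothetical intrinsics for a Kinect-class depth camera (illustrative values only).
FX, FY = 525.0, 525.0      # focal lengths in pixels
CX, CY = 319.5, 239.5      # principal point (image centre)

def depth_to_point_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project a depth image (in millimetres) to an N x 3 point cloud in metres
    using the pinhole camera model. Pixels with no depth reading (value 0) are dropped."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0          # mm -> m
    valid = z > 0
    x = (u[valid] - CX) * z[valid] / FX
    y = (v[valid] - CY) * z[valid] / FY
    return np.column_stack((x, y, z[valid]))

# Example: a synthetic 480x640 frame in which every pixel reads 2 m.
frame = np.full((480, 640), 2000, dtype=np.uint16)
cloud = depth_to_point_cloud(frame)
print(cloud.shape)                                    # (307200, 3)
print(np.linalg.norm(cloud, axis=1).mean())           # mean Euclidean distance to the camera
```

Because the depth values are metric, distances between objects or to the camera come directly from the point coordinates, with no prior calibration step by the user.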
These advantages have not gone unnoticed by major companies in the technology sector such as Google and Apple. Google's Project Tango aims to integrate these 3D sensors into mobile devices, and it is already providing a prototype tablet for developers. Apple, for its part, has bought the Israeli company PrimeSense, possibly to include this technology in its own devices in the future.
At Gradiant, we have several years of experience in processing 3D images obtained from low-cost depth sensors. We work on projects related to security and retail, health and even livestock. Our Automatic People Counter uses this new technology to achieve a counting accuracy of 95%, even in crowded scenes. In the HOLOS project, we use a depth sensor to monitor the agitation of bedridden patients non-invasively. In the TECOOPAGA project, we analyze the quality of beef by extracting features related to the animal's volume.

We will keep working to take advantage of this new technology, which will help us to better perceive the world around us: a world in three dimensions.
