Assessing Activities of Daily Living from a Wearable RGB-D Camera for In-Home Health Care Applications
(EGOVISION4HEALTH)
Start date: Jul 1, 2013
End date: Jun 30, 2016
Status: Finished
This project aims to develop the researcher's career to the point where he is in a strong position to start his own research group and secure the funding to do so. This will be achieved by complementing his existing scientific knowledge with broad expertise in object detection, scene understanding and activity recognition; by training him in complementary skills such as proposal writing, communication and management; by allowing him to build collaborations; and by making him well known throughout his research field through publications and conference attendance.

The mobility objective is to transfer back to the EU the leading scientific expertise on object and activity detection in first-person views available at the Computational Vision Group at the University of California, Irvine, USA, and to build long-term collaborative links between this group and the return host, Universidad de Zaragoza, experts in visual map building for mobile robotics. After project completion, the fellow is expected not only to continue researching in this emerging field, but also to carry out the technology transfer needed to produce commercial devices.

This proposal investigates the combination of egocentric vision and visual map building to automatically provide health professionals (occupational, rehabilitation and geriatric therapists) with an assessment of their patients' ability to manipulate objects and perform activities of daily living (ADL). Research breakthroughs are required not only in vision-based ADL recognition and mapping, but also in exploiting the synergy of their combination. The research objectives are:
- to introduce the use of wearable RGB-D cameras and advance existing knowledge on object detection in first-person views,
- to achieve advanced scene understanding by building a long-term 3D map of the environment augmented with detected objects, and
- to analyse object manipulation and evaluate ADL using a detailed 3D hand model and the a priori scene knowledge.
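The abstract does not describe an implementation, but the three objectives suggest a pipeline in which per-frame detections from the wearable RGB-D camera are fused into a world-frame object map, which is then consulted when analysing hand-object interactions. The following minimal Python sketch is only an illustration of how those stages might connect; all class names, the averaging-based fusion, and the distance threshold are hypothetical simplifications, not the project's method.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Detection:
    """An object detected in a single egocentric RGB-D frame."""
    label: str
    position_cam: np.ndarray  # 3D position in the camera frame (metres)

@dataclass
class ObjectMap:
    """Long-term 3D map augmented with detected objects (world frame)."""
    objects: dict = field(default_factory=dict)  # label -> world-frame position

    def fuse(self, det: Detection, cam_pose: np.ndarray) -> None:
        """Transform a detection into the world frame using the camera pose
        (4x4 homogeneous matrix) and average it with earlier observations."""
        pos_world = (cam_pose @ np.append(det.position_cam, 1.0))[:3]
        if det.label in self.objects:
            self.objects[det.label] = 0.5 * (self.objects[det.label] + pos_world)
        else:
            self.objects[det.label] = pos_world

def manipulated_object(hand_pos_world: np.ndarray, obj_map: ObjectMap,
                       radius: float = 0.15):
    """Return the label of a mapped object within `radius` metres of the hand,
    a crude stand-in for full 3D hand-model-based manipulation analysis."""
    for label, pos in obj_map.objects.items():
        if np.linalg.norm(hand_pos_world - pos) < radius:
            return label
    return None

# Toy usage: identity camera pose, one detected cup, hand reaching towards it.
cam_pose = np.eye(4)
obj_map = ObjectMap()
obj_map.fuse(Detection("cup", np.array([0.30, 0.00, 0.60])), cam_pose)
print(manipulated_object(np.array([0.32, 0.01, 0.58]), obj_map))  # -> "cup"
```

In a real system the camera pose would come from the visual map-building component and the hand position from the detailed 3D hand model mentioned in the objectives; here both are supplied by hand purely to keep the sketch self-contained.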