Deep Learning Based Fall Detection Using Simplified Human Posture

Falls are one of the major causes of injury and death
among elderly people aged 65 and above. A support system to
identify such abnormal activities has become extremely
important as the population ages. Pose estimation is already a
challenging task, and it becomes even harder when it must be
performed on the unusual poses that can occur during a fall.
The location of the body provides a clue to where the person is
at the time of a fall. This paper presents a vision-based tracking
strategy in which the available joints are grouped into three
feature points according to the body section in which they lie.
The three feature points, each derived from a different combination
of joints, represent the upper (head) region, the mid (torso) region,
and the lower (leg) region. Tracking is difficult when motion is
involved, so the idea is to locate these regions of the body in every
frame and treat that as the tracking strategy. Grouping the joints in
this way helps to obtain a stable region to track. The locations of
the body parts provide crucial information for distinguishing normal
activities from falls.
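To make the grouping concrete, below is a minimal sketch, assuming a COCO-style 17-keypoint layout produced by an off-the-shelf 2D pose estimator. The index groupings, the confidence threshold, and the simple head-drop heuristic at the end are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# COCO-style keypoint indices (an assumption; the paper's grouping may differ).
HEAD_JOINTS = [0, 1, 2, 3, 4]      # nose, eyes, ears   -> upper / head region
TORSO_JOINTS = [5, 6, 11, 12]      # shoulders, hips    -> mid / torso region
LEG_JOINTS = [13, 14, 15, 16]      # knees, ankles      -> lower / leg region


def region_points(keypoints, conf_thresh=0.3):
    """Collapse per-joint estimates into three feature points (region centroids).

    keypoints: (17, 3) array of (x, y, confidence) from a 2D pose estimator.
    Returns a dict mapping region name -> (x, y) centroid, or None when no
    joint in that region was detected with sufficient confidence.
    """
    regions = {"head": HEAD_JOINTS, "torso": TORSO_JOINTS, "legs": LEG_JOINTS}
    points = {}
    for name, idx in regions.items():
        joints = keypoints[idx]
        visible = joints[joints[:, 2] > conf_thresh]
        points[name] = visible[:, :2].mean(axis=0) if len(visible) else None
    return points


def looks_like_fall(prev_points, curr_points, dt, drop_speed_thresh=200.0):
    """Illustrative heuristic only: flag a fall when the head-region centroid
    drops faster than a threshold (pixels per second; image y grows downward)."""
    if prev_points.get("head") is None or curr_points.get("head") is None:
        return False
    dy = curr_points["head"][1] - prev_points["head"][1]
    return (dy / dt) > drop_speed_thresh
```

In practice the three centroids vary far more smoothly between frames than individual joints do, which is what makes them a stable target to track; the fall rule above is only a placeholder for whatever classifier ultimately operates on the tracked region locations.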
