Foot Recognition Using Deep Learning for Knee Rehabilitation

Foot recognition has applications in several medical areas, such as gait pattern analysis and monitoring the knee exercises of patients undergoing rehabilitation. A camera-based foot recognition system typically captures patient images in a controlled room with a fixed background and recognizes the foot from a limited set of views, which makes it inconvenient for monitoring knee exercises at home. To overcome these limitations, this paper proposes a deep learning approach based on Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with traditional classification methods that use LBP and HOG features with kNN and SVM classifiers. On foot images drawn from online databases, the deep learning method achieves higher accuracy than the traditional classification methods, at the cost of greater computational complexity.
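The sketch below (not the authors' code) illustrates the two approaches being compared: hand-crafted LBP and HOG features fed to kNN and SVM classifiers versus a small CNN trained end to end. Image size, network depth, classifier settings, and all function names other than the library calls are illustrative assumptions; the paper's actual architecture and hyperparameters may differ.

```python
# Minimal sketch of the two pipelines compared in the paper, under assumed
# settings: grayscale foot images of size IMG_SIZE x IMG_SIZE and two classes.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from tensorflow.keras import layers, models

IMG_SIZE = 64          # assumed input size for grayscale foot images
N_CLASSES = 2          # e.g. "foot" vs "non-foot" (assumption)

def lbp_hog_features(image):
    """Concatenate a uniform-LBP histogram with a HOG descriptor."""
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(image, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

def traditional_classifiers(train_imgs, train_labels, test_imgs):
    """Baseline: LBP+HOG features with kNN and SVM classifiers."""
    X_train = np.array([lbp_hog_features(im) for im in train_imgs])
    X_test = np.array([lbp_hog_features(im) for im in test_imgs])
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)
    svm = SVC(kernel="rbf").fit(X_train, train_labels)
    return knn.predict(X_test), svm.predict(X_test)

def build_cnn():
    """A small CNN classifier; layer sizes are placeholders."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The feature-based pipeline classifies fixed-length descriptors, while the CNN learns its features directly from pixels, which is the source of the accuracy-versus-complexity trade-off noted in the results.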



