Improved Competitive Neural Network for Classification of Human Postures Based on Data from RGB-D Sensors
Authors
Abstract
The cognitive goal of this paper is to assess whether marker-less motion capture systems provide sufficient data to recognize human postures in the side view. The research goal is to develop a new posture classification method that allows for analysing human activities using data recorded by RGB-D sensors. The method is insensitive to the duration of the recorded activity and gives satisfactory results for the sagittal plane. An improved competitive Neural Network (cNN) was used. The method of preprocessing the data is discussed first. Then, a method for classifying human postures is presented. Finally, the classification quality obtained with various distance metrics is assessed.
Data sets covering a selection of human activities were created. Postures typical of these activities were identified using the classifying neural network. The classification quality obtained with the proposed cNN was compared with that of two other popular neural networks. The results confirmed the advantage of the cNN. The developed method makes it possible to recognize human postures by observing movement in the sagittal plane.
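The abstract does not detail the cNN training rule itself, but the core idea of a competitive network with a configurable distance metric can be illustrated with a minimal, generic winner-take-all sketch. The function names and the deterministic prototype initialization below are illustrative assumptions, not the paper's implementation; Euclidean distance stands in for one of the metrics whose classification quality the paper compares.

```python
import math


def euclidean(a, b):
    """Euclidean distance between two feature vectors (posture descriptors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def train_competitive(samples, n_units, epochs=50, lr=0.1):
    """Winner-take-all competitive learning: for each sample, the nearest
    prototype (the "winner") is pulled toward it, so prototypes converge
    to cluster centres that act as posture classes."""
    # Seed prototypes with the first n_units samples (illustrative choice).
    protos = [list(samples[i]) for i in range(n_units)]
    for _ in range(epochs):
        for s in samples:
            w = min(range(n_units), key=lambda i: euclidean(protos[i], s))
            protos[w] = [p + lr * (x - p) for p, x in zip(protos[w], s)]
    return protos


def classify(protos, sample):
    """Assign a sample to the index of its nearest prototype."""
    return min(range(len(protos)), key=lambda i: euclidean(protos[i], sample))
```

Swapping `euclidean` for another metric (e.g. Manhattan or cosine distance) changes both the winner selection and the resulting class boundaries, which is the kind of comparison the classification-quality assessment refers to.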