A Combined Motion-Audio School Bullying Detection Algorithm

School bullying is a widespread social problem that harms children both mentally and physically, making its prevention a lasting concern worldwide. This paper proposes a method for detecting school bullying based on activity recognition and speech emotion recognition. Motion and voice data are gathered with movement sensors and a microphone, and a set of motion and audio features is extracted to distinguish bullying incidents from daily-life events. The motion features include both time-domain and frequency-domain features, while the audio features are classical MFCCs. Feature selection is performed with a wrapper approach. The selected motion and audio features are then merged into combined feature vectors for classification, and linear discriminant analysis (LDA) is used for further dimension reduction. A back-propagation neural network (BPNN) is trained to recognize bullying activities and distinguish them from normal daily-life activities. The authors also propose an action transition detection method that reduces computational complexity for practical use: the bullying detection algorithm runs only when an action transition has been detected. Simulation results show that the combined motion-audio feature vector outperforms separate motion features and acoustic features, achieving an accuracy of 82.4% and a precision of 92.2%. Moreover, with the action transition method, the computation cost can be reduced by half.
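To illustrate the gating idea behind the action transition detection, the sketch below runs a cheap variance check on windows of accelerometer magnitude and flags a "transition" whenever the statistic jumps between consecutive windows; only those windows would then be passed to the expensive classifier. The window length, threshold, and variance statistic are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

WINDOW = 50        # samples per analysis window (assumed)
THRESHOLD = 0.5    # variance-change threshold for a "transition" (assumed)

def window_variance(signal, window=WINDOW):
    """Variance of the acceleration magnitude in each non-overlapping window."""
    n = len(signal) // window
    trimmed = signal[: n * window].reshape(n, window)
    return trimmed.var(axis=1)

def transition_indices(signal, window=WINDOW, threshold=THRESHOLD):
    """Indices of windows whose variance jumps relative to the previous window."""
    v = window_variance(signal, window)
    jumps = np.abs(np.diff(v)) > threshold
    return np.flatnonzero(jumps) + 1   # window index where the change lands

# Synthetic accelerometer magnitude: quiet, then vigorous movement, then quiet.
rng = np.random.default_rng(0)
quiet = rng.normal(1.0, 0.05, 200)     # near-still (~1 g, low noise)
active = rng.normal(1.0, 1.0, 200)     # sudden vigorous movement
signal = np.concatenate([quiet, active, quiet])

events = transition_indices(signal)
print(events)   # windows at which the full classifier would be triggered
```

Because the variance check is far cheaper than feature extraction plus BPNN inference, skipping all non-transition windows is what yields the roughly twofold reduction in computation cost reported in the abstract.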

Ye Liang, Wang Peng, Wang Le, Ferdinando Hany, Seppänen Tapio, Alasaarela Esko

Publication type:
A1 Journal article – refereed

Place of publication:
International Journal of Pattern Recognition and Artificial Intelligence

Keywords:
Activity recognition, movement sensors, pattern recognition, school bullying, speech emotion recognition


Full citation:
Ye, L., Wang, P., Wang, L., Ferdinando, H., Seppänen, T., & Alasaarela, E. (2018). A Combined Motion-Audio School Bullying Detection Algorithm. International Journal of Pattern Recognition and Artificial Intelligence, 32(12), 1850046. https://doi.org/10.1142/s0218001418500465

