Phys.org May 28, 2019
For human activity recognition (HAR), one or a few ultrasonic sensors are typically used to receive signals, which requires extracting many feature quantities from the received data to improve recognition accuracy. An international team of researchers (China, Japan) has developed a device based on a two-dimensional acoustic array and convolutional neural networks that uses a single feature quantity to characterize the sound of human activities and identify them. They tested the approach with a two-dimensional acoustic array of 256 receivers and four ultrasonic transmitters, gathering data for four human activities—sitting, standing, walking and falling. The results show a total recognition accuracy of 97.5% for time-domain data and 100% for frequency-domain data. According to the team, acoustic systems make a better detection device than vision-based systems: cameras face limited acceptance because of privacy concerns, and low lighting or smoke can hamper vision-based recognition, whereas sound waves are unaffected by such environmental conditions…read more. Open Access TECHNICAL ARTICLE
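The summary does not give the network details, but the core idea—feeding a single feature quantity per receiver from a 2D acoustic array into a CNN for four-class activity recognition—can be illustrated with a minimal PyTorch sketch. The 16×16 receiver layout, layer sizes, and class order below are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (not the authors' code): treat the single feature quantity
# measured at each of the 256 receivers as a 16x16 "acoustic image" and
# classify it into the four activities with a small CNN.
# The 16x16 layout, layer sizes, and training setup are assumptions.
import torch
import torch.nn as nn

ACTIVITIES = ["sitting", "standing", "walking", "falling"]

class AcousticHARNet(nn.Module):
    def __init__(self, num_classes=len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x16x16 -> 16x16x16
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x8x8
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x4x4
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = AcousticHARNet()
    batch = torch.randn(8, 1, 16, 16)   # 8 synthetic "acoustic images"
    logits = model(batch)
    print(logits.shape)                 # torch.Size([8, 4])
```

In practice the same network could be trained on either time-domain or frequency-domain representations of the received signals, which is the comparison the reported 97.5% versus 100% accuracies refer to.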
Sound waves bypass visual limitations to recognize human activity