A Video Based Human Detection and Activity Recognition – A Deep Learning Approach
Abstract
Human action detection and identification has a wide range of applications, such as video storage and retrieval, intelligent video surveillance, environmental home monitoring, intelligent human–machine interfaces, and identity recognition, and it touches many research topics in computer perception, including human detection in video, human pose estimation, human tracking, and the analysis and understanding of time-series data. A Human Activity Recognition System (HARS) attempts to classify activities from a series of observations of many subjects' actions under a diversity of environmental conditions. The purpose of this research work is first to investigate and compare the accuracy of various HARS on different human actions shown in videos, and then to offer a superior solution. In this paper, six categories of human activity (jogging, hand waving, walking, running, handclapping, and boxing) are recognized with a mean average precision (mAP) of 79.33% at the frame-based and 84.4% at the image-based measurement on the HAR datasets. Extensive experiments on the dataset show that the suggested approach outperforms the current state of the art in action recognition.
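The mAP figures quoted above average per-class average precision (AP) over the six activity categories. As a minimal sketch of how such a metric is computed, the following computes AP from ranked scores and then the mAP over the six classes named in the abstract; the scores and ground-truth labels here are illustrative placeholders, not the paper's data.

```python
# Hypothetical sketch of mean average precision (mAP) over the six
# activity classes. All scores/labels below are made-up toy values.

def average_precision(scores, labels):
    """AP = mean of the precision values at each true-positive rank."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, precisions = 0, []
    for rank, (_, is_positive) in enumerate(ranked, start=1):
        if is_positive:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

ACTIVITIES = ["jogging", "hand waving", "walking",
              "running", "handclapping", "boxing"]

# Toy per-class classifier scores and ground-truth flags (illustrative only).
predictions = {
    "jogging":      ([0.90, 0.80, 0.30], [1, 0, 1]),
    "hand waving":  ([0.70, 0.60, 0.20], [1, 1, 0]),
    "walking":      ([0.95, 0.50, 0.40], [1, 1, 0]),
    "running":      ([0.85, 0.60, 0.10], [1, 0, 0]),
    "handclapping": ([0.80, 0.70, 0.30], [0, 1, 1]),
    "boxing":       ([0.90, 0.40, 0.20], [1, 0, 1]),
}

aps = {name: average_precision(*predictions[name]) for name in ACTIVITIES}
mAP = sum(aps.values()) / len(aps)
```

Frame-based versus image-based measurement in the abstract would differ only in what each scored sample represents (a video frame versus a whole image), not in the AP arithmetic itself.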
Article Details
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license allows any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.