Autonomous Road Damage Detection Using Unmanned Aerial Vehicle Images and YOLOv8 Methods
Abstract
This research presents a novel automated road damage detection method that uses images from Unmanned Aerial Vehicles (UAVs) and deep learning algorithms. Road infrastructure maintenance is essential to a safe and durable transportation system, but collecting road damage data manually can be dangerous and labor-intensive. We therefore propose combining artificial intelligence (AI) and UAVs to substantially increase the efficiency and accuracy of road damage detection. For object detection and localization in UAV images, the proposed method evaluates three algorithms: YOLOv4, YOLOv5, and YOLOv7. A Spanish roadway dataset combined with the Chinese RDD2022 dataset was used to train and test these models. The test results demonstrate the effectiveness of the methodology: the YOLOv5 model achieves 59.9% average precision (mAP@.5), the YOLOv5 variant with a Transformer Prediction Head achieves 65.70% mAP@.5, and the YOLOv7 model achieves 73.20% mAP@.5. These findings demonstrate the potential of employing deep learning and UAVs for automatic road damage detection and open the door for further research in this area.
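The detection step of such a pipeline is straightforward to reproduce with off-the-shelf tooling. Below is a minimal sketch, not the authors' exact code, of running a YOLOv5 detector on a UAV frame through the official Ultralytics torch.hub interface; the weights file `road_damage_yolov5.pt` and the image path are hypothetical placeholders standing in for a model fine-tuned on RDD2022-style damage classes.

```python
# Minimal sketch: YOLOv5 inference on a UAV road image (assumed setup,
# not the authors' published pipeline).
import torch

# Load a YOLOv5 model from the official Ultralytics hub; 'custom' points
# at fine-tuned weights instead of the COCO-pretrained checkpoint.
model = torch.hub.load(
    "ultralytics/yolov5", "custom",
    path="road_damage_yolov5.pt",  # hypothetical fine-tuned weights
)
model.conf = 0.25  # confidence threshold for reported detections

# Run inference on a single UAV frame (path is illustrative).
results = model("uav_frames/road_001.jpg")

# Each detection row: x1, y1, x2, y2, confidence, class index.
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"damage class {int(cls)} at {box} (conf {conf:.2f})")
```

Reported scores such as mAP@.5 then follow from matching these predicted boxes to ground-truth annotations at an Intersection-over-Union threshold of 0.5 and averaging precision across damage classes.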
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
- The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
References
H. S. S. Blas, A. C. Balea, A. S. Mendes, L. A. Silva, and G. V. González, ‘‘A platform for swimming pool detection and legal verification using a multi-agent system and remote image sensing,’’ Int. J. Interact. Multimedia Artif. Intell., vol. 2023, pp. 1–13, Jan. 2023.
V. J. Hodge, R. Hawkins, and R. Alexander, ‘‘Deep reinforcement learning for drone navigation using sensor data,’’ Neural Comput. Appl., vol. 33, no. 6, pp. 2015–2033, Jun. 2020, doi: 10.1007/s00521-020-05097-x.
A. Safonova, Y. Hamad, A. Alekhina, and D. Kaplun, ‘‘Detection of Norway spruce trees (Picea abies) infested by bark beetle in UAV images using YOLOs architectures,’’ IEEE Access, vol. 10, pp. 10384–10392, 2022.
D. Gallacher, ‘‘Drones to manage the urban environment: Risks, rewards, alternatives,’’ J. Unmanned Vehicle Syst., vol. 4, no. 2, pp. 115–124, Jun. 2016.
L. A. Silva, A. S. Mendes, H. S. S. Blas, L. C. Bastos, A. L. Gonçalves, and A. F. de Moraes, ‘‘Active actions in the extraction of urban objects for information quality and knowledge recommendation with machine learning,’’ Sensors, vol. 23, no. 1, p. 138, Dec. 2022, doi: 10.3390/s23010138.
L. Melendy, S. C. Hagen, F. B. Sullivan, T. R. H. Pearson, S. M. Walker, P. Ellis, A. K. Sambodo, O. Roswintiarti, M. A. Hanson, A. W. Klassen, M. W. Palace, B. H. Braswell, and G. M. Delgado, ‘‘Automated method for measuring the extent of selective logging damage with airborne LiDAR data,’’ ISPRS J. Photogramm. Remote Sens., vol. 139, pp. 228–240, May 2018, doi: 10.1016/j.isprsjprs.2018.02.022.
L. A. Silva, H. S. S. Blas, D. P. García, A. S. Mendes, and G. V. González, ‘‘An architectural multi-agent system for a pavement monitoring system with pothole recognition in UAV images,’’ Sensors, vol. 20, no. 21, p. 6205, Oct. 2020, doi: 10.3390/s20216205.
M. Guerrieri and G. Parla, ‘‘Flexible and stone pavements distress detection and measurement by deep learning and low-cost detection devices,’’ Eng. Failure Anal., vol. 141, Nov. 2022, Art. no. 106714, doi: 10.1016/j.engfailanal.2022.106714.
D. Jeong, ‘‘Road damage detection using YOLO with smartphone images,’’ in Proc. IEEE Int. Conf. Big Data (Big Data), Dec. 2020, pp. 5559–5562, doi: 10.1109/BIGDATA50022.2020.9377847.
M. Izadi, A. Mohammadzadeh, and A. Haghighattalab, ‘‘A new neuro-fuzzy approach for post-earthquake road damage assessment using GA and SVM classification from QuickBird satellite images,’’ J. Indian Soc. Remote Sens., vol. 45, no. 6, pp. 965–977, Mar. 2017.
Y. Bhatia, R. Rai, V. Gupta, N. Aggarwal, and A. Akula, ‘‘Convolutional neural networks based potholes detection using thermal imaging,’’ J. King Saud Univ.-Comput. Inf. Sci., vol. 34, no. 3, pp. 578–588, Mar. 2022, doi: 10.1016/j.jksuci.2019.02.004.
J. Guan, X. Yang, L. Ding, X. Cheng, V. C. Lee, and C. Jin, ‘‘Automated pixel-level pavement distress detection based on stereo vision and deep learning,’’ Automat. Constr., vol. 129, p. 103788, Sep. 2021, doi: 10.1016/j.autcon.2021.103788.
D. Arya, H. Maeda, S. K. Ghosh, D. Toshniwal, and Y. Sekimoto, ‘‘RDD2022: A multi-national image dataset for automatic road damage detection,’’ 2022, arXiv:2209.08538.
J. Redmon and A. Farhadi, ‘‘YOLO9000: Better, faster, stronger,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, 2017, pp. 6517–6525, doi: 10.1109/CVPR.2017.690.
J. Redmon and A. Farhadi, ‘‘YOLOv3: An incremental improvement,’’ 2018, arXiv:1804.02767. [Online]. Available: https://pjreddie.com/yolo/