Efficient Video Super-Resolution Using Deep Convolutional Autoencoders

Rohita H. Jagdale, et al.

Abstract

Video super-resolution is a prominent area of research in computer vision. A video super-resolution technique is required to reconstruct high-resolution video from noisy, blurry, low-resolution input. Super-resolution is used in many applications, such as biomedical image processing, computer vision, and satellite image processing. This paper proposes a deep convolutional autoencoder-based video super-resolution model, which is trained on high-resolution video frames. An autoencoder is an unsupervised neural network that learns a compressed representation of its input and reconstructs the data with as little loss as possible. At test time, low-resolution frames are extracted from the low-resolution video and passed to the proposed architecture, which is modeled as a convolutional autoencoder. The encoder extracts salient features of the frames using multiple convolutional layers with different filters. The decoder reconstructs high-resolution frames by minimizing an L1-regularized loss function via backpropagation, with weight matrices updated by the Adam optimizer. The proposed model's efficiency is evaluated using the PSNR, SSIM, BRISQUE, VIFp, and UQI metrics and compared against state-of-the-art techniques. The proposed autoencoder model shows excellent performance.
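The abstract describes an encoder that extracts features with convolutional layers and a decoder that reconstructs high-resolution frames under an L1-regularized loss. The paper does not specify a framework or layer configuration, so the following is only a minimal single-channel NumPy sketch of that forward pass and loss: the kernel shapes, the 2x nearest-neighbour upsampling in the decoder, and the regularization weight `lam` are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2D convolution of a single-channel frame with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding keeps spatial size
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(0.0, x)

def upsample2(x):
    """Nearest-neighbour 2x upsampling (illustrative decoder choice)."""
    return np.kron(x, np.ones((2, 2)))

def autoencoder_forward(lr_frame, k_enc, k_dec):
    # Encoder: convolution + ReLU extracts features from the LR frame.
    feat = relu(conv2d(lr_frame, k_enc))
    # Decoder: upsample the features and convolve to reconstruct HR output.
    return conv2d(upsample2(feat), k_dec)

def l1_regularized_loss(pred, target, weights, lam=1e-3):
    # Reconstruction error plus an L1 penalty on the weight matrices,
    # as in the abstract; lam is an assumed hyperparameter.
    return np.mean(np.abs(pred - target)) + lam * sum(
        np.abs(w).sum() for w in weights
    )

# Toy usage with random data (real training would use HR video frames
# and update k_enc / k_dec with Adam).
rng = np.random.default_rng(0)
lr = rng.random((8, 8))
k_enc = rng.standard_normal((3, 3)) * 0.1
k_dec = rng.standard_normal((3, 3)) * 0.1
hr_est = autoencoder_forward(lr, k_enc, k_dec)   # shape (16, 16)
loss = l1_regularized_loss(hr_est, np.zeros((16, 16)), [k_enc, k_dec])
```

In practice the gradients of this loss would be computed by backpropagation and the kernels updated with Adam, and a full model would stack several such convolutional layers with many filters per layer.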
