Image Captioning through Cognitive IOT and Machine-Learning Approaches


Tarun Jaiswal, et al.

Abstract

Image captioning is the technique of producing a textual description of an image. Both natural-language processing (NLP) and cognitive science (CS) techniques are used to generate image captions. Subtitles and descriptions are both forms of text displayed on a video that deliver additional or interpretive details for viewers who are deaf or hard of hearing, or who need cues beyond the audio alone. Often the displayed text contains an interpretation or translation of the language spoken in the video. Other uses of text generation have arisen from the needs of different audiences. For instance, captions for deaf viewers and people with severe hearing impairments include descriptions of other acoustic details that such audiences might miss, such as a description of the music or an indication that the narrator is now off-screen. Caption generation is a very inspiring Cognitive Internet of Things (CIoT) and artificial intelligence (AI) task in which a textual description must be generated from a given image. It requires computer-vision methods to recognize the details or content of the image, and a language model from the NLP domain to translate that interpretation into words in the right order. In this survey paper, we give a comprehensive overview of prevalent deep-learning-based caption-generation methods. In addition, we describe the various datasets and the well-known evaluation metrics used in deep-learning-based automatic caption generation.
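The two-stage pipeline the abstract describes (a vision component that extracts image content, followed by a language model that emits words in order) can be sketched with a toy example. This is a minimal illustrative sketch, not any method from the surveyed literature: the pooled-feature "encoder", the five-word vocabulary, and the hand-set transition weights are all invented here for demonstration.

```python
import numpy as np

# Toy sketch of the encoder-decoder captioning idea: an image encoder
# produces a feature vector, and a greedy language-model decoder emits
# one word at a time. Vocabulary and weights are illustrative only.
VOCAB = ["<start>", "<end>", "a", "dog", "runs"]

def encode_image(image):
    """Stand-in for a CNN encoder: global average pool (H, W, C) -> (C,)."""
    return image.mean(axis=(0, 1))

def decode_greedy(features, W_feat, W_prev, max_len=5):
    """Greedy decoding: score each word from the image features and the
    previously emitted word, pick the argmax, stop at <end>."""
    caption, prev = [], VOCAB.index("<start>")
    for _ in range(max_len):
        scores = W_feat @ features + W_prev[prev]
        prev = int(np.argmax(scores))
        if VOCAB[prev] == "<end>":
            break
        caption.append(VOCAB[prev])
    return caption

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))            # toy 4x4 RGB "image"
features = encode_image(image)           # (3,) feature vector

W_feat = np.zeros((len(VOCAB), 3))       # illustrative weights only
W_prev = np.zeros((len(VOCAB), len(VOCAB)))
for p, n in [("<start>", "a"), ("a", "dog"), ("dog", "runs"), ("runs", "<end>")]:
    W_prev[VOCAB.index(p), VOCAB.index(n)] = 1.0

print(" ".join(decode_greedy(features, W_feat, W_prev)))  # a dog runs
```

In real deep-learning captioners the encoder is a trained convolutional network and the decoder a recurrent or transformer language model, but the control flow (encode once, then generate word by word conditioned on the image and the previous word) is the same.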



Article Details

How to Cite
Jaiswal, T., et al. (2021). Image Captioning through Cognitive IOT and Machine-Learning Approaches. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(9), 333–351. Retrieved from https://turcomat.org/index.php/turkbilmat/article/view/3077
Section
Articles