Advanced Image Captioning Using a Combination of CNN and LSTM
Abstract
Image captioning, the automatic generation of a short sentence describing an image's content, has attracted considerable interest in recent years. Training machines to understand image content and generate captions that approach human-level accuracy is a tedious yet interesting task. Various neural-network approaches exist for generating such captions, but they still suffer from problems such as inaccurate captions and the ability to caption only previously seen images. In this paper, the proposed system generates more precise captions using a two-stage model that combines two deep neural network algorithms: a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. The proposed model overcomes problems that arise with traditional CNN and RNN approaches. The model is trained and tested on the Flickr8k dataset.
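The two-stage architecture described in the abstract can be sketched in miniature: a CNN encodes the image into a feature vector, and an LSTM decodes that vector into a token sequence. The paper does not give implementation details, so the following is a minimal NumPy sketch with toy dimensions and random (untrained) weights; the names `lstm_step`, `caption`, and all sizes are illustrative assumptions, not the authors' code. In the real model the weights would be learned and the feature vector would come from a pretrained CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

HID = 8      # LSTM hidden size; also the assumed CNN feature size (toy value)
VOCAB = 10   # toy vocabulary size

# Hypothetical parameters; in the paper these would be learned from Flickr8k.
W = rng.standard_normal((4 * HID, VOCAB + HID)) * 0.1  # gates: input, forget, cell, output
b = np.zeros(4 * HID)
W_out = rng.standard_normal((VOCAB, HID)) * 0.1        # hidden state -> vocabulary logits


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def lstm_step(x, h, c):
    """One LSTM cell update: x is the current token (one-hot), h/c the state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c


def caption(image_feat, max_len=5, start_tok=0, end_tok=1):
    """Greedy decoding: the CNN feature vector initialises the hidden state,
    then the LSTM emits one token per step until the end token or max_len."""
    h = np.tanh(image_feat)      # stand-in for projecting CNN features into the LSTM
    c = np.zeros(HID)
    tok, out = start_tok, []
    for _ in range(max_len):
        x = np.eye(VOCAB)[tok]                 # one-hot encoding of previous token
        h, c = lstm_step(x, h, c)
        tok = int(np.argmax(W_out @ h))        # greedy pick of next token
        if tok == end_tok:
            break
        out.append(tok)
    return out


feat = rng.standard_normal(HID)  # stand-in for CNN-extracted image features
print(caption(feat))
```

With trained weights the same loop would emit word indices that map back to vocabulary entries, forming the caption; beam search is a common replacement for the greedy `argmax` step.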
Article Details
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This licensing allows for any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.