Bloom’s taxonomy assigns cognitive levels to questions, which helps identify the cognitive level of a student. Because doing this manually is slow, an automated process is required. Natural Language Processing (NLP) is an area of computing in which software processes natural language; sentiment analysis, language translation, fake news detection, and grammatical error detection are only a few of its applications.

Tokenization is the process of splitting large blocks of text into smaller units. It divides the original text into tokens, i.e., words and sentences. These tokens aid in comprehending the context and in developing the NLP model: by evaluating the sequence of sentences, tokenization helps in reading the context of the language. Tokenization can be performed using a variety of techniques and libraries; NLTK, Gensim, and Keras are among the libraries that can be used for the task.

This paper provides a solution to the manual process of identifying Bloom’s taxonomy levels.
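To illustrate what tokenization produces, the sketch below implements a deliberately simplified sentence and word tokenizer using Python's `re` module. It is a minimal illustration, not the method used in this paper; library tokenizers such as NLTK's `sent_tokenize` and `word_tokenize` handle abbreviations, contractions, and other edge cases that this naive version ignores.

```python
import re

def sentence_tokenize(text):
    # Naive sentence splitter: break on ., !, or ? followed by whitespace.
    # Real tokenizers (e.g. NLTK's punkt model) also handle abbreviations.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]

def word_tokenize(sentence):
    # Naive word tokenizer: runs of word characters, or single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "Tokenization splits text into tokens. Tokens are words and sentences!"
sentences = sentence_tokenize(text)
tokens = [word_tokenize(s) for s in sentences]
```

Here `sentences` holds the two sentence strings, and `tokens[0]` holds the word-level tokens of the first sentence, including the trailing period as its own token.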