A comparative study of Word Embedding Techniques to extract features from Text
Abstract
Extracting information from text into feature vectors is known as word embedding; it is used to represent the meaning of words in vector form. A number of word embedding techniques have been developed that allow a computer to process natural language and compare the relationships between different words programmatically. In this paper, we first introduce popular word embedding models and discuss the desired properties of a word model; similarity analysis, i.e., testing words for synonymic relations, is then used to compare several of these techniques and determine which performs best.
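The similarity analysis mentioned above is typically carried out with cosine similarity between embedding vectors. As a minimal sketch (the three-dimensional vectors below are hypothetical toy values, not from any of the compared models):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings chosen for illustration: semantically related words
# are given nearby vectors, unrelated words distant ones.
embeddings = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.78, 0.70, 0.12],
    "apple": [0.10, 0.05, 0.90],
}

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
```

With any reasonable embedding model, `sim_royal` should exceed `sim_fruit`, reflecting the closer synonymic relation between "king" and "queen".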
Article Details
How to Cite
et. al., N. K. . (2021). A comparative study of Word Embedding Techniques to extract features from Text. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(12), 3550–3557. https://doi.org/10.17762/turcomat.v12i12.8101
Issue
Section
Articles
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This licensing allows for any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.