The embedding layer was used to transform every word into its corresponding word vector. The rationale behind this transformation was to map each word into a high-dimensional space, representing it with a word vector that captured its semantic meaning and enabled further comparison and computation between words. Word vectors could be pre-trained on a general language corpus, such as Wikipedia, to capture the general semantic meaning of words and the relationships between them. These pre-trained word vectors provided a better representation and richer semantic meaning for each word than randomly initialized word vectors.
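A minimal sketch of this step is shown below, assuming a PyTorch `nn.Embedding` layer. The vocabulary, the embedding dimension (300), and the random matrix standing in for pre-trained vectors are illustrative only; in practice the matrix would be loaded from vectors trained on a general corpus such as Wikipedia (e.g., GloVe or word2vec).

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary; in the real pipeline this comes from the tokenized corpus.
vocab = {"<pad>": 0, "the": 1, "model": 2, "predicts": 3}
embedding_dim = 300

# Stand-in for a pre-trained vector matrix (one row per vocabulary word).
# In practice, rows would be copied from pre-trained word vectors rather than random.
pretrained_matrix = torch.randn(len(vocab), embedding_dim)

# Initialize the embedding layer from the pre-trained matrix.
# freeze=False allows the vectors to be fine-tuned during training.
embedding = nn.Embedding.from_pretrained(pretrained_matrix, freeze=False, padding_idx=0)

# Map a tokenized sentence (word indices) to its sequence of word vectors.
token_ids = torch.tensor([[1, 2, 3]])   # shape: (batch, sequence_length)
word_vectors = embedding(token_ids)     # shape: (batch, sequence_length, embedding_dim)
print(word_vectors.shape)               # torch.Size([1, 3, 300])
```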