Attention layers for important descriptors

Sangmin Seo, Jonghwan Choi, Sanghyun Park, Jaegyoon Ahn

In machine translation, the attention mechanism is mainly designed to address long-term dependencies when the input sequence is long. When the decoder predicts a word, the attention mechanism puts more focus on the input words that are most related to it. In this study, we designed the attention layer to focus on the most relevant descriptors. The latent representation of the complex (encoded vector, e) is input to an attention layer that calculates the contribution of each descriptor to the affinity prediction.

The encoded vector e and each row E_i of the embedding matrix E are passed through dense layers to produce a query vector q, key vectors k_i, and value vectors v_i. Note that in this study the key vector k_i and the value vector v_i are identical. The similarity between the query vector q and each key vector k_i (0 ≤ i ≤ u) is computed as their inner product, and the similarities are converted into descriptor weights via softmax. The weighted sum of the value vectors v_i under these descriptor weights forms the context vector, which is passed through one dense layer to produce the encoded context vector c. This encoded context vector is then used, together with the encoded vector e, to predict the binding affinity.
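The sketch below illustrates this descriptor attention step in PyTorch. It is a minimal example under assumed dimensions: the class name DescriptorAttention, the layer sizes, and the number of descriptors are illustrative choices, not details taken from the protocol.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DescriptorAttention(nn.Module):
    """Dot-product attention over descriptor embeddings (illustrative sketch)."""

    def __init__(self, embed_dim, attn_dim):
        super().__init__()
        # Dense layers mapping the encoded complex vector e to a query q,
        # and each embedding row E_i to a key k_i (here k_i also serves as v_i).
        self.query_dense = nn.Linear(embed_dim, attn_dim)
        self.key_dense = nn.Linear(embed_dim, attn_dim)
        # Dense layer turning the weighted sum of values into the encoded context vector c.
        self.context_dense = nn.Linear(attn_dim, attn_dim)

    def forward(self, e, E):
        # e: (batch, embed_dim) encoded complex vector
        # E: (batch, u + 1, embed_dim) descriptor embedding matrix, rows E_i
        q = self.query_dense(e)                        # (batch, attn_dim)
        k = self.key_dense(E)                          # (batch, u + 1, attn_dim)
        v = k                                          # key and value vectors are identical here
        # Inner-product similarity between q and each k_i, then softmax -> descriptor weights
        scores = torch.einsum("bd,bud->bu", q, k)      # (batch, u + 1)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum of value vectors gives the context vector
        context = torch.einsum("bu,bud->bd", weights, v)
        c = self.context_dense(context)                # encoded context vector
        return c, weights
```

A hypothetical usage, where the encoded context vector c is concatenated with the encoded vector e and fed to a small prediction head for the binding affinity (the dimensions and head are again assumptions for illustration):

```python
attn = DescriptorAttention(embed_dim=128, attn_dim=128)
e = torch.randn(32, 128)           # encoded complex vectors for a batch of 32
E = torch.randn(32, 200, 128)      # 200 descriptor embeddings per complex
c, w = attn(e, E)                  # w holds the per-descriptor attention weights
affinity = nn.Linear(256, 1)(torch.cat([e, c], dim=-1))
```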
