The SE (Squeeze-and-Excitation) Block was proposed by Hu et al. (2018). The network learns channel weights according to the loss, so that the weights of effective feature maps increase and the weights of ineffective or weakly contributing feature maps decrease, allowing the model to achieve better results. The SE Block consists of two parts, Squeeze and Excitation. The Squeeze part uses global average pooling to compress the input feature map X of size C×H×W into a feature descriptor s of size C×1×1:

$$s_c = F_{sq}(X_c) = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} X_c(i, j)$$
The Excitation part contains two fully connected layers, the first followed by a ReLU activation and the second by a sigmoid activation. The fully connected layers fuse the information from all input channels, and the sigmoid function maps the output to the range 0–1, which can be represented as:

$$E = F_{ex}(s) = \sigma\big(W_2\,\delta(W_1 s)\big)$$
where s is the global descriptor obtained by the Squeeze part, δ is the ReLU function, σ is the sigmoid function, and W1 and W2 are the weights of the two fully connected layers. Finally, the per-channel weights E obtained are merged with the original feature map by channel-wise multiplication:

$$\tilde{X}_c = E_c \cdot X_c$$
As a general module, the SE Block can be integrated into existing CNNs to add an attention mechanism to the network by inferring attention maps along the channel dimension.
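
The following is a minimal sketch of the block described above, written in PyTorch as an assumed framework (the text does not name one). The reduction ratio r between the two fully connected layers is also an assumption; Hu et al. (2018) use r = 16 by default.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: learns per-channel weights in (0, 1)."""

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        # Squeeze: global average pooling, C x H x W -> C x 1 x 1
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: FC -> ReLU (delta) -> FC -> sigmoid (sigma)
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.squeeze(x).view(b, c)            # global descriptor s
        e = self.excitation(s).view(b, c, 1, 1)   # channel weights E
        return x * e                              # scale original features


# Usage: the block is shape-preserving, so it can be dropped in after
# any convolutional stage of an existing CNN.
x = torch.randn(2, 64, 32, 32)
assert SEBlock(64)(x).shape == x.shape
```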
