Detailed information on the neural network models is as follows:
1. DNN models.
The pretrained VGG-Face was proposed by Parkhi et al. (2015) as a pretrained model for facial identity recognition. The architecture of the VGG-Face model is presented in Fig. 1A: it has 13 convolutional layers and 3 fully connected layers. The model takes input images of size 224×224×3, and its output layer contains 2622 units, one per facial identity.
The VGG-16 was proposed by Simonyan & Zisserman (2014) and is trained to classify 1000 object categories from the ImageNet dataset. The only structural difference between the VGG-16 and the pretrained VGG-Face is that the last layer of the VGG-16 has 1000 units instead of 2622.
The untrained VGG-Face has the same architecture as the pretrained VGG-Face. The only difference is that its connection weights are not pretrained but are randomly assigned by Xavier normal initialization (https://keras.io/api/layers/initializers/). An illustrative sketch of this architecture is given below.
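For illustration only, the following is a minimal Keras sketch (not the attached code) of a VGG-Face-style network with 13 convolutional and 3 fully connected layers. The num_outputs argument switches between the 2622-unit VGG-Face output and the 1000-unit VGG-16 output, and GlorotNormal corresponds to Xavier normal initialization for the untrained variant. The layer names (e.g., conv5_3) are our own labels here and may differ from those in the attached code.

from tensorflow.keras import layers, models, initializers

def build_vgg_face_style(num_outputs=2622):
    # 13 convolutional layers in five blocks (2, 2, 3, 3, 3), each block followed by max pooling.
    init = initializers.GlorotNormal()  # Xavier normal initialization (untrained variant)
    x = inputs = layers.Input(shape=(224, 224, 3))
    for block, (n_convs, filters) in enumerate(
            [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)], start=1):
        for i in range(1, n_convs + 1):
            x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu",
                              kernel_initializer=init, name=f"conv{block}_{i}")(x)
        x = layers.MaxPooling2D((2, 2), strides=(2, 2), name=f"pool{block}")(x)
    # 3 fully connected layers; the last one has 2622 (VGG-Face) or 1000 (VGG-16) units.
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation="relu", kernel_initializer=init, name="fc6")(x)
    x = layers.Dense(4096, activation="relu", kernel_initializer=init, name="fc7")(x)
    outputs = layers.Dense(num_outputs, activation="softmax",
                           kernel_initializer=init, name="fc8")(x)
    return models.Model(inputs, outputs)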
2. Activation extraction.
In the present study, we did not train the above DNNs; we only extracted the activations of the units in the conv5-3 layer (the last convolutional layer) for further analysis. The face images were taken from the KDEF, NimStim, RaFD, and AffectNet databases.
The Python code for constructing the three DNN models with Keras and extracting activations is attached; a minimal illustrative sketch of the extraction step follows below. The weight files of the three DNNs can be downloaded at https://zenodo.org/record/5583295#.Y8P8J3ZBw7c.
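The sketch below assumes the model built as in the earlier sketch; the layer name conv5_3, the weight filename, and the image filename are placeholders for illustration and are not taken from the attached code.

import numpy as np
from tensorflow.keras import models
from tensorflow.keras.preprocessing import image

model = build_vgg_face_style(num_outputs=2622)   # see the sketch above
# model.load_weights("vgg_face_weights.h5")      # hypothetical filename for the downloaded weights

# Truncate the network at conv5-3, the last convolutional layer.
extractor = models.Model(inputs=model.input,
                         outputs=model.get_layer("conv5_3").output)

# Load one face image, resize it to 224x224, and add a batch dimension.
img = image.load_img("face_example.jpg", target_size=(224, 224))   # placeholder image path
x = np.expand_dims(image.img_to_array(img), axis=0)

# Shape (1, 14, 14, 512): one 14x14 feature map for each of the 512 conv5-3 filters.
activations = extractor.predict(x)
unit_activations = activations.reshape(-1)   # flatten to one vector per image if desired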
References:
Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep Face Recognition. Proceedings of the British Machine Vision Conference 2015, 41.1-41.12. https://doi.org/10.5244/C.29.41
Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations (ICLR 2015). https://doi.org/10.48550/ARXIV.1409.1556