The VGG network is based on the raw pretrained VGG face network (referred to as VGG-Raw), publicly available at http://www.robots.ox.ac.uk/~vgg/software/vgg_face/. This network consists of 13 convolutional layers (eight more than AlexNet) and three fully connected layers (FCLs; the same number as AlexNet). The dataset used to train this network consisted of more than 2.5 million face images, each labeled with one of 2622 person identities. Details of the network architecture, its training dataset, and the training procedure can be found in (31).
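As a point of reference, the following PyTorch sketch reproduces this topology. It assumes (our assumption, not the authors' code) that torchvision's VGG-16 matches VGG-Raw's 13-conv/3-FCL layout; the actual pretrained face weights must be obtained from the URL above.

    import torch.nn as nn
    import torchvision.models as models

    # VGG-Raw follows the VGG-16 layout: 13 convolutional layers plus 3 FCLs.
    # Only the architecture is reproduced here; the pretrained VGG face
    # weights are distributed separately (see the URL above).
    vgg_raw = models.vgg16(weights=None)
    vgg_raw.classifier[6] = nn.Linear(4096, 2622)  # 2622 person identities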

Similar to the EIG network, the VGG network is obtained by fine-tuning this pretrained VGG-Raw network on the relevant image sets. For our FIV experiments, we used the same bootstrapped training dataset of FIV images described above. We replaced VGG-Raw's top 2622-way fully connected classification layer [i.e., its third FCL (TFCL)] with a 25-way classification layer for the FIV identities. Training of the VGG network started from the pretrained VGG-Raw weights, except for this final layer, which was initialized with random weights. We trained this new classification layer (TFCL) and fine-tuned the weights of the top convolutional layer (TCL), the first FCL (FFCL), and the second FCL (SFCL) using SGD to minimize a cross-entropy loss.
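A minimal sketch of this selective fine-tuning in PyTorch, continuing from the vgg_raw sketch above. The specific layer indices follow torchvision's VGG-16 layout and are our assumption rather than the authors' implementation.

    import torch
    import torch.nn as nn

    vgg = vgg_raw  # start from the pretrained VGG-Raw weights

    # Replace the 2622-way TFCL with a randomly initialized 25-way layer
    # for the FIV identities.
    vgg.classifier[6] = nn.Linear(4096, 25)

    # Freeze all weights, then unfreeze only the layers that are fine-tuned:
    # TCL, FFCL, SFCL, and the new 25-way TFCL.
    for p in vgg.parameters():
        p.requires_grad = False
    for layer in (vgg.features[28],    # TCL: last Conv2d in torchvision VGG-16
                  vgg.classifier[0],   # FFCL
                  vgg.classifier[3],   # SFCL
                  vgg.classifier[6]):  # new TFCL
        for p in layer.parameters():
            p.requires_grad = True

    # SGD on the trainable parameters with a cross-entropy loss; no learning
    # rate is stated for FIV, so we borrow the 0.0001 used for FIV-S below.
    optimizer = torch.optim.SGD(
        (p for p in vgg.parameters() if p.requires_grad), lr=0.0001)
    criterion = nn.CrossEntropyLoss()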

For our FIV-S experiments, we replaced the final classification layer in the pretrained VGG-Raw network with a 500-way classification layer. To train this network, we constructed a new dataset in which both the person identities and the training images come from the generative model. We first randomly sampled 500 identities as pairs of shapes and textures from Pr(S, T). We then rendered each identity under 400 viewing conditions randomly drawn from Pr(L, P), identical to EIG's training dataset. This procedure gave us a total of 200,000 images (500 identities × 400 views) and their corresponding identity labels (1 to 500). In line with the training of the VGG-Raw network, the VGG network, like the EIG network, used two standard data augmentation methods: converting an image to grayscale with low probability (0.1) and mirror-reflecting an image with probability 0.5. As in our FIV experiments, we initialized the weights of the VGG network using those of the pretrained VGG-Raw network, except for its classification layer, which was initialized with random weights. We then fine-tuned the weights of its TCL, FFCL, and SFCL and trained its classification layer using SGD to minimize a cross-entropy loss, with a learning rate of 0.0001 and minibatches of 20 images.
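Schematically, the dataset construction amounts to the nested sampling loop below. Here sample_shape_texture, sample_lighting_pose, and render are hypothetical stand-ins for the generative model's samplers and renderer, which are not shown in this protocol.

    N_IDENTITIES, N_VIEWS = 500, 400  # 500 x 400 = 200,000 labeled images

    dataset = []
    for identity in range(N_IDENTITIES):
        # Hypothetical sampler: one (shape, texture) pair per identity,
        # drawn from the generative model's prior Pr(S, T).
        shape, texture = sample_shape_texture()
        for _ in range(N_VIEWS):
            # Hypothetical sampler for viewing conditions from Pr(L, P).
            lighting, pose = sample_lighting_pose()
            image = render(shape, texture, lighting, pose)  # hypothetical renderer
            dataset.append((image, identity))  # identity labels 0..499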
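The augmentation and optimization settings map directly onto standard torchvision transforms and an SGD loop. A minimal sketch, assuming the network vgg from the sketches above (with its final layer replaced by a 500-way classifier) and a hypothetical train_loader yielding minibatches of 20 rendered images:

    import torch
    import torch.nn as nn
    from torchvision import transforms

    # Augmentations as described: grayscale with probability 0.1 (channel
    # count is preserved) and horizontal mirror reflection with probability 0.5.
    augment = transforms.Compose([
        transforms.RandomGrayscale(p=0.1),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),
    ])

    vgg.classifier[6] = nn.Linear(4096, 500)  # 500-way FIV-S classifier

    optimizer = torch.optim.SGD(
        (p for p in vgg.parameters() if p.requires_grad), lr=0.0001)
    criterion = nn.CrossEntropyLoss()

    for images, labels in train_loader:  # hypothetical loader, batch size 20
        optimizer.zero_grad()
        loss = criterion(vgg(images), labels)
        loss.backward()
        optimizer.step()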
