Infomax

Paul Ardin, Fei Peng, Michael Mangan, Konstantinos Lagogiannis, Barbara Webb

For comparison, we also implemented the Infomax algorithm, a continuous two-layer network, closely following the method described in [11], except with a much higher learning rate. The input layer has N = 360 units, and the normalized intensities of the 36 × 10 image pixels are mapped row by row onto the input layer to provide the activation level of each unit, $x_i$. The output layer also has 360 units and is fully connected to the input layer. The activation $h_i$ of each unit of the output layer is given by:

$$h_i = \sum_{j=1}^{N} w_{ij} x_j$$
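As a minimal sketch of this forward pass (the image data, weight initialization scale, and row-major flattening orientation here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 360                       # input and output layer sizes (36 x 10 pixels)
image = rng.random((10, 36))  # placeholder for one normalized 36 x 10 view

x = image.flatten()           # map the pixel rows onto the input units x_j
W = rng.standard_normal((N, N)) / np.sqrt(N)  # random initial weights w_ij

h = W @ x                     # activations h_i = sum_j w_ij * x_j
```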
where $w_{ij}$ is the weight of the connection from the $j$th input unit to the $i$th output unit. The output $y_i$ is then given by:

$$y_i = \tanh(h_i)$$
Each image is learnt in turn by adjusting all the weights (initialized with random values) by:

$$\Delta w_{ij} = \frac{\eta}{N}\left(w_{ij} - (y_i + h_i)\sum_{k=1}^{N} h_k w_{kj}\right)$$
where $\eta = 1.1$ is the learning rate.
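One learning step could be sketched as follows; the tanh output nonlinearity is assumed from the Infomax method of [11], and the vectorized form of the update is a reconstruction of the rule described above:

```python
import numpy as np

def infomax_update(W, x, eta=1.1):
    """One Infomax learning step for a single image vector x.

    A sketch under stated assumptions: W has shape (N, N), x is the
    flattened, normalized image (length N), and the output
    nonlinearity is tanh, as in [11].
    """
    N = W.shape[0]
    h = W @ x              # linear activations h_i
    y = np.tanh(h)         # output units y_i (assumed tanh)
    # Delta w_ij = (eta / N) * (w_ij - (y_i + h_i) * sum_k h_k w_kj)
    W += (eta / N) * (W - np.outer(y + h, h @ W))
    return W
```

Updating the whole weight matrix at once with `np.outer` avoids an explicit double loop over i and j but computes the same per-weight rule.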

Subsequently, the novelty of an image is given by the summed absolute activation of the output layer:

$$d(\mathbf{x}) = \sum_{i=1}^{N} |h_i|$$
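The novelty score can then be computed directly from the learned weights; summing absolute values follows the novelty measure of [11] (an assumption where the text above says only "summed activation"):

```python
import numpy as np

def novelty(W, x):
    """Novelty of image vector x: summed absolute output-layer activation."""
    return np.sum(np.abs(W @ x))
```

Familiar (trained) images should yield lower novelty scores than unseen ones, which is what makes the measure usable for route recognition.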