Neural network algorithm

Stefanie Warnat-Herresthal, Hartmut Schultze, Krishnaprasad Lingadahalli Shastry, Sathyanarayanan Manamohan, Saikat Mukherjee, Vishesh Garg, Ravi Sarveswara, Kristian Händler, Peter Pickkers, N. Ahmad Aziz, Sofia Ktena, Florian Tran, Michael Bitzer, Stephan Ossowski, Nicolas Casadei, Christian Herr, Daniel Petersheim, Uta Behrends, Fabian Kern, Tobias Fehlmann, Philipp Schommers, Clara Lehmann, Max Augustin, Jan Rybniker, Janine Altmüller, Neha Mishra, Joana P. Bernardes, Benjamin Krämer, Lorenzo Bonaguro, Jonas Schulte-Schrepping, Elena De Domenico, Christian Siever, Michael Kraut, Milind Desai, Bruno Monnet, Maria Saridaki, Charles Martin Siegel, Anna Drews, Melanie Nuesch-Germano, Heidi Theis, Jan Heyckendorf, Stefan Schreiber, Sarah Kim-Hellmuth, Jacob Nattermann, Dirk Skowasch, Ingo Kurth, Andreas Keller, Robert Bals, Peter Nürnberg, Olaf Rieß

We leveraged a deep neural network with a sequential architecture as implemented in Keras (v2.3.1)28. Keras is an open-source software library that provides a Python interface to neural networks; its API was developed with a focus on fast experimentation and is widely used in deep-learning research. The model, which was already available in Keras for R from the previous study3, was translated from R to Python to make it compatible with the SLL (Supplementary Information). In brief, the neural network consists of one input layer, eight hidden layers and one output layer. The input layer is densely connected, consists of 256 nodes and uses a rectified linear unit (ReLU) activation function with a dropout rate of 40%. From the first to the eighth hidden layer, the number of nodes is reduced from 1,024 to 64, and all hidden layers use a ReLU activation function, kernel regularization with an L2 regularization factor of 0.005 and a dropout rate of 30%. The output layer is densely connected and consists of one node with a sigmoid activation function. The model is configured for training with Adam optimization and computes the binary cross-entropy loss between true and predicted labels.
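
For illustration, the architecture described above can be sketched with the Keras Sequential API roughly as follows. This is a minimal sketch, not the authors' reference implementation: the exact widths of the eight hidden layers between 1,024 and 64 nodes, the input dimensionality (n_features) and the import path (standalone Keras 2.3.1 versus tensorflow.keras) are assumptions; the GitHub repository cited below should be consulted for the original code.

from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_model(n_features,
                hidden_units=(1024, 512, 512, 256, 256, 128, 128, 64)):
    # hidden_units is an assumed schedule from 1,024 down to 64 nodes;
    # the original layer widths may differ.
    model = keras.Sequential()
    # Densely connected input layer: 256 nodes, ReLU activation, 40% dropout.
    model.add(layers.Dense(256, activation="relu", input_shape=(n_features,)))
    model.add(layers.Dropout(0.4))
    # Eight hidden layers, each with ReLU activation, L2 kernel
    # regularization (factor 0.005) and 30% dropout.
    for units in hidden_units:
        model.add(layers.Dense(units, activation="relu",
                               kernel_regularizer=regularizers.l2(0.005)))
        model.add(layers.Dropout(0.3))
    # Output layer: a single node with sigmoid activation for binary classification.
    model.add(layers.Dense(1, activation="sigmoid"))
    # Adam optimization with binary cross-entropy loss, as described above.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model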

The same model is used both for training the individual nodes and for SL. The model is trained over 100 epochs with varying batch sizes; batch sizes of 8, 16, 32, 64 and 128 are used, depending on the number of training samples. The full code for the model is provided on GitHub (https://github.com/schultzelab/swarm_learning/).
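
A training call under the stated settings could look like the sketch below, reusing the build_model helper from the sketch above. The batch-size heuristic (choosing from 8 to 128 according to the number of training samples), the placeholder data shapes and the absence of any SL-specific callbacks are assumptions for illustration only.

import numpy as np

def pick_batch_size(n_samples):
    # Hypothetical heuristic: larger training sets use larger batches, capped at 128.
    for size in (8, 16, 32, 64, 128):
        if n_samples <= size * 10:
            return size
    return 128

# Placeholder data standing in for the real training samples and labels.
x_train = np.random.rand(500, 12000).astype("float32")
y_train = np.random.randint(0, 2, size=(500, 1))

model = build_model(n_features=x_train.shape[1])
model.fit(x_train, y_train,
          epochs=100,
          batch_size=pick_batch_size(len(x_train)))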
