2.3.3. Optimization procedure

Eric Seifert, Jan Tode, Amelie Pielen, Dirk Theisen-Kunde, Carsten Framme, Johann Roider, Yoko Miura, Reginald Birngruber, Ralf Brinkmann

Several settings of the algorithm described in the previous chapter were introduced but not quantified. The values of these settings were determined by an automatic optimization procedure: a gradient descent process that maximized the area under the receiver operating characteristic (ROC) curve (details in chapter 2.5).
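The protocol does not specify the scoring model or how gradients of the AUC (which is piecewise constant in the parameters) were obtained. As one possible illustration only, the sketch below assumes a linear scorer and ascends a sigmoid-smoothed surrogate of the pairwise AUC using finite-difference gradients; the names `auc`, `soft_auc`, and `optimize` are ours, not from the protocol:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the fraction of (positive, negative) pairs ranked correctly (ties = 0.5)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def soft_auc(scores, labels, tau=1.0):
    """Differentiable surrogate of the AUC: sigmoid-smoothed pairwise ranking."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (1.0 / (1.0 + np.exp(-diff / tau))).mean()

def optimize(w, X, y, lr=0.5, eps=1e-5, steps=100, tau=1.0):
    """Gradient ascent on the smoothed AUC of a linear scorer X @ w,
    with gradients estimated by central finite differences."""
    w = w.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(w.size):
            up, dn = w.copy(), w.copy()
            up[i] += eps
            dn[i] -= eps
            grad[i] = (soft_auc(X @ up, y, tau) - soft_auc(X @ dn, y, tau)) / (2 * eps)
        w += lr * grad  # ascend: we maximize, not minimize
    return w
```

After optimization, the exact (non-smoothed) AUC of the tuned parameters can be evaluated with `auc` to verify the surrogate actually improved the target metric.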

This type of optimization belongs to the subcategory of machine learning known as supervised learning, which requires labeled data for the optimization / training process. It is good practice to perform optimization, threshold selection, and testing on separate datasets, with no data shared between any two of them. The difference between the performance metrics obtained on two such datasets measures the degree to which the algorithm or the threshold selection is overfitted to one dataset. The labeling / classification process, the datasets, and the performance metrics are described in the following chapters.
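The three-way separation described above can be sketched as follows. The split fractions, the use of Youden's J statistic for threshold selection, and all function names are our assumptions for illustration; the protocol's actual threshold criterion is given in chapter 2.5:

```python
import numpy as np

def split(X, y, frac=(0.6, 0.2, 0.2), seed=0):
    """Shuffle the data and cut it into three disjoint subsets:
    optimization (training), threshold selection, and testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_tr = int(frac[0] * len(y))
    n_va = int(frac[1] * len(y))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

def select_threshold(scores, labels):
    """Pick the score threshold maximizing Youden's J = TPR - FPR
    on the held-out threshold-selection set (one assumed criterion)."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tpr = pred[labels == 1].mean()
        fpr = pred[labels == 0].mean()
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

def overfit_gap(metric_a, metric_b):
    """Difference of one performance metric between two disjoint datasets;
    a large gap indicates overfitting to the first dataset."""
    return metric_a - metric_b
```

Because no sample appears in more than one subset, the gap between, say, the training-set and test-set AUC is an honest estimate of how strongly the parameters (or the chosen threshold) are tuned to the training data.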
