Gradient boosting trees (GBT)

Jarne Verhaeghe
Sofie A. M. Dhaese
Thomas De Corte
David Vander Mijnsbrugge
Heleen Aardema
Jan G. Zijlstra
Alain G. Verstraete
Veronique Stove
Pieter Colin
Femke Ongenae
Jan J. De Waele
Sofie Van Hoecke

Four hyperparameters of the GBT ensemble were optimized using cross-validation: tree depth, leaf regularization, border count, and the quantile coverage p. The final hyperparameters of the GBT new sub-models were 4, 1, 250, and 0.80, respectively; for the GBT prev model, they were 3, 4, 50, and 0.82. In both the new and prev sub-models, the sub-model responsible for the regression output used the loss function Quantile:alpha=0.5 (the median). The coverage p sets the interval bounds at alpha = 0.5 ± p/2: accordingly, the new upper and lower quantile models used the loss functions Quantile:alpha=0.90 and Quantile:alpha=0.10, respectively, while the prev upper and lower quantile models used Quantile:alpha=0.91 and Quantile:alpha=0.09.
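The hyperparameter names (in particular border count) and the Quantile:alpha=... loss strings follow CatBoost's conventions, so the sketch below assumes the CatBoost library. The constructor arguments depth, l2_leaf_reg, and border_count are CatBoost's names for tree depth, leaf regularization, and border count; the protocol itself does not name the library, so treat this as an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch, assuming CatBoost. Each ensemble consists of three
# sub-models sharing the cross-validated hyperparameters and differing
# only in the quantile loss: median (alpha = 0.5) plus upper/lower
# bounds at alpha = 0.5 +/- p/2.
from catboost import CatBoostRegressor


def make_submodels(depth, l2_leaf_reg, border_count, coverage_p):
    """Build median, upper, and lower quantile GBT sub-models."""
    shared = dict(depth=depth, l2_leaf_reg=l2_leaf_reg,
                  border_count=border_count, verbose=False)
    upper = 0.5 + coverage_p / 2   # e.g. p = 0.80 -> alpha = 0.90
    lower = 0.5 - coverage_p / 2   # e.g. p = 0.80 -> alpha = 0.10
    return {
        "median": CatBoostRegressor(loss_function="Quantile:alpha=0.5", **shared),
        "upper": CatBoostRegressor(loss_function=f"Quantile:alpha={upper:.2f}", **shared),
        "lower": CatBoostRegressor(loss_function=f"Quantile:alpha={lower:.2f}", **shared),
    }


# GBT new: depth 4, leaf regularization 1, border count 250, p = 0.80
gbt_new = make_submodels(depth=4, l2_leaf_reg=1, border_count=250, coverage_p=0.80)

# GBT prev: depth 3, leaf regularization 4, border count 50, p = 0.82
gbt_prev = make_submodels(depth=3, l2_leaf_reg=4, border_count=50, coverage_p=0.82)
```

In use, all three sub-models of an ensemble would be fit on the same training data; the median model supplies the point prediction, while the upper and lower models bound an 80% (new) or 82% (prev) prediction interval around it.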
