The system proposed in this work aims to provide operationally focused clinical decision support during periods of crisis in hospital. We propose a strategy in which hospital bed managers run these models from within a decision-support tool during a period of high influx of patients with infectious disease. The models would identify the patients who are most likely to be ready for discharge within the next 24 hours. A medical professional would then be assigned to screen the highest-ranked patients to confirm the models’ predictions. Once confirmed, hospital bed managers would be able to proactively make discharge arrangements for that patient, releasing them from hospital as quickly as possible and saving valuable time during a critical situation. Predictions can be made for all patients currently in hospital at any time and can therefore incorporate new data as it becomes available. In this study, we simulated predictions being made every 24 hours, with the initial prediction made on the day of a patient’s admission to the main hospital.
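As an illustration of this daily screening cycle, the sketch below scores every current in-patient with the model matching their admission type and the number of days elapsed since admission, then returns a ranked shortlist for clinician confirmation. The data layout, the `models` dictionary keyed by (admission type, day) and the `predict_proba` interface are illustrative assumptions, not details taken from the study.

```python
def daily_discharge_screen(inpatients, models, today, top_k=20):
    """Score all current in-patients and return those most likely to be
    discharged within the next 24 hours, for clinician screening.

    inpatients: iterable of dicts with 'id', 'admission_type' ('p' or 'e'),
        'admission_date' (a datetime.date) and 'features' (model inputs).
    models: dict keyed by (admission_type, days_elapsed) holding fitted
        classifiers that expose predict_proba.
    """
    scored = []
    for patient in inpatients:
        t = (today - patient["admission_date"]).days
        if t > 7:
            continue  # stays are truncated at 7 days in this study, so no model exists beyond t = 7
        model = models[(patient["admission_type"], t)]
        prob = model.predict_proba([patient["features"]])[0, 1]
        scored.append((patient["id"], prob))
    scored.sort(key=lambda pair: pair[1], reverse=True)  # most likely discharge first
    return scored[:top_k]
```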
We constructed individual models for each patient admission group (planned and emergency admissions) and for each day elapsed since a patient’s admission to hospital. Elapsed times since admission t ∈ {0, 1, …, 7} were considered, with t = 0 representing the day a patient was admitted to the general hospital. For this study, patient stays were truncated at 7 days. Consequently, 16 independent models (2 admission groups × 8 values of t) were developed for each model architecture. The sub-datasets used to train and evaluate the models are denoted Dpt for planned admissions and Det for emergency admissions, with the first subscript indicating the patient admission type and the second indicating the time elapsed in days since admission (Fig 1). For example, as shown in Fig 1, if Patient 1 is a planned patient who arrives in hospital on 02/02/2016 and stays for 2 days, they will be included in datasets Dp0 and Dp1. If Patient 3, a different planned admission, arrives in hospital on 03/02/2016 and stays for 6 days, they will also be included in datasets Dp0 and Dp1 along with Patient 1, and will additionally be included in datasets Dp2, Dp3, Dp4 and Dp5.
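A minimal sketch of this stratification is given below, assuming a simple table with one row per admission and an integer length of stay in days; the column names and the pandas-based layout are illustrative choices rather than the study’s actual data schema.

```python
import pandas as pd

def build_sub_datasets(admissions, max_t=7):
    """Assign each admission to the sub-datasets D_{a,t} it belongs to.

    admissions: DataFrame with one row per hospital stay and columns
        'patient_id', 'admission_type' ('p' planned, 'e' emergency)
        and 'length_of_stay' in whole days.
    A patient staying n days contributes a record to each of
    D_{a,0}, ..., D_{a,min(n-1, max_t)}.
    """
    sub_datasets = {}
    for _, stay in admissions.iterrows():
        last_t = min(stay["length_of_stay"] - 1, max_t)
        for t in range(last_t + 1):
            key = (stay["admission_type"], t)
            sub_datasets.setdefault(key, []).append(stay["patient_id"])
    return sub_datasets

# Example mirroring Fig 1: a planned patient staying 2 days appears in
# D_{p,0} and D_{p,1}; one staying 6 days appears in D_{p,0}, ..., D_{p,5}.
example = pd.DataFrame({
    "patient_id": [1, 3],
    "admission_type": ["p", "p"],
    "length_of_stay": [2, 6],
})
print(build_sub_datasets(example))
# {('p', 0): [1, 3], ('p', 1): [1, 3], ('p', 2): [3], ..., ('p', 5): [3]}
```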
An illustration of how the sub-datasets were stratified. The figure contains three patients with emergency admissions whose stays lasted at least 1 day (IDs 1, 3, 4); at day t = 3, only two of the example patients remained (IDs 3, 4); and on day t = 7, only one of these patients remained in hospital (ID 4), so an 8th-day discharge prediction could be made only for this remaining patient. A comparable example is also displayed for planned admissions.
Each of the sub-datasets was balanced by down-sampling to improve training and to allow unbiased testing of the models; details of the down-sampling strategy can be found in Appendix A in S1 File. The resulting size of each sub-dataset is summarised in Table 2. Diminishing quantities of data were available for increasing t, as the sub-datasets only include patients who had not been discharged after t days.
Total number of unique patient admissions to hospital within each sub-dataset Dpt or Det, in which t denotes the time elapsed in days since the patient’s admission.
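The exact balancing procedure is described in Appendix A in S1 File; the snippet below is only a minimal sketch of the general idea, randomly down-sampling the majority class of a sub-dataset so that both outcome classes are equally represented. The column name and the use of pandas are assumptions made for illustration.

```python
import pandas as pd

def downsample_to_balance(df, label_col="discharged_within_24h", seed=0):
    """Randomly down-sample the majority class so that positive and
    negative examples are equally represented in a sub-dataset."""
    minority_size = df[label_col].value_counts().min()
    balanced = (
        df.groupby(label_col, group_keys=False)
          .apply(lambda grp: grp.sample(n=minority_size, random_state=seed))
    )
    return balanced.sample(frac=1, random_state=seed)  # shuffle the rows
```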
In this work, a prediction by a model that a patient will be discharged within the next 24 hours is denoted a positive prediction, whilst a prediction that a patient will not be discharged within the next 24 hours is denoted a negative prediction. Each proposed model ranks patients by the probability score it predicts for each patient, i.e. by their likelihood of discharge.
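As a small illustration of how probability scores map onto positive and negative predictions and onto the resulting ranking, consider the sketch below; the 0.5 decision threshold and the function interface are assumptions for demonstration, not values taken from the study.

```python
import numpy as np

def rank_patients(patient_ids, probabilities, threshold=0.5):
    """Convert per-patient discharge probabilities into positive/negative
    predictions and a ranked list (most likely to be discharged first).

    A positive prediction means the model expects discharge within the
    next 24 hours; the threshold here is an assumed default.
    """
    order = np.argsort(probabilities)[::-1]  # highest score first
    ranked = [
        (patient_ids[i], probabilities[i], probabilities[i] >= threshold)
        for i in order
    ]
    return ranked  # (id, probability, positive_prediction) tuples
```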