Interpretation
Sleep Disordered Breathing
Analysis of nocturnal paediatric oximetry uses machine learning (ML). If Regression is selected, a regression model estimates the apnoea-hypopnoea index (AHI). If Classification is selected, a risk classification and an accompanying probability are computed.
Regression
Predicted AHI
If Regression is selected, a trained model produces a point estimate of the AHI. Uncertainty is quantified with a Laplace distribution that is computed after training and embedded within the model. At prediction time, this distribution is used to construct a confidence interval around the AHI point estimate. The Laplace distribution is a robust, conservative choice: its scale parameter is estimated from absolute deviations, which are less sensitive to outliers than variance-based estimates, and its heavier tails yield wider intervals than a Gaussian.
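The following is a minimal sketch of this idea, assuming the Laplace scale is fitted to model residuals (its maximum-likelihood estimate is the mean absolute residual); the model, data, and use of in-sample residuals are illustrative assumptions, not the product's implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.laplace(scale=1.5, size=200)

model = LinearRegression().fit(X, y)

# After training, fit the Laplace scale b to the residuals. In practice,
# held-out residuals would be preferable; training residuals are used
# here for brevity.
residuals = y - model.predict(X)
b = np.mean(np.abs(residuals))  # MLE of the Laplace scale

def predict_with_interval(x, alpha=0.05):
    """Point estimate plus a central (1 - alpha) Laplace interval."""
    mu = model.predict(x.reshape(1, -1))[0]
    # Laplace quantiles give a central interval of mu +/- b * ln(1/alpha).
    half_width = b * np.log(1.0 / alpha)
    return mu, (mu - half_width, mu + half_width)

ahi, (lo, hi) = predict_with_interval(X[0])
print(f"Predicted AHI: {ahi:.1f} (95% interval {lo:.1f} to {hi:.1f})")
```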
Classification
Score
If Classification is selected, a binary classifier produces either a unitless decision score or a probability, depending on the model type. For non-probabilistic models, the score is used to compute an accompanying calibrated probability.
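A brief sketch of the two score types, using scikit-learn models as stand-ins; the specific models shown are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Probabilistic model: outputs a probability directly.
prob_model = LogisticRegression().fit(X, y)
print(prob_model.predict_proba(X[:1])[0, 1])  # P(class 1)

# Non-probabilistic model: outputs a unitless signed margin, which must
# be calibrated separately (see the next section) to become a probability.
margin_model = SVC(kernel="rbf").fit(X, y)
print(margin_model.decision_function(X[:1])[0])  # distance to boundary
```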
Calibrated Probability
Reporting a calibrated probability allows clinicians to relate the Score to the probability of the real-world outcome. For models such as Support Vector Classifiers (SVCs), which produce a non-probabilistic output, the calibrated probability of an AHI ≥ 5 is calculated using a Ridge Logistic Regression model trained on cross-validation data. This calibration model is embedded alongside the SVC and invoked at prediction time to provide a calibrated probability.
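A minimal sketch of this calibration scheme, assuming cross-validated decision scores as the calibration data; the synthetic AHI labelling and model settings are illustrative. Note that scikit-learn's LogisticRegression is L2-penalised (ridge) by default.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
ahi = np.abs(X[:, 0] * 4 + rng.normal(scale=2, size=300))
y = (ahi >= 5).astype(int)  # binary target: AHI >= 5

svc = SVC(kernel="rbf")

# Cross-validated margins avoid calibrating on scores the SVC has
# already fitted to, which would bias the calibration.
scores = cross_val_predict(svc, X, y, cv=5, method="decision_function")

# Ridge (L2-penalised) logistic regression maps margins to probabilities.
calibrator = LogisticRegression().fit(scores.reshape(-1, 1), y)

svc.fit(X, y)  # final SVC trained on all data

def calibrated_probability(x):
    s = svc.decision_function(x.reshape(1, -1))
    return calibrator.predict_proba(s.reshape(-1, 1))[0, 1]

print(f"P(AHI >= 5) = {calibrated_probability(X[0]):.2f}")
```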
Model Explanations
Feature importance values or explanations attempt to describe the impact of individual features on the model output. Care should be taken when interpreting these values, and consideration should be given to the real-world directionality of each feature when interpreting the sign of an explanation value.
Global Explanations
To estimate the overall importance of each feature to a prediction, Global Explanations are approximated using the KernelSHAP method. KernelSHAP fits a weighted linear model over perturbed feature coalitions; the coefficients of this model approximate how changes in each feature influence the model prediction.
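A minimal sketch using the shap package; the model, data, and the choice of mean absolute SHAP value as the global aggregation are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=100)
model = RandomForestRegressor(random_state=0).fit(X, y)

# KernelSHAP fits a weighted linear model over feature coalitions; its
# coefficients approximate each feature's Shapley value.
background = shap.sample(X, 20)  # small background set for speed
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:10])

# Averaging absolute per-instance values gives one common global ranking.
global_importance = np.abs(shap_values).mean(axis=0)
print(global_importance)
```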
Local Explanations
To understand model behaviour near a specific prediction, Local Interpretable Model-agnostic Explanations (LIME) with a K-LASSO approach are used. LASSO regression selects the K = 5 features that are most relevant locally. A linear model is then trained on perturbed samples drawn around the original instance, weighted by their proximity to it. The coefficients of this linear model form the Local Explanations, providing a sparse estimate of how the prediction changes with small, localised variations in the selected features.
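A minimal sketch using the lime package with its LASSO-path feature selection; the model, feature names, and class names are illustrative assumptions.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(8)],
    class_names=["AHI < 5", "AHI >= 5"],
    feature_selection="lasso_path",  # K-LASSO: LASSO path picks K features
    mode="classification",
)

# Perturbed samples are drawn around X[0] and weighted by proximity; a
# sparse linear model over the K = 5 selected features is fitted to them.
explanation = explainer.explain_instance(
    X[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # (feature condition, local coefficient) pairs
```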