Multivariate Gaussian Regression (Cholesky Decomposition)¶
In this example, we model and predict all parameters of a trivariate ($D=3$) Normal distribution. The conditional means $\mathbf{\mu}(x) \in \mathbb{R}^{D}$ and the conditional covariance matrix $\mathbf{\Sigma}(x) \in \mathbb{R}^{D \times D}$ are given as follows
$$ \mathbf{\mu}(x)=\begin{pmatrix}\mu_{1}(x) \\ \mu_{2}(x) \\ \mu_{3}(x)\end{pmatrix}, \qquad \qquad \mathbf{\Sigma}(x)= \begin{pmatrix} \sigma^{2}_{1}(x) & \rho_{1,2}(x)\sigma_{1}(x)\sigma_{2}(x) & \rho_{1,3}(x)\sigma_{1}(x)\sigma_{3}(x) \\ \rho_{2,1}(x)\sigma_{2}(x)\sigma_{1}(x) & \sigma^{2}_{2}(x) & \rho_{2,3}(x)\sigma_{2}(x)\sigma_{3}(x) \\ \rho_{3,1}(x)\sigma_{3}(x)\sigma_{1}(x) & \rho_{3,2}(x)\sigma_{3}(x)\sigma_{2}(x) & \sigma^{2}_{3}(x) \end{pmatrix} $$
To ensure positive definiteness of $\mathbf{\Sigma}(\cdot)$, the $D(D + 1)/2$ entries of the covariance matrix must satisfy specific conditions. For the bivariate case, this can be achieved by applying an exponential function to the variances and a suitable transformation, e.g., the hyperbolic tangent, to restrict the coefficient of correlation to $\rho \in [-1,1]$. In higher-dimensional settings, however, where all moments are modelled as functions of covariates, ensuring positive definiteness of the covariance matrix becomes challenging, since joint restrictions on the elements are necessary. A computationally more tractable approach to ensure positive definiteness is based on the Cholesky decomposition, which uniquely decomposes the covariance matrix as follows
$$ \mathbf{\Sigma}(x) = \mathbf{L}(x) \mathbf{L}^{\prime}(x) $$
where $\mathbf{L}(\cdot) \in \mathbb{R}^{D \times D}$ is a lower triangular matrix. For $\mathbf{\Sigma}(\cdot)$ to be positive definite, the $D$ diagonal elements $\ell_{ii}$ of $\mathbf{L}(\cdot)$ need to be strictly positive, whereas all $D(D-1)/2$ off-diagonal elements $\ell_{ij}$ can take on any value. For the trivariate case, the Cholesky factor $\mathbf{L}(\cdot)$ is given as follows
$$ \mathbf{L}(x)= \begin{pmatrix} \exp\big(\ell_{11}(x)\big) & 0 & 0 \\ \ell_{21}(x) & \exp\big(\ell_{22}(x)\big) & 0 \\ \ell_{31}(x) & \ell_{32}(x) & \exp\big(\ell_{33}(x)\big)\\ \end{pmatrix} $$
Given the usefulness of the Cholesky decomposition, instead of estimating the entries of $\mathbf{\Sigma}(\cdot)$ directly, XGBoostLSS estimates the Cholesky factor $\mathbf{L}(\cdot)$ and then uses it to construct $\mathbf{\Sigma}(\cdot)$. However, in contrast to the original formulation of $\mathbf{\Sigma}(\cdot)$, the elements of $\mathbf{L}(\cdot)$ do not have any direct interpretation. Since XGBoostLSS is based on a one vs. all estimation strategy, where a separate tree is grown for each distributional parameter, estimating many parameters for a large dataset can become computationally expensive. For more details, we refer to our related paper März, Alexander (2022), Multi-Target XGBoostLSS Regression.
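As an illustration (not part of XGBoostLSS itself), the following minimal NumPy sketch shows why this construction always yields a valid covariance matrix; the unconstrained values for the $\ell_{ij}$ are arbitrary placeholders.
import numpy as np
# Unconstrained Cholesky entries for a single observation (arbitrary placeholder values)
l_diag = np.array([0.1, -0.3, 0.2])      # diagonal entries l_11, l_22, l_33
l_offdiag = np.array([0.5, -0.2, 0.8])   # off-diagonal entries l_21, l_31, l_32
L = np.zeros((3, 3))
L[np.diag_indices(3)] = np.exp(l_diag)   # exp() ensures a strictly positive diagonal
L[np.tril_indices(3, k=-1)] = l_offdiag  # off-diagonals may take on any value
Sigma = L @ L.T                          # Sigma = L L' is positive definite by construction
print(np.linalg.eigvalsh(Sigma))         # all eigenvalues are strictly positive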
Imports¶
from xgboostlss.model import *
from xgboostlss.distributions.MVN import *
from xgboostlss.datasets.data_loader import load_simulated_multivariate_gaussian_data
from sklearn.model_selection import train_test_split
import xgboost as xgb      # used below for DMatrix construction
import numpy as np         # used below for splitting the data
import torch               # used below to seed the sampling from the predicted distribution
import pandas as pd
import multiprocessing
import plotnine
from plotnine import *
plotnine.options.figure_size = (15, 8)
n_cpu = multiprocessing.cpu_count()
Data¶
data_sim = load_simulated_multivariate_gaussian_data()
# Create 60%, 20%, 20% split for train, validation and test
train, validate, test = np.split(data_sim.sample(frac=1,random_state=123), [int(0.6*len(data_sim)), int(0.8*len(data_sim))])
# Train
x_train = train.filter(regex="x")
y_train = train.filter(regex="y").values
n_targets = y_train.shape[1]
dtrain = xgb.DMatrix(x_train, label=y_train, nthread=n_cpu)
# Validation
x_eval = validate.filter(regex="x")
y_eval = validate.filter(regex="y").values
deval = xgb.DMatrix(x_eval, label=y_eval, nthread=n_cpu)
# Test
x_test = test.filter(regex="x")
y_test = test.filter(regex="y").values
dtest = xgb.DMatrix(x_test, nthread=n_cpu)
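As a quick sanity check, we can inspect the split sizes and the number of target columns (the exact row counts depend on the size of the simulated dataset):
# Verify the 60% / 20% / 20% split and the number of targets (y1, y2, y3)
print(len(train), len(validate), len(test))
print(n_targets)  # 3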
Distribution Selection¶
# Specifies a multivariate Normal distribution, using the Cholesky decomposition. See ?MVN for details.
xgblss = XGBoostLSS(
MVN(D=n_targets, # Specifies the number of targets
stabilization="None", # Options are "None", "MAD", "L2".
response_fn="exp", # Function to transform the lower-triangular factor of the covariance, e.g., "exp" or "softplus".
loss_fn="nll" # Loss function, i.e., nll.
)
)
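For the trivariate case, the MVN distribution comprises $D + D(D+1)/2 = 3 + 6 = 9$ distributional parameters (three locations plus six Cholesky entries), each of which is modelled with its own set of trees. Assuming the param_dict attribute is already populated at construction (it is queried again in the SHAP section below), the parameters can be inspected as follows:
# Number and names of the distributional parameters: 3 locations + 6 Cholesky entries
print(len(xgblss.dist.param_dict))   # 9
print(list(xgblss.dist.param_dict.keys()))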
Hyper-Parameter Optimization¶
Any XGBoost hyperparameter can be tuned, where the structure of the parameter dictionary needs to be as follows:
- Float/Int sample_type
- {"param_name": ["sample_type", low, high, log]}
- sample_type: str, Type of sampling, e.g., "float" or "int"
- low: int or float, Lower endpoint of the range of suggested values
- high: int or float, Upper endpoint of the range of suggested values
- log: bool, Flag to sample the value from the log domain or not
- Example: {"eta": "float", low=1e-5, high=1, log=True]}
- Categorical sample_type
- {"param_name": ["sample_type", ["choice1", "choice2", "choice3", "..."]]}
- sample_type: str, Type of sampling, here "categorical"
- choice1, choice2, choice3, ...: str, Possible choices for the parameter
- Example: {"booster": ["categorical", ["gbtree", "dart"]]}
- For parameters without tunable choice (this is needed if tree_method = "gpu_hist" and gpu_id needs to be specified)
- {"param_name": ["none", [value]]},
- param_name: str, Name of the parameter
- value: int, Value of the parameter
- Example: {"gpu_id": ["none", [0]]}
Depending on which parameters are optimized, it might happen that some of them are not used, e.g., when {"booster": ["categorical", ["gbtree", "gblinear"]]} and {"max_depth": ["int", {"low": 1, "high": 10, "log": False}]} are specified, max_depth is not used when gblinear is sampled, since it has no such argument.
param_dict = {
"eta": ["float", {"low": 1e-5, "high": 1, "log": True}],
"max_depth": ["int", {"low": 1, "high": 10, "log": False}],
"gamma": ["float", {"low": 1e-8, "high": 40, "log": True}],
"subsample": ["float", {"low": 0.2, "high": 1.0, "log": False}],
"colsample_bytree": ["float", {"low": 0.2, "high": 1.0, "log": False}],
"min_child_weight": ["float", {"low": 1e-8, "high": 500, "log": True}],
}
np.random.seed(123)
opt_param = xgblss.hyper_opt(param_dict,
dtrain,
num_boost_round=100, # Number of boosting iterations.
nfold=5, # Number of cv-folds.
early_stopping_rounds=20, # Number of early-stopping rounds
max_minutes=120, # Time budget in minutes, i.e., stop study after the given number of minutes.
n_trials=20, # The number of trials. If this argument is set to None, there is no limitation on the number of trials.
silence=False, # Controls the verbosity of the trial, i.e., user can silence the outputs of the trial.
seed=123, # Seed used to generate cv-folds.
hp_seed=None # Seed for random number generator used in the Bayesian hyperparameter search.
)
[I 2023-06-21 15:39:06,449] A new study created in memory with name: XGBoostLSS Hyper-Parameter Optimization
[I 2023-06-21 15:39:17,255] Trial 0 finished with value: 5162.937597599999 and parameters: {'eta': 0.20893454787281918, 'max_depth': 10, 'gamma': 2.117653907946101e-07, 'subsample': 0.3778242409123365, 'colsample_bytree': 0.7646652449605631, 'min_child_weight': 0.0068161671789635555, 'booster': 'gbtree'}. Best is trial 0 with value: 5162.937597599999.
[I 2023-06-21 15:40:00,473] Trial 1 finished with value: 5615.221093800001 and parameters: {'eta': 2.0070247688452328e-05, 'max_depth': 8, 'gamma': 0.010154272096580971, 'subsample': 0.4095786549905741, 'colsample_bytree': 0.3173248698114902, 'min_child_weight': 5.266646213105856e-07, 'booster': 'gbtree'}. Best is trial 0 with value: 5162.937597599999.
[I 2023-06-21 15:40:23,144] Trial 2 finished with value: 4844.8836914 and parameters: {'eta': 0.053129283223949005, 'max_depth': 9, 'gamma': 0.011872025594766826, 'subsample': 0.6258677911742918, 'colsample_bytree': 0.5307037342612928, 'min_child_weight': 0.007088879352982488, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:41:01,850] Trial 3 finished with value: 5448.668457 and parameters: {'eta': 0.0009792045747793214, 'max_depth': 3, 'gamma': 0.005965701093393613, 'subsample': 0.26570560472634475, 'colsample_bytree': 0.5511661336690216, 'min_child_weight': 0.00841728555810122, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:41:45,467] Trial 4 finished with value: 5391.4463866 and parameters: {'eta': 0.0012827384462821868, 'max_depth': 4, 'gamma': 2.7160727410249663, 'subsample': 0.9204782279885162, 'colsample_bytree': 0.8491081744059874, 'min_child_weight': 3.1383290915998723e-07, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:42:32,043] Trial 5 finished with value: 5603.630371199999 and parameters: {'eta': 7.146246575427444e-05, 'max_depth': 6, 'gamma': 0.2964635108816084, 'subsample': 0.9097196785458086, 'colsample_bytree': 0.6069516489908919, 'min_child_weight': 4.318870728782616e-06, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:43:15,505] Trial 6 finished with value: 5617.1283204 and parameters: {'eta': 1.1899973708118953e-05, 'max_depth': 6, 'gamma': 5.352582892658812e-07, 'subsample': 0.46622819175454827, 'colsample_bytree': 0.554770428194767, 'min_child_weight': 0.0429520984148638, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:43:23,423] Trial 7 finished with value: 5441.223730399999 and parameters: {'eta': 0.8315618584272169, 'max_depth': 3, 'gamma': 3.296424115558854e-07, 'subsample': 0.29170766005063103, 'colsample_bytree': 0.6394250150688834, 'min_child_weight': 7.555101370585528e-06, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:44:07,731] Trial 8 finished with value: 5606.3771486 and parameters: {'eta': 5.832213700590699e-05, 'max_depth': 8, 'gamma': 1.145295949576388, 'subsample': 0.35811178726881004, 'colsample_bytree': 0.20918726709726698, 'min_child_weight': 0.02173794896694409, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:44:59,597] Trial 9 finished with value: 5372.5990234 and parameters: {'eta': 0.001337350234107957, 'max_depth': 9, 'gamma': 2.2578807601790856, 'subsample': 0.6656175415109962, 'colsample_bytree': 0.4409807088407116, 'min_child_weight': 0.6191950232548107, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:45:37,451] Trial 10 finished with value: 4963.552832200001 and parameters: {'eta': 0.020214616936099314, 'max_depth': 1, 'gamma': 5.4090428571824593e-05, 'subsample': 0.6242144518968669, 'colsample_bytree': 0.9711374188863923, 'min_child_weight': 253.58783025671636, 'booster': 'gbtree'}. Best is trial 2 with value: 4844.8836914.
[I 2023-06-21 15:46:16,217] Trial 11 finished with value: 4787.162793 and parameters: {'eta': 0.03250325203864636, 'max_depth': 1, 'gamma': 7.634813502335532e-05, 'subsample': 0.6213498556806237, 'colsample_bytree': 0.9866300303809099, 'min_child_weight': 90.86326858347493, 'booster': 'gbtree'}. Best is trial 11 with value: 4787.162793.
[I 2023-06-21 15:46:53,909] Trial 12 finished with value: 4837.803710800001 and parameters: {'eta': 0.030840557287903813, 'max_depth': 1, 'gamma': 0.00011318123467778797, 'subsample': 0.7061200760432924, 'colsample_bytree': 0.9797397123588379, 'min_child_weight': 380.963041039335, 'booster': 'gbtree'}. Best is trial 11 with value: 4787.162793.
[I 2023-06-21 15:47:32,084] Trial 13 finished with value: 5170.946582 and parameters: {'eta': 0.01026561801626179, 'max_depth': 1, 'gamma': 5.032482917050433e-05, 'subsample': 0.7538363610149874, 'colsample_bytree': 0.9776216229391541, 'min_child_weight': 393.835371505871, 'booster': 'gbtree'}. Best is trial 11 with value: 4787.162793.
[I 2023-06-21 15:48:11,659] Trial 14 finished with value: 4286.694922000001 and parameters: {'eta': 0.06058920928573687, 'max_depth': 2, 'gamma': 2.8407158704437237e-05, 'subsample': 0.5214068113552733, 'colsample_bytree': 0.8185136492497096, 'min_child_weight': 8.847572679915343, 'booster': 'gbtree'}. Best is trial 14 with value: 4286.694922000001.
[I 2023-06-21 15:48:32,658] Trial 15 finished with value: 4316.5522458 and parameters: {'eta': 0.11477631898078135, 'max_depth': 3, 'gamma': 1.0516481702425134e-05, 'subsample': 0.5197030773890023, 'colsample_bytree': 0.8448440840546746, 'min_child_weight': 3.1313768586638613, 'booster': 'gbtree'}. Best is trial 14 with value: 4286.694922000001.
[I 2023-06-21 15:48:46,484] Trial 16 finished with value: 4370.0099608 and parameters: {'eta': 0.19875480939916101, 'max_depth': 4, 'gamma': 1.0109162985021869e-08, 'subsample': 0.5022808667297098, 'colsample_bytree': 0.7866642009834267, 'min_child_weight': 4.345993799264355, 'booster': 'gbtree'}. Best is trial 14 with value: 4286.694922000001.
[I 2023-06-21 15:49:27,147] Trial 17 finished with value: 4908.095703199999 and parameters: {'eta': 0.006579646550613866, 'max_depth': 3, 'gamma': 5.35438766328752e-06, 'subsample': 0.5351827661440116, 'colsample_bytree': 0.7084637390592092, 'min_child_weight': 3.6842525005877325, 'booster': 'gbtree'}. Best is trial 14 with value: 4286.694922000001.
[I 2023-06-21 15:49:42,850] Trial 18 finished with value: 4520.944531200001 and parameters: {'eta': 0.121762230472375, 'max_depth': 5, 'gamma': 0.000831583405088257, 'subsample': 0.2101234444274785, 'colsample_bytree': 0.8632708900034219, 'min_child_weight': 0.00038829224114168875, 'booster': 'gbtree'}. Best is trial 14 with value: 4286.694922000001.
[I 2023-06-21 15:49:53,846] Trial 19 finished with value: 4471.4682618 and parameters: {'eta': 0.6702579419423851, 'max_depth': 2, 'gamma': 7.1961916336615635e-06, 'subsample': 0.5463181007708755, 'colsample_bytree': 0.6886127484164956, 'min_child_weight': 0.4353567596302621, 'booster': 'gbtree'}. Best is trial 14 with value: 4286.694922000001.
Hyper-Parameter Optimization successfully finished.
  Number of finished trials: 20
  Best trial:
    Value: 4286.694922000001
    Params:
      eta: 0.06058920928573687
      max_depth: 2
      gamma: 2.8407158704437237e-05
      subsample: 0.5214068113552733
      colsample_bytree: 0.8185136492497096
      min_child_weight: 8.847572679915343
      booster: gbtree
    opt_rounds: 100
Model Training¶
np.random.seed(123)
opt_params = opt_param.copy()
n_rounds = opt_params["opt_rounds"]
del opt_params["opt_rounds"]
# Add evaluation set
eval_set = [(dtrain,"train"), (deval,"evaluation")]
eval_result = {}
# Train Model with optimized hyperparameters
xgblss.train(
opt_params,
dtrain,
num_boost_round=n_rounds,
evals=eval_set,
evals_result=eval_result,
verbose_eval=False
)
# Note that XGBoostLSS uses NLL.sum() instead of NLL.mean() for training, so that train-nll and evaluation-nll are not directly comparable across datasets of different size. Hence we manually scale them by the number of observations.
n_train = y_train.shape[0]
n_eval = y_eval.shape[0]
eval_df = pd.DataFrame.from_dict({"train": np.array(eval_result["train"]["nll"]) / n_train,
"evaluation": np.array(eval_result["evaluation"]["nll"]) / n_eval
})
eval_df["iter"] = eval_df.index + 1
eval_df = eval_df.melt(id_vars="iter")
(
ggplot(eval_df,
aes(x="iter",
y="value",
color="variable")) +
geom_line() +
labs(title="XGBoostLSS Train and Eval Loss",
x="Iteration",
y="NLL") +
theme_bw(base_size=15) +
theme(legend_position="bottom",
legend_title = element_blank()
)
)
Prediction¶
# Set seed for reproducibility
torch.manual_seed(123)
# Number of samples to draw from predicted distribution
n_samples = 1000
quant_sel = [0.05, 0.95] # Quantiles to calculate from predicted distribution
# Sample from predicted distribution
pred_samples = xgblss.predict(dtest,
pred_type="samples",
n_samples=n_samples,
seed=123)
# Calculate quantiles from predicted distribution
pred_quantiles = xgblss.predict(dtest,
pred_type="quantiles",
n_samples=n_samples,
quantiles=quant_sel)
# Returns predicted distributional parameters
pred_params = xgblss.predict(dtest,
pred_type="parameters")
pred_samples.head()
| | target | y_sample0 | y_sample1 | y_sample2 | y_sample3 | y_sample4 | y_sample5 | y_sample6 | y_sample7 | y_sample8 | ... | y_sample990 | y_sample991 | y_sample992 | y_sample993 | y_sample994 | y_sample995 | y_sample996 | y_sample997 | y_sample998 | y_sample999 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | y1 | 0.414075 | 0.491355 | -2.204463 | 1.118536 | 0.799598 | -0.808953 | 0.503679 | 0.536301 | 0.460093 | ... | 0.216323 | -0.137377 | 0.882786 | -0.200797 | 1.599728 | 0.097518 | -0.303666 | 0.963509 | 0.540205 | 1.019249 |
1 | y1 | -0.515316 | -0.208298 | 0.245381 | -0.562435 | -1.043730 | -0.003584 | -0.329167 | -0.333352 | -1.254039 | ... | 1.035885 | -0.496455 | -0.377378 | -0.116228 | -1.116618 | -0.365776 | -0.970788 | -0.095413 | -0.392687 | 0.167141 |
2 | y1 | 0.087613 | -1.432872 | 0.006057 | 1.032893 | -0.461051 | 1.409876 | -0.622818 | -0.431813 | 1.375422 | ... | 1.426233 | 1.586068 | -0.430848 | 0.804778 | 2.282571 | 0.122337 | 1.021088 | -0.141569 | 1.032713 | 1.377607 |
3 | y1 | -0.742849 | 0.773548 | 1.330858 | -0.962315 | -0.727351 | 0.049406 | -0.036508 | 0.444999 | 0.539441 | ... | -0.178081 | -0.245807 | -0.766496 | 0.888358 | 1.313139 | -0.147749 | 0.589192 | 0.082883 | 0.712869 | -0.539290 |
4 | y1 | 0.255881 | -0.859476 | 1.528095 | 0.457150 | 0.242182 | 0.528674 | 0.876584 | -2.237432 | -0.371091 | ... | -0.486414 | 1.580334 | -1.094439 | 1.003691 | 0.804117 | -0.377214 | 0.846216 | -0.036490 | -0.103703 | 0.802419 |
5 rows × 1001 columns
pred_samples.tail()
| | target | y_sample0 | y_sample1 | y_sample2 | y_sample3 | y_sample4 | y_sample5 | y_sample6 | y_sample7 | y_sample8 | ... | y_sample990 | y_sample991 | y_sample992 | y_sample993 | y_sample994 | y_sample995 | y_sample996 | y_sample997 | y_sample998 | y_sample999 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5995 | y3 | 1.413094 | 0.381546 | -0.212279 | 1.741293 | 0.991063 | 1.171732 | 2.288861 | 0.900078 | 0.664901 | ... | 0.806882 | 0.874526 | -0.217397 | 3.533134 | -0.164479 | 2.194977 | 2.570879 | 0.730399 | 0.973593 | -1.497213 |
5996 | y3 | 2.158102 | 1.394563 | 3.242994 | 0.213498 | 1.474806 | 0.283532 | 2.497092 | 1.044021 | 2.512460 | ... | -0.022285 | 2.522223 | -1.983054 | 2.284115 | 1.332664 | 0.264736 | 3.344530 | 3.071056 | 1.990921 | 1.795844 |
5997 | y3 | 0.505555 | 2.405000 | 1.607352 | 0.736530 | 3.229559 | -0.403253 | 0.122343 | 1.399570 | 3.189266 | ... | 1.941640 | 1.041186 | 0.176443 | 1.341864 | 1.205734 | 1.706667 | 1.127436 | 0.343830 | 2.762605 | 1.585057 |
5998 | y3 | 5.170230 | 2.698060 | 0.592193 | 2.146092 | 1.544515 | -0.511664 | 1.150948 | 0.950602 | 3.765107 | ... | 0.761528 | 1.445096 | 1.633901 | -0.146798 | 1.251341 | 0.813707 | -0.067860 | 1.498555 | 0.740474 | 2.235666 |
5999 | y3 | 2.086720 | 0.383206 | 0.542014 | 4.445408 | 2.756653 | -0.367652 | 2.138504 | -0.753445 | -0.308533 | ... | 2.104867 | 1.980759 | 1.712319 | 1.649312 | -0.272408 | 2.084510 | 0.941733 | 2.559954 | 3.854292 | 2.692068 |
5 rows × 1001 columns
pred_quantiles.head()
| | target | quant_0.05 | quant_0.95 |
|---|---|---|---|
0 | y1 | -0.863731 | 1.285580 |
1 | y1 | -1.129383 | 0.988352 |
2 | y1 | -1.402305 | 2.073464 |
3 | y1 | -1.129074 | 1.714211 |
4 | y1 | -1.169656 | 1.407705 |
pred_params.head()
| | location_1 | location_2 | location_3 | scale_1 | scale_2 | scale_3 | rho_12 | rho_13 | rho_23 |
|---|---|---|---|---|---|---|---|---|---|
0 | 0.194270 | 1.213351 | 0.962265 | 0.651525 | 1.578509 | 1.142826 | 0.748443 | 0.251087 | 0.674835 |
1 | -0.105770 | 0.614664 | 1.621492 | 0.696492 | 1.823852 | 1.627558 | 0.881418 | 0.625260 | 0.575634 |
2 | 0.317337 | 0.471097 | 1.540956 | 1.045922 | 1.659427 | 1.070353 | 0.696792 | 0.721034 | 0.468325 |
3 | 0.297752 | 0.471097 | 1.540956 | 0.872619 | 1.324082 | 1.104253 | 0.873266 | 0.689329 | 0.461507 |
4 | 0.116206 | 0.600470 | 1.767256 | 0.778658 | 1.614878 | 1.371813 | 0.763025 | 0.766575 | 0.576094 |
SHAP Interpretability¶
Since XGBoostLSS estimates the covariance matrix $\mathbf{\Sigma}(x) = \mathbf{L}(x) \mathbf{L}^{\prime}(x)$ via the Cholesky factors, interpretability is only sensible for the location parameters. The following parameters have been estimated:
list(xgblss.dist.param_dict.keys())
['location_1', 'location_2', 'location_3', 'scale_tril_diag_1', 'scale_tril_offdiag_12', 'scale_tril_diag_2', 'scale_tril_offdiag_13', 'scale_tril_offdiag_23', 'scale_tril_diag_3']
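While the raw Cholesky entries listed above are not directly interpretable, the back-transformed standard deviations and correlations returned by pred_params can be used to assemble the predicted covariance matrix for any observation. A minimal sketch for the first test observation, using the column names shown in pred_params.head() above:
# Rebuild Sigma(x) for the first test observation from the predicted
# standard deviations and correlations (columns of pred_params above)
row = pred_params.iloc[0]
scales = row[["scale_1", "scale_2", "scale_3"]].values
rho = np.array([[1.0, row["rho_12"], row["rho_13"]],
                [row["rho_12"], 1.0, row["rho_23"]],
                [row["rho_13"], row["rho_23"], 1.0]])
Sigma_hat = np.outer(scales, scales) * rho   # Sigma_ij = rho_ij * sigma_i * sigma_j
print(Sigma_hat)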
# Partial Dependence Plot of how x acts on location_2
xgblss.plot(x_test,
parameter="location_2",
feature="x",
plot_type="Partial_Dependence")
# Feature Importance of location_2 parameter
xgblss.plot(x_test,
parameter="location_2",
plot_type="Feature_Importance")
True vs. Predicted Distributional Parameters¶
In the following figure, we compare the true parameters of the multivariate Gaussian with those predicted by XGBoostLSS. The figure shows that the estimated parameters closely match the true ones.
dist_params = pred_params.columns.tolist()
pred_params["x"] = test["x"].values
drop_cols = [resp for resp in list(test.columns) if "y" in resp]
plot_df_actual = pd.melt(test.drop(columns=drop_cols),
id_vars="x",
value_vars=dist_params)
plot_df_actual["type"] = "ACTUAL"
plot_df_fitted = pd.melt(pred_params,
id_vars="x",
value_vars=dist_params)
plot_df_fitted["type"] = "FIT"
plot_df = pd.concat([plot_df_actual, plot_df_fitted])
plot_df["variable"] = plot_df.variable.str.upper()
plot_params = (ggplot(plot_df,
aes(x="x",
y="value",
color="type")) +
geom_line(size=1.1) +
facet_wrap("variable",
# ncol=2,
scales="free") +
labs(title="Parameters of Trivariate Normal estimated with XGBoostLSS using Cholesky-Decomposition of Covariance-Matrix\n",
x="",
y="") +
theme_bw(base_size=15) +
theme(legend_position="bottom",
legend_title = element_blank()
)
)
print(plot_params)
Actual vs. Predicted¶
Since we predict the entire conditional distribution, we can overlay the point predictions with predicted densities, from which we can also derive quantiles of interest. We use the first response $y_{1}$ as an example.
y_pred = []
n_examples = 16
q_sel = [0.05, 0.95]
y_sel=0
samples_arr = pred_samples.drop(columns="target").values.reshape(n_targets,-1,n_samples)
for i in range(n_examples):
y_samples = pd.DataFrame(samples_arr[y_sel,i,:].reshape(-1,1), columns=["PREDICT_DENS"])
y_samples["PREDICT_POINT"] = y_samples["PREDICT_DENS"].mean()
y_samples["PREDICT_Q05"] = y_samples["PREDICT_DENS"].quantile(q=q_sel[0])
y_samples["PREDICT_Q95"] = y_samples["PREDICT_DENS"].quantile(q=q_sel[1])
y_samples["ACTUAL"] = y_test[i,y_sel]
y_samples["obs"]= f"Obervation {i+1}"
y_pred.append(y_samples)
pred_df = pd.melt(pd.concat(y_pred, axis=0), id_vars="obs")
pred_df["obs"] = pd.Categorical(pred_df["obs"], categories=[f"Obervation {i+1}" for i in range(n_examples)])
df_actual, df_pred_dens, df_pred_point, df_q05, df_q95 = [x for _, x in pred_df.groupby("variable")]
plot_pred = (
ggplot(pred_df,
aes(color="variable")) +
stat_density(df_pred_dens,
aes(x="value"),
size=1.1) +
geom_point(df_pred_point,
aes(x="value",
y=0),
size=1.4) +
geom_point(df_actual,
aes(x="value",
y=0),
size=1.4) +
geom_vline(df_q05,
aes(xintercept="value",
fill="variable",
color="variable"),
linetype="dashed",
size=1.1) +
geom_vline(df_q95,
aes(xintercept="value",
fill="variable",
color="variable"),
linetype="dashed",
size=1.1) +
facet_wrap("obs",
scales="free",
ncol=4) +
labs(title="Predicted vs. Actual \n",
x = "") +
theme_bw(base_size=15) +
scale_fill_brewer(type="qual", palette="Dark2") +
theme(legend_position="bottom",
legend_title = element_blank()
)
)
print(plot_pred)