
Basic Walkthrough - Gaussian Regression


In this example, we model and predict all parameters of a univariate Normal distribution. Recall that distributional regression models and predicts all parameters $\theta_{ik}$, $k=1, \ldots, K$, of a distribution $\mathcal{D}$ as a function of covariates:

\begin{equation} y_{i} \stackrel{ind}{\sim} \mathcal{D} \begin{pmatrix} h_{1}\bigl(\theta_{i1}(x_{i})\bigr) = \eta_{i1} \\ h_{2}\bigl(\theta_{i2}(x_{i})\bigr) = \eta_{i2} \\ \vdots \\ h_{K}\bigl(\theta_{iK}(x_{i})\bigr) = \eta_{iK} \end{pmatrix}, \quad i=1, \ldots, N, \end{equation}

where $h_{k}(\cdot)$ transforms each distributional parameter to its appropriate scale. For the univariate Normal case, the above specializes to $y_{i} \stackrel{ind}{\sim} \mathcal{N}\bigl(\mu_{i}(x_{i}), \sigma_{i}(x_{i})\bigr)$. Since $\mu_{i}(\cdot) \in \mathbb{R}$ while the standard deviation cannot be negative, $h_{k}(\cdot)$ is applied to $\sigma_{i}(\cdot)$ only. Typical choices are the exponential or the softplus function, sketched below.
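
To make this concrete, here is a minimal sketch (assuming NumPy; the function names are illustrative and not part of the LightGBMLSS API) of how the two response functions map an unconstrained predictor $\eta$ to a valid standard deviation $\sigma > 0$:

import numpy as np

def exp_response(eta):
    # Exponential response: sigma = exp(eta) > 0 for any real eta.
    return np.exp(eta)

def softplus_response(eta):
    # Softplus response: sigma = log(1 + exp(eta)) > 0; grows only
    # linearly for large eta and is less prone to overflow than exp.
    return np.log1p(np.exp(eta))

eta = np.array([-2.0, 0.0, 3.0])   # unconstrained raw predictions
print(exp_response(eta))           # [ 0.135  1.     20.086]
print(softplus_response(eta))      # [ 0.127  0.693   3.049]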

Imports

First, we import the necessary functions.

In [2]:
from lightgbmlss.model import *
from lightgbmlss.distributions.Gaussian import *
from lightgbmlss.datasets.data_loader import load_simulated_gaussian_data
from scipy.stats import norm

import plotnine
from plotnine import *
plotnine.options.figure_size = (12, 8)

Data

The data is simulated from a Gaussian distribution, where $x_{true}$ is the only informative feature and all others are noise variables (a sketch of the data-generating process follows the list):

  • $\mu(x_{true}) = 10$
  • $\sigma(x_{true}) = 1 + 4 \cdot \bigl((0.3 < x_{true}) \& (x_{true} < 0.5)\bigr) + 2 \cdot (x_{true} > 0.7)$
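
For illustration, a hedged sketch of this data-generating process (the sample size and the number of noise covariates are assumptions; the actual dataset ships with LightGBMLSS via load_simulated_gaussian_data):

import numpy as np
import pandas as pd

rng = np.random.default_rng(123)
n = 7000  # assumed sample size

x_true = rng.uniform(0, 1, n)
# Constant location and piecewise-constant scale, as defined above
mu = np.full(n, 10.0)
sigma = 1 + 4 * ((0.3 < x_true) & (x_true < 0.5)) + 2 * (x_true > 0.7)
y = rng.normal(loc=mu, scale=sigma)

sim_df = pd.DataFrame({"y": y, "x_true": x_true})
for k in range(10):  # assumed number of pure-noise covariates
    sim_df[f"x_noise{k+1}"] = rng.uniform(0, 1, n)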

We first load the simulated dataset, filter for the target and covariates, and then create the lgb.Dataset. LightGBMLSS is designed to closely resemble the usage of LightGBM, ensuring ease of adoption and full compatibility.

In [3]:
train, test = load_simulated_gaussian_data()

X_train, y_train = train.filter(regex="x"), train["y"].values
X_test, y_test = test.filter(regex="x"), test["y"].values

dtrain = lgb.Dataset(X_train, label=y_train)

Distribution Selection

Next, we specify a Gaussian distribution. By modifying the specification below, the user can impose alternative distributional assumptions, choosing from a wide range of parametric univariate distributions as well as Normalizing Flows (an alternative specification is sketched after the following code cell). Each distribution exposes the following function arguments:

  • stabilization: specifies the stabilization method for the Gradient and Hessian. Options are None, MAD and L2.
  • response_fn: specifies $h_{k}(\cdot)$ and transforms the distributional parameter to the correct support. Here, we specify an exponential for $\sigma_{i}(\cdot)$ only.
  • loss_fn: specifies the loss function used for training. Options are nll (negative log-likelihood) or crps (continuous ranked probability score).

For additional details, see ?Gaussian.

In [4]:
lgblss = LightGBMLSS(
    Gaussian(stabilization="None",  
             response_fn="exp",      
             loss_fn="nll"          
            )
)
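Swapping in a different distribution only changes this specification. As a hedged sketch, assuming the Gamma module (see the Gamma Regression example) mirrors the Gaussian interface shown above:

# Hedged sketch: module path and arguments are assumed to mirror the
# Gaussian case and may differ in the actual API.
from lightgbmlss.distributions.Gamma import Gamma

lgblss_gamma = LightGBMLSS(
    Gamma(stabilization="None",   # no gradient/Hessian stabilization
          response_fn="exp",      # keeps both Gamma parameters positive
          loss_fn="nll")
)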

Hyper-Parameter Optimization

Any LightGBM hyperparameter can be tuned. The parameter dictionary needs to be structured as follows:

- Float/Int sample_type
    - {"param_name": ["sample_type", {"low": low, "high": high, "log": log}]}
        - sample_type: str, Type of sampling, either "float" or "int"
        - low: int or float, Lower endpoint of the range of suggested values
        - high: int or float, Upper endpoint of the range of suggested values
        - log: bool, Flag to sample the value from the log domain or not
    - Example: {"eta": ["float", {"low": 1e-5, "high": 1, "log": True}]}

- Categorical sample_type
    - {"param_name": ["sample_type", ["choice1", "choice2", "choice3", "..."]]}
        - sample_type: str, must be "categorical"
        - choice1, choice2, choice3, ...: str, Possible choices for the parameter
    - Example: {"boosting": ["categorical", ["gbdt", "dart"]]}

- For parameters without a tunable choice (needed, e.g., when training on GPU and gpu_id must be specified)
    - {"param_name": ["none", [value]]}
        - param_name: str, Name of the parameter
        - value: int, Value of the parameter
    - Example: {"gpu_id": ["none", [0]]}
In [5]:
param_dict = {
    "eta":                      ["float", {"low": 1e-5,   "high": 1,     "log": True}],
    "max_depth":                ["int",   {"low": 1,      "high": 10,    "log": False}],
    "num_leaves":               ["int",   {"low": 255,    "high": 255,   "log": False}],  # set to constant for this example
    "min_data_in_leaf":         ["int",   {"low": 20,     "high": 20,    "log": False}],  # set to constant for this example
    "min_gain_to_split":        ["float", {"low": 1e-8,   "high": 40,    "log": False}],
    "min_sum_hessian_in_leaf":  ["float", {"low": 1e-8,   "high": 500,   "log": True}],
    "subsample":                ["float", {"low": 0.2,    "high": 1.0,   "log": False}],
    "feature_fraction":         ["float", {"low": 0.2,    "high": 1.0,   "log": False}],
    "boosting":                 ["categorical", ["gbdt"]],
}

np.random.seed(123)
opt_param = lgblss.hyper_opt(param_dict,
                             dtrain,
                             num_boost_round=100,        # Number of boosting iterations.
                             nfold=5,                    # Number of cv-folds.
                             early_stopping_rounds=20,   # Number of early-stopping rounds.
                             max_minutes=10,             # Time budget in minutes, i.e., stop study after the given number of minutes.
                             n_trials=30,                # The number of trials. If set to None, there is no limit on the number of trials.
                             silence=True,               # Controls the verbosity of the trial, i.e., user can silence the outputs of the trial.
                             seed=123,                   # Seed used to generate cv-folds.
                             hp_seed=123                 # Seed for random number generator used in the Bayesian hyperparameter search.
                            )
Hyper-Parameter Optimization successfully finished.
  Number of finished trials:  30
  Best trial:
    Value: 2916.681710293905
    Params: 
    eta: 0.04397442966488241
    max_depth: 3
    num_leaves: 255
    min_data_in_leaf: 20
    min_gain_to_split: 9.411991003859868
    min_sum_hessian_in_leaf: 462.25410582290624
    subsample: 0.9517916433037966
    feature_fraction: 0.5063785131055764
    boosting: gbdt
    opt_rounds: 97

Model Training

We use the optimized hyper-parameters and train the model.

In [6]:
np.random.seed(123)

opt_params = opt_param.copy()
n_rounds = opt_params["opt_rounds"]
del opt_params["opt_rounds"]

# Train Model with optimized hyperparameters
lgblss.train(opt_params,
             dtrain,
             num_boost_round=n_rounds
             )

Prediction

As with a standard LightGBM model, we can now predict from the trained model. The following options are available:

  • samples: draws n_samples from the predicted distribution.
  • quantiles: calculates quantiles from the predicted distribution.
  • parameters: returns predicted distributional parameters.
In [7]:
# Set seed for reproducibility
torch.manual_seed(123)

# Number of samples to draw from predicted distribution
n_samples = 1000
quant_sel = [0.05, 0.95] # Quantiles to calculate from predicted distribution

# Sample from predicted distribution
pred_samples = lgblss.predict(X_test,
                              pred_type="samples",
                              n_samples=n_samples,
                              seed=123)

# Calculate quantiles from predicted distribution
pred_quantiles = lgblss.predict(X_test,
                                pred_type="quantiles",
                                n_samples=n_samples,
                                quantiles=quant_sel)

# Return predicted distributional parameters
pred_params = lgblss.predict(X_test,
                             pred_type="parameters")
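As a quick sanity check (a hedged sketch, not part of the original walkthrough), the output of pred_type="quantiles" should closely match empirical quantiles computed directly from the drawn samples, since both derive from the same fitted distribution:

# Empirical quantiles across the sampled predictions (one row per observation)
empirical_quantiles = pred_samples.quantile(q=quant_sel, axis=1).T
empirical_quantiles.columns = [f"quant_{q}" for q in quant_sel]
print(empirical_quantiles.head())  # compare with pred_quantiles.head()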
In [8]:
pred_samples.head()
Out[8]:
y_sample0 y_sample1 y_sample2 y_sample3 y_sample4 y_sample5 y_sample6 y_sample7 y_sample8 y_sample9 ... y_sample990 y_sample991 y_sample992 y_sample993 y_sample994 y_sample995 y_sample996 y_sample997 y_sample998 y_sample999
0 10.969691 8.598016 11.316233 11.196012 -0.772420 10.135987 14.128653 7.630763 12.698465 8.239548 ... 10.933374 11.149927 8.531628 9.486668 10.501742 9.149441 11.645898 7.640362 9.958486 6.289453
1 9.461757 9.600316 10.147589 12.562353 16.377707 10.912953 9.325396 9.916863 7.682811 17.506680 ... 10.509146 7.077166 2.610499 8.764830 16.280003 8.840451 8.440387 16.157856 9.217056 8.147191
2 9.655623 10.095434 10.497488 8.415718 10.877867 8.264465 10.300594 11.423710 10.838738 9.795403 ... 11.465590 10.625634 8.727895 9.756786 8.575561 9.242844 9.997345 10.150554 10.739549 9.458581
3 7.315505 15.109694 9.312259 -3.643276 12.262859 8.238321 7.009054 6.200121 3.878844 13.135740 ... 19.848660 15.314698 8.589100 12.168641 8.016486 13.352606 9.907011 8.088248 12.577316 7.965759
4 11.067125 10.200111 9.035689 8.689579 10.475836 8.348648 6.953860 7.018825 11.546731 4.925195 ... 9.588341 12.277424 9.194149 13.164710 11.918127 7.771739 10.544153 9.177776 7.839355 11.506623

5 rows × 1000 columns

In [9]:
pred_quantiles.head()
Out[9]:
quant_0.05 quant_0.95
0 5.405031 14.896293
1 5.210804 15.007315
2 8.298658 11.832828
3 2.962531 17.087187
4 4.807709 14.909888
In [10]:
pred_params.head()
Out[10]:
loc scale
0 9.984035 2.921586
1 9.979074 2.909918
2 9.979074 1.065636
3 9.979074 4.529788
4 9.979074 3.121158

SHAP Interpretability

To gain a deeper understanding of the data-generating process, LightGBMLSS also provides feature importance and partial dependence plots based on the Shapley-value approach.

In [11]:
# Partial Dependence Plot of how x_true acts on the scale (standard deviation)
lgblss.plot(X_test,
            parameter="scale",
            feature="x_true",
            plot_type="Partial_Dependence")
[Figure: partial dependence of the scale parameter on x_true]
In [12]:
# Feature Importance of scale parameter
lgblss.plot(X_test,
            parameter="scale",
            plot_type="Feature_Importance")
[Figure: SHAP feature importance for the scale parameter]

Plot of Actual vs. Predicted Quantiles

In the following, we plot the predicted quantiles (blue) and compare them to the actual quantiles (dashed black).

In [13]:
np.random.seed(123)

###
# Actual Quantiles
###
q1 = norm.ppf(quant_sel[0], loc = 10, scale = 1 + 4*((0.3 < test["x_true"].values) & (test["x_true"].values < 0.5)) + 2*(test["x_true"].values > 0.7))
q2 = norm.ppf(quant_sel[1], loc = 10, scale = 1 + 4*((0.3 < test["x_true"].values) & (test["x_true"].values < 0.5)) + 2*(test["x_true"].values > 0.7))
test["quant"] = np.where(test["y"].values < q1, 0, np.where(test["y"].values < q2, 1, 2))
test["alpha"] = np.where(test["y"].values <= q1, 1, np.where(test["y"].values >= q2, 1, 0))
df_quantiles = test[test["alpha"] == 1]

# Lower bound: order the three unique 5%-quantile values and arrange them to
# follow the step pattern of sigma over the x_true segments
# [0, 0.3), [0.3, 0.5), [0.5, 0.7), [0.7, 1]
yl = list(set(q1))
yl.sort()
yl = [yl[2], yl[0], yl[2], yl[1], yl[1]]
sfunl = pd.DataFrame({"x_true": [0, 0.3, 0.5, 0.7, 1], "y": yl})

# Upper bound: same arrangement for the 95%-quantile values
yu = list(set(q2))
yu.sort()
yu = [yu[0], yu[2], yu[0], yu[1], yu[1]]
sfunu = pd.DataFrame({"x_true": [0, 0.3, 0.5, 0.7, 1], "y": yu})

###
# Predicted Quantiles
###
test["lb"] = pred_quantiles.iloc[:,0]
test["ub"] = pred_quantiles.iloc[:,1]

###
# Plot
###
(ggplot(test,
        aes("x_true",
            "y")) + 
 geom_point(alpha = 0.2, color = "black", size = 2) + 
 theme_bw(base_size=15) +
 theme(legend_position="none",
       plot_title = element_text(hjust = 0.5),
       plot_subtitle = element_text(hjust = 0.5)) +
 labs(title = "LightGBMLSS Regression - Simulated Data Example",
      subtitle = "Comparison of Actual (black) vs. Predicted Quantiles (blue)",
      x="x")  + 
 geom_line(aes("x_true",
               "ub"),
           size = 1,
           color = "blue", 
           alpha = 0.7) + 
 geom_line(aes("x_true",
               "lb"),
           size = 1,
           color = "blue", 
           alpha = 0.7) + 
 geom_point(df_quantiles,
            aes("x_true",
                "y"), 
            color = "red", 
            alpha = 0.7,
            size = 2) + 
 geom_step(sfunl,
           aes("x_true",
               "y"), 
           size = 1, 
           linetype = "dashed")  + 
 geom_step(sfunu,
           aes("x_true",
               "y"), 
           size = 1, 
           linetype = "dashed") 
)
[Figure: actual (dashed black) vs. predicted (blue) quantiles over x_true]
Out[13]:
<Figure Size: (1200 x 800)>

True vs. Predicted Distributional Parameters

In the following figure, we compare the true parameters of the Gaussian with those predicted by LightGBMLSS. The estimated parameters closely match the true ones (recall that the location parameter $\mu=10$ is simulated as a constant).

In [14]:
pred_params["x_true"] = X_test["x_true"].values
dist_params = list(lgblss.dist.param_dict.keys())

# Data with actual values
plot_df_actual = pd.melt(test[["x_true"] + dist_params],
                         id_vars="x_true",
                         value_vars=dist_params)
plot_df_actual["type"] = "TRUE"

# Data with predicted values
plot_df_predt = pd.melt(pred_params[["x_true"] + dist_params],
                        id_vars="x_true",
                        value_vars=dist_params)
plot_df_predt["type"] = "PREDICT"

plot_df = pd.concat([plot_df_predt, plot_df_actual])

plot_df["variable"] = plot_df.variable.str.upper()
plot_df["type"] = pd.Categorical(plot_df["type"], categories = ["PREDICT", "TRUE"])

(ggplot(plot_df,
        aes(x="x_true",
            y="value",
            color="type")) +
 geom_line(size=1.1) + 
 facet_wrap("variable",
            scales="free") + 
 labs(title="Parameters of univariate Gaussian predicted with LightGBMLSS",
      x="",
      y="") + 
 theme_bw(base_size=15) + 
 theme(legend_position="bottom",
       plot_title = element_text(hjust = 0.5),
       legend_title = element_blank())
)
pred_params["x_true"] = X_test["x_true"].values dist_params = list(lgblss.dist.param_dict.keys()) # Data with actual values plot_df_actual = pd.melt(test[["x_true"] + dist_params], id_vars="x_true", value_vars=dist_params) plot_df_actual["type"] = "TRUE" # Data with predicted values plot_df_predt = pd.melt(pred_params[["x_true"] + dist_params], id_vars="x_true", value_vars=dist_params) plot_df_predt["type"] = "PREDICT" plot_df = pd.concat([plot_df_predt, plot_df_actual]) plot_df["variable"] = plot_df.variable.str.upper() plot_df["type"] = pd.Categorical(plot_df["type"], categories = ["PREDICT", "TRUE"]) (ggplot(plot_df, aes(x="x_true", y="value", color="type")) + geom_line(size=1.1) + facet_wrap("variable", scales="free") + labs(title="Parameters of univariate Gaussian predicted with LightGBMLSS", x="", y="") + theme_bw(base_size=15) + theme(legend_position="bottom", plot_title = element_text(hjust = 0.5), legend_title = element_blank()) )
[Figure: true vs. predicted distributional parameters over x_true]
Out[14]:
<Figure Size: (1200 x 800)>

Actual vs. Predicted

Since we predict the entire conditional distribution, we can overlay the point predictions with the predicted densities, from which we can also derive quantiles of interest.

In [15]:
y_pred = []

n_examples = 9

for i in range(n_examples):    
    y_samples = pd.DataFrame(pred_samples.values[i,:].reshape(-1,1), columns=["PREDICT_DENSITY"])
    y_samples["PREDICT_POINT"] = y_samples["PREDICT_DENSITY"].mean()
    y_samples["PREDICT_Q05"] = y_samples["PREDICT_DENSITY"].quantile(q=quant_sel[0])
    y_samples["PREDICT_Q95"] = y_samples["PREDICT_DENSITY"].quantile(q=quant_sel[1])
    y_samples["ACTUAL"] = y_test[i]
    y_samples["obs"]= f"Obervation {i+1}"
    y_pred.append(y_samples)
    
pred_df = pd.melt(pd.concat(y_pred, axis=0), id_vars="obs")
pred_df["obs"] = pd.Categorical(pred_df["obs"], categories=[f"Obervation {i+1}" for i in range(n_examples)])
df_actual, df_pred_dens, df_pred_point, df_q05, df_q95 = [x for _, x in pred_df.groupby("variable")]

plot_pred = (
    ggplot(pred_df,
           aes(color="variable")) + 
    stat_density(df_pred_dens,
                 aes(x="value"),
                 size=1.1) + 
    geom_point(df_pred_point,
               aes(x="value",
                   y=0),
               size=1.4) + 
    geom_point(df_actual,
               aes(x="value",
                   y=0),
               size=1.4) + 
    geom_vline(df_q05, 
               aes(xintercept="value",
                   fill="variable",
                   color="variable"),
               linetype="dashed",
               size=1.1) + 
    geom_vline(df_q95, 
               aes(xintercept="value",
                   fill="variable",
                   color="variable"),
               linetype="dashed",
               size=1.1) + 
    facet_wrap("obs",
               scales="free",
               ncol=3) + 
    labs(title="Predicted vs. Actual \n",
         x = "") + 
    theme_bw(base_size=15) +
    scale_fill_brewer(type="qual", palette="Dark2") + 
    theme(legend_position="bottom",
          plot_title = element_text(hjust = 0.5),
          legend_title = element_blank()
         )
)

print(plot_pred)
[Figure: predicted densities vs. actual values for nine test observations]
