This page provides a collection of methods that generate hierarchically coherent probabilistic distributions; that is, they produce samples of multivariate time series that satisfy the hierarchical linear constraints.
These methods extend the capabilities of the core.HierarchicalForecast class. Check their usage example
here.
1. Normality
Normality
Normality(S, P, y_hat, sigmah, W=None, seed=0, covariance_type='diagonal', residuals=None, shrinkage_ridge=_DEFAULT_SHRINKAGE_RIDGE)
Normality Probabilistic Reconciliation Class.
The Normality method leverages the linearity of the Gaussian distribution to
generate hierarchically coherent prediction distributions. This class is
meant to be used as the sampler input to other HierarchicalForecast reconciliation classes.
Given base forecasts under a normal distribution:
$$\hat{y}_{h} \sim \mathrm{N}(\hat{\mu}, \hat{W}_{h})$$
The reconciled forecasts are also normally distributed:
$$\tilde{y}_{h} \sim \mathrm{N}(S P \hat{\mu},\; S P \hat{W}_{h} P^{\intercal} S^{\intercal})$$
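The reconciled mean and covariance above can be computed directly with NumPy. The following is a minimal sketch of the math (not the library's internals), assuming a toy 3-series hierarchy with a bottom-up reconciliation matrix:

```python
import numpy as np

# Toy hierarchy: total = bottom_1 + bottom_2, with a bottom-up P.
S = np.array([[1.0, 1.0],   # total
              [1.0, 0.0],   # bottom_1
              [0.0, 1.0]])  # bottom_2
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])  # picks the bottom rows

mu_hat = np.array([10.0, 6.0, 4.0])  # base mean forecasts
W_h = np.diag([4.0, 1.0, 1.0])       # base forecast covariance

mu_tilde = S @ P @ mu_hat            # reconciled mean: S P mu_hat
W_tilde = S @ P @ W_h @ P.T @ S.T    # reconciled covariance

# The reconciled mean is coherent: the total equals the sum of the bottoms.
print(mu_tilde)  # -> [10.  6.  4.]
```

Note how the base covariance `W_h` only enters through its bottom block `P W_h P^T`, which the summing matrix then propagates to every aggregate level.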
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| S | Union[ndarray, spmatrix] | Summing matrix of size (base, bottom). | required |
| P | Union[ndarray, spmatrix] | Reconciliation matrix of size (bottom, base). | required |
| y_hat | ndarray | Point forecast values of size (base, horizon). | required |
| sigmah | ndarray | Forecast standard dev. of size (base, horizon). | required |
| W | Union[ndarray, spmatrix] | Hierarchical covariance matrix of size (base, base). Required when covariance_type='diagonal' (default). Ignored when covariance_type is 'full' or 'shrink' (covariance is computed from residuals instead). Default is None. | None |
| seed | int | Random seed for numpy generator's replicability. Default is 0. | 0 |
| covariance_type | Union[str, CovarianceType] | Type of covariance estimator; a string or CovarianceType enum. Options: 'diagonal' / CovarianceType.DIAGONAL uses the W matrix diagonal with correlation scaling (default, backward compatible; W is required); 'full' / CovarianceType.FULL uses the full empirical covariance from residuals (W is ignored; warning: may be non-positive-definite if n_series > n_observations); 'shrink' / CovarianceType.SHRINK uses the Schäfer-Strimmer shrinkage estimator (W is ignored; recommended for numerical stability with many series). Default is 'diagonal'. | 'diagonal' |
| residuals | ndarray | Insample residuals of size (base, obs). Required when covariance_type is 'full' or 'shrink'. Default is None. | None |
| shrinkage_ridge | float | Ridge parameter for the shrinkage covariance estimator. Only used when covariance_type='shrink'; a warning is issued if provided with other covariance types. Default is 2e-8. | _DEFAULT_SHRINKAGE_RIDGE |
Raises:
| Type | Description |
|---|---|
| ValueError | If covariance_type is invalid. |
| ValueError | If covariance_type='diagonal' and W is None. |
| ValueError | If covariance_type is 'full' or 'shrink' and residuals is None. |
| ValueError | If residuals shape doesn't match expected (base, obs). |
| ValueError | If residuals has fewer than 2 observations. |
| ValueError | If residuals is empty. |
| ValueError | If any series in residuals has all NaN values. |
Warns:
| Type | Description |
|---|---|
| UserWarning | If shrinkage_ridge is provided but covariance_type is not 'shrink'. |
| UserWarning | If W is provided but covariance_type is not 'diagonal' (W is ignored). |
| UserWarning | If any series has zero or near-zero variance (may affect correlation estimates). |
| UserWarning | If covariance_type='full' and n_series > n_observations (non-PSD risk). |
Examples:
>>> # Using diagonal covariance (default, backward compatible)
>>> normality = Normality(S=S, P=P, y_hat=y_hat, sigmah=sigmah, W=W)
>>> samples = normality.get_samples(num_samples=100)
>>> # Using full empirical covariance from residuals
>>> normality = Normality(
... S=S, P=P, y_hat=y_hat, sigmah=sigmah,
... covariance_type="full", residuals=residuals
... )
>>> # Using shrinkage covariance (recommended for stability)
>>> normality = Normality(
... S=S, P=P, y_hat=y_hat, sigmah=sigmah,
... covariance_type=CovarianceType.SHRINK, residuals=residuals
... )
Normality.get_samples
Normality Coherent Samples.
Obtains coherent samples under the Normality assumptions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_samples | int | Number of samples generated from the coherent distribution. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| samples | ndarray | Coherent samples of size (base, horizon, num_samples). |
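To see where the (base, horizon, num_samples) shape comes from, here is a hedged NumPy sketch of the sampling step: one multivariate normal draw per horizon step from the reconciled distribution. The names (`S`, `P`, `y_hat`, `W_h`) are illustrative, not the library's internals:

```python
import numpy as np

rng = np.random.default_rng(0)

S = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
P = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
y_hat = np.array([[10.0, 12.0], [6.0, 7.0], [4.0, 5.0]])  # (base, horizon)
W_h = np.diag([4.0, 1.0, 1.0])                            # base covariance

mu_tilde = S @ P @ y_hat           # reconciled means, (base, horizon)
W_tilde = S @ P @ W_h @ P.T @ S.T  # reconciled covariance, (base, base)

num_samples = 500
horizon = y_hat.shape[1]

# Draw num_samples per horizon step, then stack into (base, horizon, num_samples).
samples = np.stack(
    [rng.multivariate_normal(mu_tilde[:, t], W_tilde, size=num_samples).T
     for t in range(horizon)],
    axis=1,
)
print(samples.shape)  # -> (3, 2, 500)
```

Because `W_tilde` lives in the column space of `S`, every draw is coherent: the total row equals the sum of the bottom rows in each sample.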
2. Bootstrap
Bootstrap
Bootstrap(S, P, y_hat, y_insample, y_hat_insample, num_samples=100, seed=0, W=None)
Bootstrap Probabilistic Reconciliation Class.
This method goes beyond the normality assumption for the base forecasts:
it simulates future sample paths and uses them to generate
base sample paths that are later reconciled. This simple yet clever idea
makes it possible to generate coherent bootstrapped prediction intervals
for any reconciliation strategy. This class is meant to be used as the sampler
input to other HierarchicalForecast reconciliation classes.
Given a bootstrapped set of simulated sample paths:
$$(\hat{y}_{\tau}^{[1]}, \dots, \hat{y}_{\tau}^{[B]})$$
The reconciled sample paths allow for reconciled distributional forecasts:
$$(S P \hat{y}_{\tau}^{[1]}, \dots, S P \hat{y}_{\tau}^{[B]})$$
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| S | Union[ndarray, spmatrix] | Summing matrix of size (base, bottom). | required |
| P | Union[ndarray, spmatrix] | Reconciliation matrix of size (bottom, base). | required |
| y_hat | ndarray | Point forecast values of size (base, horizon). | required |
| y_insample | ndarray | Insample values of size (base, insample_size). | required |
| y_hat_insample | ndarray | Insample point forecasts of size (base, insample_size). | required |
| num_samples | int | Number of bootstrapped samples generated. | 100 |
| seed | int | Random seed for numpy generator's replicability. | 0 |
Bootstrap.get_samples
Bootstrap Sample Reconciliation Method.
Applies the Bootstrap sample reconciliation method as defined by Gamakumara (2020):
it generates independent sample paths and reconciles them.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_samples | int | Number of samples generated from the coherent distribution. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| samples | ndarray | Coherent samples of size (base, horizon, num_samples). |
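The bootstrap idea above can be sketched in a few lines of NumPy. This is a hedged illustration with assumed names (not the library's implementation): resample insample error columns jointly across series so their cross-sectional dependence is preserved, add them to the point forecasts, then reconcile each path with `S @ P`:

```python
import numpy as np

rng = np.random.default_rng(0)

S = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
P = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

y_insample = rng.normal(size=(3, 50))                      # (base, insample_size)
y_hat_insample = y_insample + rng.normal(scale=0.1, size=(3, 50))
y_hat = np.array([[10.0, 12.0], [6.0, 7.0], [4.0, 5.0]])   # (base, horizon)

residuals = y_insample - y_hat_insample                    # (base, insample_size)
num_samples, horizon = 200, y_hat.shape[1]

# Resample whole residual columns (all series at once) to keep their
# cross-sectional dependence, then add them to the point forecasts.
idx = rng.integers(0, residuals.shape[1], size=(num_samples, horizon))
base_paths = y_hat[None, :, :] + residuals[:, idx].transpose(1, 0, 2)

# Reconcile every path: result has shape (num_samples, base, horizon).
coherent_paths = np.einsum("ij,jk,skh->sih", S, P, base_paths)
samples = coherent_paths.transpose(1, 2, 0)                # (base, horizon, num_samples)
print(samples.shape)  # -> (3, 2, 200)
```

Since reconciliation is applied per path, any choice of `P` (BottomUp, MinTrace, etc.) yields coherent bootstrapped intervals, which is the point of the method.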
3. PERMBU
PERMBU(S, tags, y_hat, y_insample, y_hat_insample, sigmah, num_samples=None, seed=0, P=None)
PERMBU Probabilistic Reconciliation Class.
The PERMBU method combines empirical bottom-level marginal distributions
with empirical copula functions (describing bottom-level dependencies) to
generate aggregate-level distributions using BottomUp
reconciliation. The sample reordering technique in the PERMBU method reinjects
multivariate dependencies into independent bottom-level samples.
The method uses the insample residuals $\hat{\epsilon}_{i,t}$.
Algorithm:
- For all series, compute conditional marginal distributions.
- Compute residuals and obtain rank permutations.
- Obtain K samples from the bottom-level series predictions.
- Apply recursively through the hierarchical structure:
  - For a given aggregate series i and its children series:
    - Obtain the children's empirical joint distribution using sample-reordering copula.
    - From the children's joint distribution, obtain the aggregate series' samples.
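The sample-reordering step at the heart of the algorithm can be sketched as follows. This is a hedged illustration with made-up names, not the library's implementation: independent per-series samples are rearranged so their ranks match the rank permutation of the insample residuals, which reinjects the residuals' dependence structure before bottom-up aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bottom, n_obs, num_samples = 2, 100, 100

# Correlated residuals whose dependence we want to transfer.
z = rng.normal(size=n_obs)
residuals = np.stack([z + 0.3 * rng.normal(size=n_obs),
                      z + 0.3 * rng.normal(size=n_obs)])

# Independent samples from each bottom series' marginal.
indep = rng.normal(size=(n_bottom, num_samples))

# Rank permutations of the residuals (subsampled to num_samples columns).
cols = rng.choice(n_obs, size=num_samples, replace=False)
ranks = residuals[:, cols].argsort(axis=1).argsort(axis=1)

# Reorder: each series' sorted samples are placed at the positions
# given by that series' residual ranks, matching the rank pattern.
reordered = np.take_along_axis(np.sort(indep, axis=1), ranks, axis=1)

# Aggregate coherently (bottom-up): total = sum of reordered children.
total_samples = reordered.sum(axis=0)
```

The reordering only permutes each series' samples, so the marginals are untouched while the joint rank structure now mirrors that of the residuals.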
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| S | array | Summing matrix of size (base, bottom). | required |
| tags | dict[str, ndarray] | Each key is a level and each value its S indices. | required |
| y_hat | array | Point forecast values of size (base, horizon). | required |
| y_insample | array | Insample values of size (base, insample_size). | required |
| y_hat_insample | array | Insample point forecasts of size (base, insample_size). | required |
| sigmah | array | Forecast standard dev. of size (base, horizon). | required |
| num_samples | int | Number of normal prediction samples generated. Default is None. | None |
| seed | int | Random seed for numpy generator's replicability. Default is 0. | 0 |
| P | array | Reconciliation matrix of size (bottom, base). Default is None. | None |
PERMBU.get_samples
get_samples(num_samples=None)
PERMBU Sample Reconciliation Method.
Applies the PERMBU reconciliation method as defined by Taieb et al. (2017):
it generates independent base prediction samples, restores their multivariate
dependence using an estimated copula with sample reordering, and applies BottomUp
aggregation to the new samples.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_samples | int | Number of samples generated from the coherent distribution. Default is None. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| samples | ndarray | Coherent samples of size (base, horizon, num_samples). |
References