The most important training signal is the forecast error, which is the difference between the observed value $y_{\tau}$ and the prediction $\hat{y}_{\tau}$ at time $\tau$:

$$ e_{\tau} = y_{\tau} - \hat{y}_{\tau} \qquad \qquad \tau \in \{t+1,\dots,t+H \} $$

The evaluation metrics below summarize these forecast errors in different ways. The examples that follow compute each metric on synthetic series created with `generate_series` and check that the pandas and polars implementations agree.

```python
import numpy as np

from utilsforecast.data import generate_series
from utilsforecast.losses import (calibration, coverage, mae, mape, mase, mqloss, mse,
                                  quantile_loss, rmae, rmse, scaled_crps, smape)

models = ['model0', 'model1']
series = generate_series(10, n_models=2, level=[80])
series_pl = generate_series(10, n_models=2, level=[80], engine='polars')
```
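
The probabilistic metrics further down (`quantile_loss`, `mqloss`, `calibration`, `scaled_crps`) also use `quantiles`, `q_models` and `mq_models`, which the snippet above does not define. A minimal sketch of one way to set them up, assuming `generate_series` names the 80% interval columns `model0-lo-80` / `model0-hi-80` and treating those bounds as the 0.1 and 0.9 quantiles:

```python
# Assumed mapping from the 80% prediction-interval columns to quantiles.
# The '<model>-lo-80' / '<model>-hi-80' names are an assumption about how
# generate_series labels the interval columns created with level=[80].
quantiles = np.array([0.1, 0.9])
# one column per model and quantile, in the same order as `quantiles` (for mqloss, scaled_crps)
mq_models = {m: [f'{m}-lo-80', f'{m}-hi-80'] for m in models}
# one column per model for a single quantile (for quantile_loss, calibration)
q_models = {
    0.1: {m: f'{m}-lo-80' for m in models},
    0.9: {m: f'{m}-hi-80' for m in models},
}
```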

1. Scale-dependent Errors

Mean Absolute Error (MAE)

$$ \mathrm{MAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} |y_{\tau} - \hat{y}_{\tau}| $$


source

mae

```
mae(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
    id_col: str = 'unique_id', target_col: str = 'y')
```

*Mean Absolute Error (MAE)

MAE measures the relative prediction accuracy of a forecasting method by calculating the deviation between the prediction and the true value at a given time and averaging these deviations over the length of the series.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, actual values and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
def pd_vs_pl(pd_df, pl_df, models):
    """Check that the pandas and polars implementations give the same results."""
    np.testing.assert_allclose(
        pd_df[models].to_numpy(),
        pl_df.sort('unique_id').select(models).to_numpy(),
    )

pd_vs_pl(
    mae(series, models),
    mae(series_pl, models),
    models,
)
```
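
As a sanity check, the MAE formula can be reproduced by hand with pandas. This is only an illustrative sketch; it assumes, as the comparison above suggests, that `mae` returns one row per `unique_id` in sorted order:

```python
# Recompute MAE for 'model0' manually: mean absolute error per series id.
manual_mae = (series['y'] - series['model0']).abs().groupby(series['unique_id'], observed=True).mean()
np.testing.assert_allclose(
    manual_mae.to_numpy(),
    mae(series, models)['model0'].to_numpy(),
)
```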

Mean Squared Error

$$ \mathrm{MSE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} (y_{\tau} - \hat{y}_{\tau})^{2} $$


source

mse

```
mse(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
    id_col: str = 'unique_id', target_col: str = 'y')
```

*Mean Squared Error (MSE)

MSE measures the relative prediction accuracy of a forecasting method by calculating the squared deviation between the prediction and the true value at a given time and averaging these deviations over the length of the series.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, actual values and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    mse(series, models),
    mse(series_pl, models),
    models,
)
```

Root Mean Squared Error

$$ \mathrm{RMSE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \sqrt{\frac{1}{H} \sum^{t+H}_{\tau=t+1} (y_{\tau} - \hat{y}_{\tau})^{2}} $$


source

rmse

```
rmse(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
     id_col: str = 'unique_id', target_col: str = 'y')
```

*Root Mean Squared Error (RMSE)

RMSE measures the relative prediction accuracy of a forecasting method by calculating the squared deviation between the prediction and the observed value at a given time, averaging these deviations over the length of the series, and taking the square root of the result. The RMSE is on the same scale as the original time series, so comparing it across series is only meaningful if they share a common scale. RMSE has a direct connection to the L2 norm.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, actual values and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    rmse(series, models),
    rmse(series_pl, models),
    models,
)
```
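
Since RMSE is the square root of MSE, the two functions can also be cross-checked against each other:

```python
# RMSE should equal the element-wise square root of MSE for every series and model.
np.testing.assert_allclose(
    rmse(series, models)[models].to_numpy(),
    np.sqrt(mse(series, models)[models].to_numpy()),
)
```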

2. Percentage Errors

Mean Absolute Percentage Error

$$ \mathrm{MAPE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{|y_{\tau}|} $$


source

mape

```
mape(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
     id_col: str = 'unique_id', target_col: str = 'y')
```

*Mean Absolute Percentage Error (MAPE)

MAPE measures the relative prediction accuracy of a forecasting method by calculating the percentage deviation between the prediction and the observed value at a given time and averaging these deviations over the length of the series. The closer an observed value is to zero, the higher the penalty MAPE assigns to the corresponding error.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, actual values and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    mape(series, models),
    mape(series_pl, models),
    models,
)
```

Symmetric Mean Absolute Percentage Error

$$ \mathrm{SMAPE}_{2}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{|y_{\tau}|+|\hat{y}_{\tau}|} $$

source

smape

```
smape(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
      id_col: str = 'unique_id', target_col: str = 'y')
```

*Symmetric Mean Absolute Percentage Error (SMAPE)

SMAPE measures the relative prediction accuracy of a forecasting method by calculating the deviation between the prediction and the observed value, scaled by the sum of their absolute values at a given time, and averaging these deviations over the length of the series. This bounds SMAPE between 0% and 100%, which is desirable compared to plain MAPE, which may be undefined when the target is zero.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, actual values and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    smape(series, models),
    smape(series_pl, models),
    models,
)
```
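
The difference between MAPE and SMAPE is easiest to see near zero. A small, purely hypothetical numeric example (plain NumPy, not the library functions):

```python
# When an observed value is close to zero, the MAPE term explodes
# while the SMAPE term stays bounded by 1.
y_true = np.array([0.01, 10.0])
y_pred = np.array([1.0, 11.0])
mape_terms = np.abs(y_true - y_pred) / np.abs(y_true)                      # [99.0, 0.1]
smape_terms = np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred))  # [~0.98, ~0.048]
print(mape_terms.mean(), smape_terms.mean())  # ~49.55 vs ~0.51
```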

3. Scale-independent Errors

Mean Absolute Scaled Error

$$ \mathrm{MASE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}, \mathbf{\hat{y}}^{season}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{\mathrm{MAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{season}_{\tau})} $$


source

mase

```
mase(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
     seasonality: int, train_df: Union[pandas.DataFrame, polars.DataFrame],
     id_col: str = 'unique_id', target_col: str = 'y')
```

*Mean Absolute Scaled Error (MASE)

MASE measures the relative prediction accuracy of a forecasting method by comparing the mean absolute errors of the prediction against the mean absolute errors of the seasonal naive model. MASE is one of the components of the Overall Weighted Average (OWA) used in the M4 Competition.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, actuals and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| seasonality | int |  | Main frequency of the time series; Hourly 24, Daily 7, Weekly 52, Monthly 12, Quarterly 4, Yearly 1. |
| train_df | Union |  | Training dataframe with id and actual values. Must be sorted by time. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    mase(series, models, 7, series),
    mase(series_pl, models, 7, series_pl),
    models,
)
```
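
Conceptually, the denominator of MASE is the in-sample MAE of a seasonal naive forecast computed on `train_df`. A rough sketch of that idea for a single series, following the definition above rather than the library's exact implementation:

```python
# Illustrative MASE for one series: scale the forecast errors by the in-sample
# MAE of a seasonal naive forecast (lag = seasonality) on the training data.
def mase_single(y, y_hat, y_train, seasonality):
    scale = np.mean(np.abs(y_train[seasonality:] - y_train[:-seasonality]))
    return np.mean(np.abs(y - y_hat)) / scale
```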

Relative Mean Absolute Error

$$ \mathrm{RMAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}, \mathbf{\hat{y}}^{base}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{\mathrm{MAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{base}_{\tau})} $$


source

rmae

```
rmae(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
     baseline_models: List[str], id_col: str = 'unique_id', target_col: str = 'y')
```

*Relative Mean Absolute Error (RMAE)

Calculates the RMAE between two sets of forecasts (from two different forecasting methods). A value smaller than one implies that the forecast in the numerator is better than the forecast in the denominator.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, times, actuals and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| baseline_models | List |  | Columns that identify the baseline models' predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    rmae(series, models, list(reversed(models))),
    rmae(series_pl, models, list(reversed(models))),
    [f'{m1}_div_{m2}' for m1, m2 in zip(models, reversed(models))],
)
```
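
Because the denominator in the formula is constant over the horizon, RMAE reduces to the ratio of the two models' MAEs, which gives a simple cross-check (the `model0_div_model1` column name follows the pattern used in the comparison above):

```python
# RMAE of model0 vs model1 should equal MAE(model0) / MAE(model1) per series.
rmae_df = rmae(series, models, list(reversed(models)))
mae_df = mae(series, models)
np.testing.assert_allclose(
    rmae_df['model0_div_model1'].to_numpy(),
    (mae_df['model0'] / mae_df['model1']).to_numpy(),
)
```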

4. Probabilistic Errors

Quantile Loss

$$ \mathrm{QL}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{(q)}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \Big( (1-q)\,( \hat{y}^{(q)}_{\tau} - y_{\tau} )_{+} + q\,( y_{\tau} - \hat{y}^{(q)}_{\tau} )_{+} \Big) $$


source

quantile_loss

```
quantile_loss(df: Union[pandas.DataFrame, polars.DataFrame], models: Dict[str, str],
              q: float = 0.5, id_col: str = 'unique_id', target_col: str = 'y')
```

*Quantile Loss (QL)

QL measures the deviation of a quantile forecast. By weighting the absolute deviation asymmetrically, the loss pays more attention to either under- or over-estimation, depending on q. A common value for q is 0.5, which gives the deviation from the median.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, times, actuals and predictions. |
| models | Dict |  | Mapping from model name to the model predictions for the specified quantile. |
| q | float | 0.5 | Quantile for the predictions' comparison. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
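
No pandas-vs-polars comparison is shown above for `quantile_loss`; a minimal usage sketch, relying on the `q_models` mapping defined at the top (itself an assumption about the interval column names):

```python
pd_vs_pl(
    quantile_loss(series, q_models[0.1], q=0.1),
    quantile_loss(series_pl, q_models[0.1], q=0.1),
    models,
)
```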

Multi-Quantile Loss

$$ \mathrm{MQL}(\mathbf{y}_{\tau}, [\mathbf{\hat{y}}^{(q_{1})}_{\tau}, \dots, \mathbf{\hat{y}}^{(q_{n})}_{\tau}]) = \frac{1}{n} \sum_{q_{i}} \mathrm{QL}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{(q_{i})}_{\tau}) $$


source

mqloss

```
mqloss(df: Union[pandas.DataFrame, polars.DataFrame], models: Dict[str, List[str]],
       quantiles: numpy.ndarray, id_col: str = 'unique_id', target_col: str = 'y')
```

*Multi-Quantile Loss (MQL)

MQL calculates the average multi-quantile loss for a given set of quantiles, based on the absolute difference between predicted quantiles and observed values.

The limit behavior of MQL allows measuring the accuracy of a full predictive distribution $\mathbf{\hat{F}}_{\tau}$ with the continuous ranked probability score (CRPS). This can be achieved through a numerical integration technique that discretizes the quantiles and treats the CRPS integral with a left Riemann approximation, averaging over uniformly spaced quantiles.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, times, actuals and predictions. |
| models | Dict |  | Mapping from model name to the model predictions for each quantile. |
| quantiles | ndarray |  | Quantiles to compare against. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    mqloss(series, mq_models, quantiles=quantiles),
    mqloss(series_pl, mq_models, quantiles=quantiles),
    models,
)
```
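
According to the definitions above, MQL is the plain average of the individual quantile losses, so `mqloss` can be cross-checked against `quantile_loss`. This sketch assumes the implementation follows those formulas exactly (no extra weighting) and reuses the assumed `q_models` / `mq_models` mappings:

```python
# MQL over the 0.1 and 0.9 quantiles should match the average of the two quantile losses.
ql_lo = quantile_loss(series, q_models[0.1], q=0.1)
ql_hi = quantile_loss(series, q_models[0.9], q=0.9)
np.testing.assert_allclose(
    mqloss(series, mq_models, quantiles=quantiles)[models].to_numpy(),
    (ql_lo[models].to_numpy() + ql_hi[models].to_numpy()) / 2,
)
```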

Coverage


source

coverage

```
coverage(df: Union[pandas.DataFrame, polars.DataFrame], models: List[str],
         level: int, id_col: str = 'unique_id', target_col: str = 'y')
```

Coverage of y with y_hat_lo and y_hat_hi.

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, times, actuals and predictions. |
| models | List |  | Columns that identify the models' predictions. |
| level | int |  | Confidence level used for intervals. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    coverage(series, models, 80),
    coverage(series_pl, models, 80),
    models,
)
```

Calibration


source

calibration

```
calibration(df: Union[pandas.DataFrame, polars.DataFrame], models: Dict[str, str],
            id_col: str = 'unique_id', target_col: str = 'y')
```

Fraction of y that is lower than the model’s predictions.

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, times, actuals and predictions. |
| models | Dict |  | Mapping from model name to the model predictions. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    calibration(series, q_models[0.1]),
    calibration(series_pl, q_models[0.1]),
    models,
)
```

CRPS

$$ \mathrm{sCRPS}(\hat{F}_{\tau}, \mathbf{y}_{\tau}) = \frac{2}{N} \sum_{i} \int^{1}_{0} \frac{\mathrm{QL}(\hat{F}_{i,\tau}, y_{i,\tau})_{q}}{\sum_{i} | y_{i,\tau} |} \, dq $$

Where $\hat{F}_{\tau}$ is an estimated multivariate distribution and $y_{i,\tau}$ are its realizations.


source

scaled_crps

```
scaled_crps(df: Union[pandas.DataFrame, polars.DataFrame], models: Dict[str, List[str]],
            quantiles: numpy.ndarray, id_col: str = 'unique_id', target_col: str = 'y')
```

*Scaled Continuous Ranked Probability Score (sCRPS)

Calculates a scaled variation of the CRPS, as proposed by Rangapuram (2021), to measure the accuracy of predicted quantiles y_hat compared to the observation y. This metric averages percentage-weighted absolute deviations as defined by the quantile losses.*

|  | Type | Default | Details |
|---|---|---|---|
| df | Union |  | Input dataframe with id, times, actuals and predictions. |
| models | Dict |  | Mapping from model name to the model predictions for each quantile. |
| quantiles | ndarray |  | Quantiles to compare against. |
| id_col | str | unique_id | Column that identifies each series. |
| target_col | str | y | Column that contains the target. |
| **Returns** | Union |  | Dataframe with one row per id and one column per model. |
```python
pd_vs_pl(
    scaled_crps(series, mq_models, quantiles),
    scaled_crps(series_pl, mq_models, quantiles),
    models,
)
```