
## 6.4 Smoothing parameters' bounds

Similar to the pure additive ADAM ETS, it is possible to impose different types of restrictions on the smoothing parameters of pure multiplicative models. However, in this case, the classical and the usual restrictions become more reasonable from the point of view of the model itself, while the derivation of the admissible bounds becomes a challenging task. Consider the ETS(M,N,N) model, for which the level is updated using the following relation:
\[\begin{equation}
l_t = l_{t-1} (1 + \alpha\epsilon_t) = l_{t-1} (1-\alpha + \alpha(1+\epsilon_t)).
\tag{6.15}
\end{equation}\]
As discussed previously, the main benefit of pure multiplicative models is in dealing with positive data. It is therefore reasonable to assume that \((1 + \epsilon_t)>0\), which implies that the actual values will always be positive and that each component of the model should be positive as well. This means that \(\alpha(1 + \epsilon_t)>0\) whenever \(\alpha>0\), which implies that \((1-\alpha + \alpha(1+\epsilon_t))>1-\alpha\), or equivalently, based on (6.10), that \((1 + \alpha\epsilon_t)>1-\alpha\) should always hold. In order for the model to make sense, the condition \((1 + \alpha\epsilon_t)>0\) should hold as well, ensuring that the level is always positive. Connecting the two inequalities, this is guaranteed when \(1-\alpha \geq 0\), i.e. when \(\alpha \leq 1\). Furthermore, in order for the level to be positive irrespective of the specific error on observation \(t\), the smoothing parameter should be non-negative. So, in general, the bounds \([0, 1]\) guarantee that the ETS(M,N,N) model will only produce positive values. The two boundary cases make sense as well, because the level in (6.15) stays positive in both: for \(\alpha=0\) the model reduces to the global level, while for \(\alpha=1\) it is equivalent to a Random Walk. Using similar logic, it can be shown that the **classical restriction** \(\alpha, \beta, \gamma \in [0, 1]\) guarantees that the model will always produce positive values.
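To make this argument more tangible, here is a small illustrative simulation (a sketch of mine, not code from the book): it draws multiplicative errors with \((1+\epsilon_t)>0\) guaranteed by construction (a lognormal distribution for \(1+\epsilon_t\) is assumed purely for illustration), updates the level according to (6.15), and confirms that for any \(\alpha \in [0, 1]\) the level remains strictly positive.

```python
import random


def simulate_level(alpha, n=1000, l0=100.0, seed=42):
    """Simulate the ETS(M,N,N) level recursion from (6.15):
    l_t = l_{t-1} * (1 + alpha * eps_t),
    where (1 + eps_t) > 0 is guaranteed by drawing it from a
    lognormal distribution (an illustrative choice, not the only one).
    Returns the final level after n updates."""
    rng = random.Random(seed)
    level = l0
    for _ in range(n):
        one_plus_eps = rng.lognormvariate(0.0, 0.2)  # always > 0
        eps = one_plus_eps - 1.0
        # Since alpha in [0, 1] and (1 + eps) > 0, we have
        # 1 + alpha * eps = (1 - alpha) + alpha * (1 + eps) > 0.
        level *= 1.0 + alpha * eps
    return level


# The level stays strictly positive for any alpha in [0, 1]:
for a in (0.0, 0.3, 1.0):
    assert simulate_level(a) > 0
```

Note that \(\alpha=0\) leaves the level unchanged at its initial value, matching the global level interpretation above, while \(\alpha=1\) makes each update a pure multiplicative shock, matching the Random Walk case.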

The more restrictive condition of the **usual bounds**, discussed in Section 4.6, makes sense as well, although it might be stricter than necessary. It rests on a different idea: guaranteeing that the model exhibits averaging properties.

Finally, the **admissible bounds** might still make sense for pure multiplicative models, but the condition on the parameters becomes more complicated and implies that the distribution of the error term is trimmed from below in order to satisfy the classical restrictions discussed above. Very crudely, the conventional restriction from pure additive models can be used as an approximation to the proper admissible bounds, given the limit (6.5), but this should be done with care, given the discussion above.
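To see where the trimming of the error distribution comes from, note from (6.15) that the level stays positive only if \(1 + \alpha\epsilon_t > 0\), i.e. \(\epsilon_t > -1/\alpha\). For \(\alpha \leq 1\), the assumption \((1+\epsilon_t)>0\) already satisfies this, but for \(\alpha > 1\) the lower tail of the error distribution must additionally be cut above \(-1\). The following sketch (my illustration, not from the book) just encodes this threshold:

```python
def positivity_threshold(alpha):
    """Lower bound on eps_t that keeps 1 + alpha * eps_t > 0 in (6.15).

    The level stays positive iff eps_t > -1 / alpha."""
    return -1.0 / alpha


# alpha = 1: the threshold coincides with the (1 + eps_t) > 0 assumption.
assert positivity_threshold(1.0) == -1.0
# alpha > 1: the threshold lies above -1, so the error distribution
# must be trimmed from below at -1/alpha.
assert positivity_threshold(1.5) > -1.0
# alpha < 1: the threshold lies below -1, so (1 + eps_t) > 0 suffices.
assert positivity_threshold(0.5) < -1.0
```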

From the practical point of view, pure multiplicative models typically have low smoothing parameters, close to zero, because they rely on the multiplication of components rather than on their addition, so even the classical restriction might seem too wide in many situations.