A key feature of particle filters is their sequential nature, which supports **online estimation**: after conditioning on observations :math:`\{y_1, \dots, y_N\}`, when a new observation :math:`y_{N+1}` is received, we only need to account for this new observation — there is no need to restart from :math:`y_1`.
This can be achieved in pypfilt_ by **defining a cache file** for each scenario.
For example, we can define cache files for the two forecasting scenarios that have been introduced in this tutorial by adding the following lines to the :ref:`scenario file <lorenz63-multi>`:
.. code-block:: toml

   [scenario.forecast.files]
   cache_file = "cache_forecast.hdf5"

   [scenario.forecast_regularised.files]
   cache_file = "cache_forecast_regularised.hdf5"

When a cache file is defined, :func:`pypfilt.forecast` will save the particles at each forecast time.
For example, the following call will save the particles at :math:`t = 10`, :math:`t = 15`, and :math:`t = 20`:
.. code-block:: python

   forecast_times = [10, 15, 20]
   pypfilt.forecast(context, forecast_times)

These saved particles can then be reused when generating new forecasts:
.. code-block:: python

   forecast_times = [25, 30]
   pypfilt.forecast(context, forecast_times)

.. note::

   The above example will also save the particles at :math:`t = 25` and :math:`t = 30`.

:func:`pypfilt.forecast` identifies the most recently saved particles that are **consistent with the new observations**.
In the above example, if the observations at time :math:`t = 20` were updated before generating the forecast at time :math:`t = 25`, pypfilt_ will resume from the particles saved at time :math:`t = 15`.
This ensures that when past data are modified or corrected, these changes will be reflected in the generated forecasts.
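To illustrate the idea behind this consistency check, the following sketch finds the most recent cached snapshot whose recorded observations still agree with the current data set. This is a simplified illustration of the behaviour described above, not pypfilt's actual implementation; the ``snapshots`` structure and the ``latest_consistent_snapshot`` function are hypothetical.

.. code-block:: python

   def latest_consistent_snapshot(snapshots, observations):
       """Return ``(time, particles)`` for the most recent cached
       snapshot whose stored observations agree with the current data,
       or ``None`` if no snapshot is consistent.

       ``snapshots`` maps each saved time to a pair of (particles,
       observations received up to that time); ``observations`` maps
       each observation time to its current value.
       """
       for time in sorted(snapshots, reverse=True):
           particles, cached_obs = snapshots[time]
           # The observations the snapshot *would* have seen, given
           # the current data.
           current_obs = {t: y for t, y in observations.items() if t <= time}
           if cached_obs == current_obs:
               return time, particles
       return None

For example, if snapshots were saved at :math:`t = 10, 15, 20` and the observation at :math:`t = 18` is later revised, the snapshot at :math:`t = 20` is no longer consistent, so the search falls back to :math:`t = 15`.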