
Store and Forward Operation

Store and Forward is a built-in reliability feature of the Timebase Collector

Overview

Store and Forward is a built-in reliability feature of the Timebase Collector. When the Historian is temporarily unavailable — due to a network outage, a planned maintenance window, or a service restart — the Collector automatically buffers data in memory rather than dropping it. When the Historian becomes available again, the buffered data is sent in order. No data is lost, and no configuration is required to enable this behaviour.

How Store and Forward works

During a Historian outage

  1. The Collector's data source (OPC UA, MQTT, etc.) continues delivering data points to the Collector.
  2. The Collector detects that the Historian is unreachable and activates Store and Forward for the affected dataset.
  3. Incoming data accumulates in an in-memory buffer. You can see the buffer size growing in the Collector UI.
  4. The Collector continues attempting to reconnect to the Historian in the background.
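The buffering behaviour described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names (the class and methods are hypothetical), not the Collector's actual implementation:

```python
from collections import deque

class StoreAndForwardBuffer:
    """Minimal sketch of the Collector's outage behaviour (names are hypothetical)."""

    def __init__(self):
        self.buffer = deque()   # in-memory store-and-forward buffer
        self.active = False     # True while Store and Forward is engaged

    def on_data_point(self, point, historian_online):
        # The data source keeps delivering points regardless of Historian state.
        if historian_online and not self.buffer:
            return point        # normal path: deliver the point immediately
        # Historian unreachable (or a backlog is pending): buffer, don't drop.
        self.active = True
        self.buffer.append(point)
        return None

    def buffer_size(self):
        # Mirrors the "buffer size in points" shown in the Collector UI.
        return len(self.buffer)
```

Note that once buffering starts, new points keep joining the buffer so that delivery order is preserved.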

When the Historian becomes available again

  1. The Collector detects the Historian is back online and deactivates Store and Forward for that dataset.
  2. The buffered data is sent to the Historian in chronological order.
  3. Because the data arrives with timestamps earlier than the most recent stored point, the Historian treats it as late data and routes it through the .late file merge process. See the Late Data Handling article.
  4. During replay, the Dataset.Writes.Late system tag will be non-zero. It returns to zero when the backfill is complete.
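The replay step can be sketched as a simple drain in timestamp order. The `send` callable here is a hypothetical stand-in for the Historian write call:

```python
def replay(buffered, send):
    """Drain buffered (timestamp, value) points to the Historian in
    chronological order. Because the timestamps are older than the most
    recent stored point, the Historian treats them as late data."""
    delivered = []
    for ts, value in sorted(buffered, key=lambda p: p[0]):
        send(ts, value)              # stand-in for the Historian write call
        delivered.append((ts, value))
    return delivered
```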

Buffer persistence across Collector restarts

If the Collector is stopped gracefully (controlled shutdown) while buffering data, the buffer is automatically saved to disk before shutdown. On next startup, the Collector reads the saved buffer and resumes delivery without losing any buffered points.
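The save-on-shutdown / load-on-startup cycle can be sketched as follows. The JSON format and function names are illustrative assumptions; the real .state file format is not documented here, and the real Collector deletes the file only after the buffered data is successfully delivered:

```python
import json
import os

def save_buffer_on_shutdown(buffered, path):
    """Sketch of graceful-shutdown persistence: write the buffer to disk
    only if there is something to save."""
    if not buffered:
        return False            # no file is written when the buffer is empty
    with open(path, "w") as f:
        json.dump(buffered, f)
    return True

def load_buffer_on_startup(path):
    """Read the saved buffer so delivery can resume. In this sketch the file
    is removed on load; the real Collector removes it after delivery."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        buffered = json.load(f)
    os.remove(path)
    return buffered
```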

The Collector creates the following files in its Data folder to persist the buffer:

  • <dataset>.state (the main send buffer): data that was waiting to be delivered to the Historian when the Collector shut down. Written on graceful shutdown if the buffer is non-empty, read on startup to resume delivery, and deleted automatically after successful delivery.
  • <dataset>.faulted (the failed-to-send buffer): data that was sent to the Historian but received an error response. Written when the Historian returns errors during delivery, read on startup, and deleted automatically after the data is successfully delivered on the next attempt.

Both files are stored in the Collector's Data path:

  • Windows: C:\ProgramData\Flow Software\Timebase\Collector\Data\
  • Docker: /collector/data (inside the Collector Docker Volume)

You will not normally see these files during steady-state operation — they are created and deleted automatically. If you see them on disk while the Collector is running, Store and Forward is active or recovery is in progress.
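If you want to check for these files yourself, a short script can scan the Data folder. The function name is hypothetical; it simply looks for the two extensions described above:

```python
from pathlib import Path

def persistence_files(data_dir):
    """Return any .state / .faulted files in the Collector's Data folder.
    Seeing them while the Collector is running means Store and Forward is
    active or recovery is in progress."""
    d = Path(data_dir)
    return sorted(p.name for p in d.glob("*")
                  if p.suffix in (".state", ".faulted"))
```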

Monitoring Store and Forward

Two indicators show Store and Forward activity:

  • Collector UI — Buffer size: The Collector browser page shows the current buffer size in points for each dataset. A growing buffer means the Historian is unreachable. A shrinking buffer means replay is in progress.
  • Dataset.Writes.Late system tag: Non-zero during replay. Returns to zero when the backfill is complete. Monitor this tag in Explorer or via the Historian API to confirm replay has finished.
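A monitoring script might poll the tag until it returns to zero. The `read_late_count` callable is a hypothetical stand-in for whatever Historian API call reads Dataset.Writes.Late; the real endpoint is not specified here:

```python
def replay_finished(read_late_count, max_polls=10):
    """Poll the Dataset.Writes.Late system tag until it returns to zero.

    read_late_count: a callable returning the current tag value
    (a stand-in for the actual Historian API read).
    """
    for _ in range(max_polls):
        if read_late_count() == 0:
            return True         # backfill complete
    return False                # still replaying after max_polls checks
```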

Troubleshooting

Symptom: Buffer size keeps growing and never drains after the Historian comes back online.
Likely cause: The Collector is not successfully reconnecting to the Historian; it may still be seeing connection errors.
What to do: Check the Collector log files for connection errors. Confirm the Historian is running and reachable from the Collector's host at the configured address and port. In Docker, confirm the Historian container name matches what the Collector is configured to target.

Symptom: .state or .faulted files remain on disk after the Collector has been running for a while.
Likely cause: Store and Forward is still active, or delivery is failing.
What to do: These files are deleted automatically once all buffered data is successfully delivered. If they persist, check the Collector log files for delivery errors. Do not delete these files manually while the Collector is running; doing so will lose the buffered data.

Symptom: After a Collector restart, some data from before the restart is missing from the Historian.
Likely cause: The Collector was stopped unexpectedly (power loss, process kill) rather than gracefully, so the in-memory buffer was not saved to disk.
What to do: The buffer is saved on graceful shutdown only. Always stop the Collector via the Services Starter or docker compose down rather than killing the process. If data is missing due to a crash, it cannot be recovered from the Collector; check whether the data source can retransmit it.

Symptom: The Collector UI shows "Store and Forward active" but the Historian appears healthy.
Likely cause: The Collector's target Historian address or port does not match the running Historian.
What to do: Open the Collector config, find the Target Historian entry, and confirm the host and port are correct. In Docker, the hostname should be the Historian container name (e.g. historian), not localhost.

Symptom: Dataset.Writes.Late stays elevated for hours after connectivity is restored.
Likely cause: A large volume of buffered data is being replayed; this is expected for long outages.
What to do: Allow the replay to complete. The rate at which buffered data replays depends on network speed and Historian write throughput. Do not stop the Collector during an active replay.