Data Archiver

Data Archiving is part of the general Hyper Historian process. It provides a high-performance utility with a flexible architecture for exporting and accessing historical data. Data Archiving in Hyper Historian is configured via the Workbench and currently supports SQL Server, Azure SQL, and Azure Data Lake as target storages.

General Features

  • Data synchronization is scheduled by the Global Triggering system

  • Manual synchronization is supported (similar to re-calculation tasks)

  • Dataset-based export

  • Support for various types of data storage

  • Dataset filters and Data Storage connections can be aliased

Datasets

  • A dataset consists of Column and Filter Definitions (equivalent to the SELECT and WHERE clauses of a SQL query; see the sketch after this list)

  • Columns in datasets are defined by users

    • Columns can combine metadata, raw values, or aggregated values

    • Expression-based columns are supported

  • One dataset row can contain values from a single data point

  • Elements of value arrays can be mapped to separate columns

    • Performance calculations can be used to create rows with values from multiple data points

  • Multiple aggregates of the same tag can be mapped to different columns

  • Data Point Filters can be aliased
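
To illustrate the SELECT/WHERE analogy mentioned above, the following Python sketch builds a query-like description from hypothetical column and filter definitions. The class and tag names (DatasetColumn, DatasetFilter, Boilers, etc.) are illustrative assumptions, not part of the Hyper Historian API.

    # Illustrative sketch only; these classes are hypothetical, not a Hyper Historian API.
    from dataclasses import dataclass

    @dataclass
    class DatasetColumn:
        name: str    # column name in the exported dataset
        source: str  # metadata field, raw value, aggregate, or expression

    @dataclass
    class DatasetFilter:
        expression: str  # condition selecting the data points (the WHERE-like part)

    # A dataset combining metadata, an aggregated value, and an expression-based column.
    columns = [
        DatasetColumn("TagName", "metadata:name"),
        DatasetColumn("HourlyAvg", "aggregate:avg(1h)"),
        DatasetColumn("AvgTimesTwo", "expression:avg(1h) * 2"),
    ]
    filters = [DatasetFilter("folder = 'Boilers'")]

    # Conceptually, the dataset corresponds to a SELECT ... WHERE ... query.
    select_clause = ", ".join(c.name for c in columns)
    where_clause = " AND ".join(f.expression for f in filters)
    print(f"SELECT {select_clause} WHERE {where_clause}")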

Storages

  • File-based or table-based

    • Text files formatted as CSV

  • Data can be organized based on a time schedule (e.g. create a new file every Monday, every day, etc.; see the sketch after this list)

  • Connection string can be aliased

  • Supported storages

    • SQL Server

    • Azure SQL

    • Data Lake
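
As a rough illustration of time-scheduled, file-based storage, the Python sketch below appends exported rows to a new CSV file per day. The directory layout and file-name pattern are assumptions for illustration, not the archiver's actual output format.

    # Illustrative sketch only; the file-name pattern is an assumption,
    # not the archiver's actual output layout.
    import csv
    from datetime import datetime, timezone
    from pathlib import Path

    def archive_rows(rows, base_dir="archive"):
        """Append rows to a CSV file named after the day of each row's timestamp."""
        Path(base_dir).mkdir(exist_ok=True)
        for timestamp, tag, value in rows:
            # One file per day, e.g. archive/2024-01-15.csv ("create new file every day").
            path = Path(base_dir) / f"{timestamp:%Y-%m-%d}.csv"
            is_new_file = not path.exists()
            with path.open("a", newline="") as f:
                writer = csv.writer(f)
                if is_new_file:
                    writer.writerow(["Timestamp", "TagName", "Value"])
                writer.writerow([timestamp.isoformat(), tag, value])

    archive_rows([
        (datetime(2024, 1, 15, 8, 0, tzinfo=timezone.utc), "Boiler1.Temperature", 71.3),
        (datetime(2024, 1, 16, 8, 0, tzinfo=timezone.utc), "Boiler1.Temperature", 70.8),
    ])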

Tasks

  • Topmost entity: connects datasets and data storages

  • Scheduling

  • Alias definition

  • Multiple Datasets can be synchronized by a single task

  • Manual Synchronization is based on Tasks
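
To make the task concept concrete, here is a minimal Python sketch of a task object that ties datasets to a storage connection, resolves aliases, and supports a manual synchronization over a time range. All class, method, and alias names are hypothetical and not part of the product's API.

    # Illustrative sketch only; class, method, and alias names are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ArchiveTask:
        datasets: list       # names of the datasets exported by this task
        storage: str         # connection string, may contain alias placeholders
        schedule: str        # trigger supplied by the Global Triggering system
        aliases: dict = field(default_factory=dict)

        def resolve(self, text):
            """Substitute alias placeholders such as <<SQL_HOST>> in connection strings or filters."""
            for alias, value in self.aliases.items():
                text = text.replace(f"<<{alias}>>", value)
            return text

        def synchronize(self, start, end):
            """Manual synchronization over a time range (similar to a re-calculation task)."""
            target = self.resolve(self.storage)
            for dataset in self.datasets:
                print(f"Exporting {dataset} for {start:%Y-%m-%d}..{end:%Y-%m-%d} to {target}")

    task = ArchiveTask(
        datasets=["BoilerTemperatures", "PumpFlows"],
        storage="Server=<<SQL_HOST>>;Database=Archive",
        schedule="Every Monday 00:00",
        aliases={"SQL_HOST": "sql.example.local"},
    )
    task.synchronize(datetime(2024, 1, 1), datetime(2024, 1, 8))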

Configuration

  • The Data Archiver extension is disabled by default.

  • To enable it, a configuration structure has to be added to the Hyper Historian configuration.

  • In Workbench, select “Configure Database” and install “Hyper Historian – Data Archiver”.

See Also:

Storage

Datasets

Tasks