BaseN’s STORY

STORY is a project about energy storage and smart energy grids, but behind any modern project there is also an ICT infrastructure. The project spans multiple sites, each with its own internal ICT system, ranging from industrial-strength SCADA systems to custom-built end-user installations with traditional smart meters and LoRa equipment. All of this equipment produces data for KPI analysis and remote long-term optimization software, and receives control messages from remote systems.

The BaseN platform sits in the middle of this, collecting data from all the other platforms, storing it and passing it along. Currently the system consists of an admin portal for setting everything up, seven end-user portals for data visualization and KPI reporting, and ten-odd interfaces for data import and export. Data is collected over multiple protocols (OPC-UA, ETSI M2M, ...), passed through an abstraction layer that unifies it into a common time series format, and then written to the common data storage. All data is saved raw, with no lossy compression or aggregation applied, so that later research has as complete access as possible.
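
As a rough sketch of what such an abstraction layer can look like (the names and fields below are illustrative assumptions, not BaseN's actual API), per-protocol adapters map each payload onto one shared record type:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical unified record; the platform's real internal format is not public.
@dataclass
class TimeSeriesPoint:
    source: str          # e.g. "opcua://site3/inverter1/power"
    timestamp: datetime  # always UTC after normalization
    value: float
    unit: str

def from_opcua(node_id: str, server_time: datetime, value: float, unit: str) -> TimeSeriesPoint:
    """Map an OPC-UA data change notification to the common format."""
    return TimeSeriesPoint(source=f"opcua://{node_id}", timestamp=server_time,
                           value=value, unit=unit)

def from_etsi_m2m(container: str, payload: dict) -> TimeSeriesPoint:
    """Map an ETSI M2M content instance to the common format."""
    return TimeSeriesPoint(
        source=f"m2m://{container}",
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
        value=float(payload["val"]),
        unit=payload.get("unit", ""),
    )
```

Whatever the wire protocol, everything downstream (storage, KPI calculations, models) then only needs to understand one format.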



Figure 1: Server racks


The system runs on a handful of rack servers: one for collecting data and two redundant ones for storing and processing it. The system is based on service discovery and tries to use its computation resources as effectively as possible. If more resources such as CPU or disk are needed, new servers can be added with minimal downtime, as all parts can handle situations where they cannot see other servers or services.
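
The platform's actual discovery mechanism is not described here, but a minimal sketch of heartbeat-based discovery with graceful degradation, using invented names and timeouts, might look like this:

```python
import random
import time

# An instance is considered gone if it has not sent a heartbeat recently.
HEARTBEAT_TIMEOUT = 30.0  # seconds; an assumed value for illustration

class ServiceRegistry:
    def __init__(self) -> None:
        # service name -> {instance address: last heartbeat time}
        self._instances: dict[str, dict[str, float]] = {}

    def heartbeat(self, service: str, address: str) -> None:
        self._instances.setdefault(service, {})[address] = time.monotonic()

    def live_instances(self, service: str) -> list[str]:
        now = time.monotonic()
        return [addr for addr, seen in self._instances.get(service, {}).items()
                if now - seen < HEARTBEAT_TIMEOUT]

def store(registry: ServiceRegistry, payload: bytes) -> bool:
    """Send to any live storage instance; report failure instead of crashing."""
    instances = registry.live_instances("storage")
    if not instances:
        # No storage node visible: the caller buffers locally and retries later.
        return False
    address = random.choice(instances)
    # The actual network send to `address` would go here.
    return True
```

Because callers only ever ask the registry for whatever is currently alive, adding or removing a server changes the instance list but never the calling code.
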
All sites are monitored for device availability and for any locally detected errors and faults. This informs the admins in real time and later helps with preventive maintenance.
Knowing the exact state of the system before and after commands is also vital for control purposes. The system additionally monitors itself, sending this data to a separate support platform, making sure the admins are aware if anything out of the ordinary is happening.
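
A simple way to detect unavailable devices is to flag anything that has not reported within its expected interval. The sketch below assumes a single fixed threshold, which is an invented detail; per-device intervals would work the same way:

```python
from datetime import datetime, timedelta, timezone

# Assumed staleness threshold for the example; real intervals vary per device.
STALE_AFTER = timedelta(minutes=15)

def find_stale_devices(last_report: dict[str, datetime]) -> list[str]:
    """Return IDs of devices whose latest measurement is older than STALE_AFTER."""
    now = datetime.now(timezone.utc)
    return [device for device, seen in last_report.items()
            if now - seen > STALE_AFTER]

# In a deployment this check would run continuously and push its findings
# to the separate support platform mentioned above.
```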



Figure 2: Real-time monitoring of all measurements


Currently there are roughly 140 imported or exported measurements per second (12 million per day, 4.4 billion per year), and 176 individual devices and 861 sensors or data interfaces are monitored for availability or status. In compressed form this data amounts to 1.1 gigabytes, an average compression rate of over 95%. The data is used as input for both KPI calculations and predictive models. The models also feed their output back into the system, so the results can later be compared against what really happened.
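
The volume figures follow directly from the per-second rate:

```python
# Sanity check of the throughput numbers quoted above.
rate_per_second = 140
per_day = rate_per_second * 60 * 60 * 24   # measurements per day
per_year = per_day * 365                   # measurements per year
print(f"{per_day:,} per day, {per_year:,} per year")
# 12,096,000 per day, 4,415,040,000 per year
```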



Figure 3: Incoming measurement details


Each site has its own portal, showing the current status and providing drill-in functions for viewing more detailed data. An overview map shows the different subsites and measurement points; clicking one takes the user to the actual measurements, where they can drill in to see historical details.



Figure 4: Map overview



Figure 5: Weekly view



Figure 6: Zoom to history


Finally, the forecasts and model results are visualized so that end users can easily compare them against the realized measurements.
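
Under the hood, such a comparison boils down to aligning the forecast series with the realized one and computing an error measure. The snippet below is a generic illustration with invented values, not the project's actual evaluation method:

```python
def mean_absolute_error(forecast: list[float], actual: list[float]) -> float:
    """Average absolute deviation between forecast and realized values."""
    assert len(forecast) == len(actual), "series must cover the same timestamps"
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

# Invented example values (kW) for four time steps.
forecast = [210.0, 195.5, 188.0, 202.4]
actual   = [205.2, 199.1, 190.3, 198.7]
print(f"MAE: {mean_absolute_error(forecast, actual):.1f} kW")
```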