Squaring the data, analytics and processing circle: the elements of a solution.

The bottom line is that traditional enterprise database technologies will no longer suffice. The volume and complexity of data, coupled with the levels of performance demanded from systems that use such data, have reached a point where simply spending more to do the same things better, faster and in greater volume yields only marginal returns.
Published: Wed 31 Oct 2012

This is in part due to the step change in data volumes, as well as to interaction effects. Data, communications technology and analysis are no longer separate projects. Rather, data is shaped by the communications medium that carries it, and analytical and control software is as much about determining what data should be observed as about processing discrete units of data.

First Generation Technologies

The first generation of big data technologies was created by companies needing a quick fix: enterprise data management at web scale. Examples of these technologies include the Facebook-originated Cassandra database, the MongoDB database popularized by Foursquare, and Google’s MapReduce programming model.

When applied to the utilities ecosystem, there is concern that these solutions may be sub-optimal. A number of open source projects have taken up these data technologies but, lacking enterprise infrastructure standards, there is a risk that the resulting solutions will turn out to be relatively immature, special-purpose products.

Stochastic and statistical methods of data analysis

Fault diagnosis methods have tended neither to consider the significant influence of ICT in power systems, nor to actively seek to detect and distinguish between faults in distribution networks and failures in communication systems. The complexity of their underlying systems is leading utilities to rely increasingly on a range of techniques specifically adapted for dealing with uncertainty.

This search has led to the development of fuzzy logic approaches to power system fault diagnosis, enabling operators to model the inexactness and uncertainty created by protection device operations and incorrect data. One instance of fuzzy set theory in use comes from a case study of a method for fault section estimation that considers the network topology under the influence of a circuit breaker tripped by a preceding fault.

The application of fuzzy set theory to the network matrix is central to dealing with the uncertainty created by protection devices, and allows an examination of the relationship between the operated protective devices and the candidate fault sections.

In classical set theory, the membership of an element in a set is binary: an element either belongs to the set or it does not. Fuzzy set theory permits a degree of indeterminacy: each element is assigned a degree of membership, between 0 and 1, by a known or estimated membership function. It is widely used in areas where information is incomplete or imprecise, such as bioinformatics.
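The contrast between crisp and fuzzy membership can be sketched in a few lines of Python. The triangular membership function below, and the overload reading and thresholds it is applied to, are illustrative assumptions rather than anything taken from the case study above:

```python
def triangular(x, a, b, c):
    """Triangular membership function: degree rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Crisp set: a 0.62 p.u. overload either is or is not "severe" (threshold 0.7).
crisp_severe = 1 if 0.62 >= 0.7 else 0

# Fuzzy set: the same reading belongs to "severe" to a degree, here about 0.55.
fuzzy_severe = triangular(0.62, a=0.4, b=0.8, c=1.2)
```

Where the crisp test discards the reading entirely, the fuzzy degree preserves partial evidence that can be combined with other uncertain indications, such as protection device signals.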

A further technique that has been explored in dealing with smart grids is the Petri Net (PN). This is a mathematical model that allows not only the representation and description of an overall process but also the modeling of the process evolution in terms of its new state after each event has taken place.
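A Petri net's "state after each event" behavior can be shown with a minimal marking-and-firing sketch. The place and transition names below describe a hypothetical protection sequence and are purely illustrative:

```python
class PetriNet:
    """Minimal Petri net: places hold tokens; a transition fires when every
    input place holds at least one token, consuming and producing tokens."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # transition name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical event: a fault plus a closed breaker enables the relay to trip.
net = PetriNet({"fault_present": 1, "breaker_closed": 1})
net.add_transition("relay_trips",
                   inputs=["fault_present", "breaker_closed"],
                   outputs=["breaker_open", "fault_isolated"])
net.fire("relay_trips")
# The marking now describes the new state: breaker open, fault isolated.
```

The marking before and after `fire` is exactly the "state after each event" the model captures, which is what makes Petri nets attractive for tracing cascading grid events.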

Stochastic control is a form of control theory that may be applied under conditions of uncertainty in the data. It is assumed that some random noise, with a known probability distribution, affects both the state evolution and the observations available to the controller. The aim of stochastic control is therefore to design an optimal controller that achieves the control objective at minimum average cost despite the fundamentally “noisy” operating environment.
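A toy simulation illustrates the "minimum average cost despite noise" idea. The scalar system, noise level, cost weights and feedback gain below are all invented for illustration; a real design would derive the gain analytically rather than pick it by hand:

```python
import random

random.seed(0)

def simulate(gain, steps=10000, sigma=0.5, r=0.1):
    """Average quadratic cost of the feedback law u_t = -gain * x_t applied to
    the noisy scalar system x_{t+1} = x_t + u_t + w_t, w_t ~ N(0, sigma^2)."""
    x, total = 0.0, 0.0
    for _ in range(steps):
        u = -gain * x
        total += x * x + r * u * u   # running cost: state error plus control effort
        x = x + u + random.gauss(0.0, sigma)
    return total / steps

open_loop = simulate(0.0, steps=2000)  # no feedback: the state drifts as a random walk
feedback = simulate(0.8)               # feedback absorbs the noise; cost stays bounded
```

Even this crude controller keeps the average cost bounded, while the uncontrolled system's cost grows without limit as the random walk wanders. Designing the gain that minimizes the average cost is precisely the stochastic control problem.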

Adaptive Stochastic Control

Management of a smart grid is constantly subject to a wide range of variables and its control, computationally, is characterized as a multistage, time variable, stochastic optimization problem. One solution is to make use of complex, computationally driven, command and control systems such as Adaptive Stochastic Control (ASC).

ASC systems are at present found only in nuclear power plant management, but even these would be insufficient: while they are good at identifying the “next worst” state that a plant can enter at any given time, they are less good at determining the “next most likely”.

The ASC for the Smart Grid must identify the next worst AND the next most likely condition and will employ algorithms that perform complex mathematics using model simulations of the future in near real time. Such solvers are more common in military, petrochemical and transportation industries.

By contrast, in the utility industry, only Independent System Operators use such complex algorithms, and then only for the economic dispatch of power. This means that, in order to manage the smart grid and avoid catastrophic failure, these advanced approximate dynamic programming (ADP) control algorithms will need to be modified and significantly enhanced.
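For context, economic dispatch in its simplest form reduces to an "equal incremental cost" calculation: load is split so that each generator's marginal cost is the same. The two quadratic cost curves and the demand figure below are hypothetical:

```python
def dispatch(a1, b1, a2, b2, demand):
    """Two-generator economic dispatch with quadratic costs C_i = a_i*P_i^2 + b_i*P_i.
    Setting marginal costs equal, 2*a1*P1 + b1 = 2*a2*P2 + b2 with P1 + P2 = demand,
    and solving for P1 gives the closed form below."""
    p1 = (2 * a2 * demand + b2 - b1) / (2 * (a1 + a2))
    return p1, demand - p1

# Hypothetical units: generator 1 is cheap at the margin for large outputs.
p1, p2 = dispatch(a1=0.01, b1=10.0, a2=0.02, b2=8.0, demand=500.0)
# At the optimum both marginal costs, 2*a1*p1 + b1 and 2*a2*p2 + b2, coincide.
```

Managing a smart grid in real time is vastly harder than this static split, which is the point of the paragraph above: the dispatch machinery ISOs run today is only a starting point for ADP-style control.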

Computational intelligence

Smart grids will need to be supported by means of a combination of capabilities for system state prediction, dynamic stochastic power flow, system optimization, and solution checking.

One proposed solution is that of multi-agent systems, where each individual component of the system is some kind of intelligent agent. Alternatively, different systems or agents can be allocated to the solution of specific problems.

However, various disciplines, including game theory and economics, have shown that this approach is likely to converge, at best, to a Nash equilibrium, where no player is incentivized to change strategy provided no other player does. In general this will be far inferior to the best Pareto-optimal outcomes, where no player can be made better off without another player being made worse off.
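The classic illustration of this gap is the prisoner's dilemma, sketched below with an invented payoff matrix (the action names and numbers are illustrative, not drawn from any grid study):

```python
from itertools import product

# Payoffs (row player, column player) for two agents that each either
# "cooperate" (coordinate) or "defect" (act selfishly).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(a, b):
    """True if neither player gains by unilaterally switching strategy."""
    pa, pb = payoffs[(a, b)]
    return all(payoffs[(x, b)][0] <= pa for x in actions) and \
           all(payoffs[(a, y)][1] <= pb for y in actions)

nash = [cell for cell in product(actions, actions) if is_nash(*cell)]
```

Mutual defection is the lone Nash equilibrium, with payoff (1, 1), even though mutual cooperation at (3, 3) leaves both players better off. Self-interested agents converge on the inferior cell, which is exactly the risk for a grid run by uncoordinated intelligent agents.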

This outcome can be avoided only if some special effort is made to design the overall system to achieve some kind of collective optimality. That effort is one of the key defining elements of the fourth generation vision, embodied in the concepts of computational systems thinking and computational intelligence.

Computational systems rely on three strands of thinking and three agents (known as C3) to deal with an evolving, uncertain, variable and complex environment such as the smart grid. These are:

  • sense making, communication agents
  • decision making, computation agents
  • adaptation, control agents

At the heart of this lies a real-time wealth of knowledge that continuously evolves and refines itself as the system undergoes changes, learning and unlearning facts and insights over time. The typical paradigms for this Computational Intelligence are:

  • neural networks
  • immune systems
  • swarm intelligence
  • evolutionary computation systems
  • fuzzy systems

These paradigms need to be combined or “hybridized” into neuro-fuzzy systems, neuro-swarm systems, fuzzy-PSO (particle swarm optimization) systems, fuzzy-GA (genetic algorithm) systems, neuro-genetic systems and the like, to be generally superior to any one of the stand-alone variants and closer to the solution needed for a smart grid.