Think of an industrial computer and you might imagine a large, powerful, expensive piece of hardware located within a utility control room.
This type of technology matched the utility requirements of a centralised approach to power generation, transmission and distribution.
But as monitoring and control of a modern electricity grid becomes decentralised and distributed, how is industrial computing also adapting? What is the solution to managing disparate communication protocols between distributed renewable generation, energy storage and microgrids as well as Internet of Things (IoT) devices?
Tim Munro, Head of Field Sales for Canada East at industrial computing company Advantech, believes a decentralised computing model is needed to provide reliability, safety and feasibility to a decentralised power grid.
Munro identifies three major technology adoptions that are making a decentralised computing model possible.
Standardised industrial computing protocols
Munro notes that protocol standardisation, particularly in North America, was largely achieved through the Modbus and DNP3 protocols.
But although the simplicity of Modbus and the integrated time-stamping of DNP3 make them ideal for legacy controllers, these protocols were not widely adopted outside the Americas. In addition, there are “several different flavours of Modbus that aren’t necessarily compatible”, he says.
A breakthrough here is the European-developed standard IEC 60870-5-104, which, if universally adopted, makes running a decentralised computing system economically viable.
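One reason IEC 60870-5-104 suits fleets of small computers is how lightweight its link supervision is: connections are started, stopped and tested with fixed six-byte U-format control frames. The byte values below come from the published standard; the helper function itself is an illustrative sketch, not part of any particular library.

```python
# IEC 60870-5-104 U-format APCI frames: start byte 0x68, APDU length 4,
# then four control octets. The first control octet selects the function;
# the remaining three are always zero for U-format frames.
def iec104_u_frame(function: str) -> bytes:
    controls = {
        "STARTDT_act": 0x07, "STARTDT_con": 0x0B,  # start data transfer
        "STOPDT_act":  0x13, "STOPDT_con":  0x23,  # stop data transfer
        "TESTFR_act":  0x43, "TESTFR_con":  0x83,  # link test / keep-alive
    }
    return bytes([0x68, 0x04, controls[function], 0x00, 0x00, 0x00])
```

A controlling station sends `STARTDT_act` after opening the TCP connection (port 2404) and the controlled station replies with `STARTDT_con` before any process data flows.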
“If you are running hundreds of small computers as part of a control system, using one language makes the cost much lower. For computers to run gateways to translate protocols is time consuming and costly,” says Munro.
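Munro’s point about incompatible Modbus flavours is visible at the byte level: the same Read Holding Registers request is framed quite differently on Modbus TCP (MBAP header, no checksum) and Modbus RTU (unit address plus CRC-16), which is exactly the kind of translation a gateway has to perform. A minimal sketch with illustrative helper names:

```python
import struct

def modbus_tcp_read_holding(tx_id: int, unit: int, addr: int, count: int) -> bytes:
    """Modbus TCP request: MBAP header + PDU, no checksum (TCP provides integrity)."""
    pdu = struct.pack(">BHH", 0x03, addr, count)  # function 3: Read Holding Registers
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", tx_id, 0x0000, len(pdu) + 1, unit)
    return mbap + pdu

def crc16_modbus(data: bytes) -> bytes:
    """CRC-16/Modbus: poly 0xA001 (reflected), init 0xFFFF, appended low byte first."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return struct.pack("<H", crc)

def modbus_rtu_read_holding(unit: int, addr: int, count: int) -> bytes:
    """Modbus RTU request: unit address + PDU + CRC-16, for serial links."""
    frame = struct.pack(">BBHH", unit, 0x03, addr, count)
    return frame + crc16_modbus(frame)
```

The payload (function code, register address, count) is identical in both, but the framing, addressing and integrity check differ, so a gateway bridging the two must re-frame every message in both directions.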
New grid needs, new operating systems
Another enabler of decentralised computing is the availability of operating systems (OS) better suited to distributed processing.
“Before 2010, energy providers were limited to operating systems built on a centralised system based on either a server-client or even older mainframe-terminal system,” Munro explains.
He says these systems weren’t sufficiently stable for industrial computing needs and left some utilities running outdated technology that could no longer receive security patches.
The step change is the adoption of open-source operating systems based on Linux, the platform created and developed by the global computing community, as well as proprietary solutions better suited to smaller industrial computers.
Munro references Windows 10 IoT LTSB as an OS designed to run on a network of smaller computers, as well as the hundreds of Linux distributions running critical systems from banking to industrial control.
“The key advantage of Linux is that it is scalable to whatever size of computer processor you want to run it on, and it’s a low-cost solution.”
Industrial computers: costs
Making decentralised computing affordable for utilities to adopt is the third pillar of Munro’s business case for adopting a new model.
“Traditionally, industrial computers have had the reputation of high-reliability at a high price,” he says. “The only place in the power grid that could justify the expense of an industrial computer - often five times more expensive than their commercial counterparts - was for generation, transmission and some larger substations.”
And today? Over the past year, Munro says, the market has seen industrial computers that can run full Windows and Linux applications for US$300 or less.
“And these are still industrial-grade computers that can survive extremes of temperature and work near electricity transmission and distribution equipment.”
Deploying decentralised computing
Munro believes the case for decentralised computing is sound “and had to happen - how much larger could industrial computers get?” but stresses that the shift from a centralised control system to localised computing can be small and gradual.
“The advantage of undertaking IoT projects is that you can start small,” he says. “Under this model, you don’t need to take substations out of service to deploy automation technology, which obviously carries a high risk of disrupting service to your customers.”
Smaller computers can be installed at one point in the grid at a time, taking down only one leg of the network as the deployment builds out.
And there is no need to completely switch off legacy computing systems, says Munro. “Utilities can keep these as supervisory stations without control capability.”
He adds: “The key thing to remember is don’t overthink decentralised computing. Start small, gather data to see where the problems are, and then build out your computing system from that point.”