This week’s blog comes from Jochen Cremer of the Department of Electrical and Electronic Engineering. Earlier today, he was the speaker at our weekly seminar covering his research on increased automation of the electricity grid with support from new machine learning techniques. He has written this blog to cover the same topics and you can download his slides as a PDF.
In 2019, climate change has become a central debate in society, with many now declaring it an emergency. While politicians seem to have decided to move only moderately towards net-zero carbon emissions, demonstrations such as Extinction Rebellion and the Youth Strikes have built up momentum.
Recently, the UK set a target of net-zero carbon emissions by 2050. For the electricity grid, this means replacing fossil fuels with renewable or nuclear resources over this 30-year journey. Integrating that level of renewables into the existing grid presents many challenges; the two key ones concern demand/generation balancing and the intermittent nature of renewables.
The grid: A balancing act
The key thing to remember is that, in the electricity grid, the amount of energy generated must match the amount requested: flicking a light switch should turn on a light. With fossil fuel generation, this was solved simply by burning more fuel; it is not that simple with renewables. This is related to the problem of intermittency: sometimes renewables generate more, or less, energy than is needed, and using energy storage to meet these differences is still not a solved problem.
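To make the balancing act concrete, here is a minimal toy sketch (all numbers are invented for illustration): at each hour, intermittent generation must match a flat demand, and a small battery absorbs surpluses or covers deficits where it can.

```python
# Toy illustration of demand/generation balancing with storage.
# All figures are invented for the sketch, not real grid data.
renewable = [3.0, 5.0, 2.0, 6.0]   # MW generated each hour (intermittent)
demand    = [4.0, 4.0, 4.0, 4.0]   # MW requested each hour (flat)

battery = 1.0       # MWh stored initially
capacity = 5.0      # MWh the battery can hold at most
shortfall = 0.0     # MWh of demand we could not supply

for gen, dem in zip(renewable, demand):
    surplus = gen - dem
    if surplus >= 0:
        # Store the excess, up to the battery's capacity.
        battery = min(capacity, battery + surplus)
    else:
        # Discharge the battery to cover the deficit.
        draw = min(battery, -surplus)
        battery -= draw
        shortfall += -surplus - draw   # any remaining deficit is unmet demand
```

Even in this tiny example, an hour of low generation with an empty battery leaves unmet demand, which is exactly why storage alone does not yet solve intermittency.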
At the same time, the grid infrastructure is also directly affected by climate change. Electricity grids in many countries were designed over the last century when climate conditions were different from the ones of today. Changing these conditions stresses the system. The weather is more uncertain and more extreme weather events such as storms will occur. This may result in power blackouts that can be extremely dangerous for society (depending on their extent).
We can find solutions to the changing climate and their implications on the grid, but we need concerted, collaborative, efforts. This will mean using the flexibility of everyone from private consumers and the manufacturing industry to corporate offices and local communities to balance demand and generation at all times. We will need many more distributed devices like batteries, heat pumps or electric vehicles on the system that can manage the balancing act of demand and generation. These will come with added benefits of also addressing other challenges in other areas like transport and heating.
Distributing demand: A blessing and a curse
For the grid, these distributed devices are a blessing and a curse at the same time. The curse is that, like renewables, they introduce more dynamics and uncertainty into the system and may lead to instabilities in its operation. As mentioned, avoiding instabilities and power blackouts is a very high priority, as they have serious implications for society. The blessing is that these distributed devices act as distributed sensors, delivering data that we can learn from and flexibility that we can control, both to increase profitability and, most importantly, to ensure stable operation of the grid.
If we could use this data for prediction, for example to foresee whether power blackouts may occur, we would be using the blessing to address the curse. If we can find novel data-driven methods that allow higher shares of renewables in the system, we can reduce the dependency on fossil fuels and move to net-zero carbon emissions faster. Although novel machine learning methods may open up pathways to analyse this vast amount of data, several barriers lie along the way.
Machine learning: Not a panacea
The key barriers result from the fact that machine learning is not based on physical laws. In physics-based approaches, we typically describe the system response with a model that is accurate enough under specified assumptions. We understand the entire chain, from the underlying assumptions of these models to the intuition of the physics, and we can quantify accuracy and confidence.
When we use machine learning, the user lacks this understanding and intuition. Although machine learning takes a fundamentally different, data-driven approach, it is neither magic nor should it be considered simply a brute-force approach. In essence, it does something similar to physics-based models: it approximates the actual system response. However, it is of key importance to manage expectations about the type of questions (and answers) one can obtain from machine learning methods. This matters because the underlying assumptions are very different from those of physics-based methods (they are more probabilistic in nature), and it is the first step towards finding the right questions that machine learning is able to answer.
In machine learning, we start with observations of the system’s response and design a machine in such a way that it can learn from these observations. The idea is that, once accurate enough, the machine can be used to predict the system response. However, as with many computational tasks, it is a case of “Garbage In = Garbage Out”: if the learning workflow is poorly designed and the wrong assumptions are made, the machine will be unable to predict anything useful.
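The learn-from-observations idea above can be sketched in a few lines. This is a hypothetical, deliberately simplified illustration: we pretend the system response is a noisy linear function of an input (the relationship, names, and numbers are all invented), fit a model to observed pairs, and then use it to predict the response for an unseen input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observations of a hypothetical system: input (e.g. a demand level) and
# the noisily measured response. The true relationship here is linear,
# with invented coefficients 0.8 and 0.1.
x = rng.uniform(0.0, 1.0, size=200)
y = 0.8 * x + 0.1 + rng.normal(0.0, 0.01, size=200)

# "Design a machine that learns from observations": here, a least-squares
# line fit stands in for the learning step.
slope, intercept = np.polyfit(x, y, deg=1)

# Once accurate enough, the machine predicts the response for unseen input.
predicted = slope * 0.5 + intercept
```

With clean, representative observations the fit recovers the underlying relationship; feed it unrepresentative or corrupted data and the same code will happily learn garbage, which is the “Garbage In = Garbage Out” point in miniature.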
In grid operation, such inaccurate predictions can have a very severe impact, for example failing to predict a major blackout. When used correctly, however, machine learning can be a very powerful tool that handles vast amounts of data and operates the system more effectively. Hence, it is a very worthwhile direction to investigate.
The way forward
On the roadmap towards a fully automated and digitalised grid, an intermediate step is to apply machine learning in combination with established, physics-based methodologies. In this intermediate step, the user (the system operator) can build up trust by using familiar physics-based tools in a more effective way. Building up this trust and experience is of key importance in a task as critical as handling and avoiding power blackouts.
However, if this critical task of avoiding power blackouts cannot be better addressed by using these techniques, we must either limit the share of renewable energy in the system or invest significantly in the grid infrastructure to build up redundancy in its assets. Taking either of these two pathways to net-zero carbon would be very expensive. That is why data-driven methods offer the opportunity to keep up with the UK’s 2050 target and to make the journey to net-zero cheaper while keeping the lights on.
Jochen is a final year PhD student in the Control and Power Group (CAP) of the Department of Electrical and Electronic Engineering at Imperial College London. Before joining CAP group in 2017 he undertook research in mathematical optimization and control theory at Carnegie Mellon and MIT. He is an engineer at heart and holds an M.Sc. in Chemical Engineering, a B.Sc. in Electrical Engineering and a B.Sc. in Mechanical Engineering from RWTH Aachen University, Germany.
In 2019 he co-chaired the International Student Energy Summit 2019 for 650 students passionate about transforming the current system to low carbon. Jochen’s research interests lie at the intersection of machine learning and mathematical optimization applied to the operation of the power system.