Reinforcement Learning Research

NREL's reinforcement learning (RL) research focuses on large, complex energy systems that challenge traditional methodology, including fleet management and controls for wind farms, grid-interactive buildings, autonomous vehicles, and smart homes.

To facilitate the use of RL for these increasingly large, complex systems, NREL researchers are leveraging the laboratory's state-of-the-art computing resources.

[Photo: Two researchers view a data visualization display of a multi-timescale simulation for evaluating real-time grid control challenges. Credit: Dennis Schroeder, NREL]

High-Performance Computing for Reinforcement Learning

As RL is applied to larger, more complex, and more realistic systems, implementing RL algorithms becomes more demanding as well. In these cases, training requires specialized graphics processing units (GPUs) and far more compute cores than a standard laptop provides.

NREL researchers have developed deep expertise in leveraging NREL's high-performance computing (HPC) facilities to implement RL algorithms. The research team has built out functionality to use the open-source RL library RLlib to run RL algorithms in parallel across multiple nodes of NREL's Eagle computing system.
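As a minimal sketch of this pattern (not NREL's actual training code), the snippet below assumes a Ray cluster has already been started across the nodes of an HPC job; RLlib then spreads rollout workers over the allocated cores. The environment name and worker count are placeholders.

```python
import ray
from ray import tune

# Connect to a Ray cluster started across the job's compute nodes
# (e.g., `ray start --head` on one node, `ray start --address=<head>`
# on the remaining nodes inside the batch allocation).
ray.init(address="auto")

# Train PPO with rollout workers distributed across the cluster.
tune.run(
    "PPO",
    stop={"training_iteration": 100},
    config={
        "env": "CartPole-v1",   # stand-in for an energy-systems environment
        "num_workers": 70,      # placeholder: roughly two 36-core nodes
        "framework": "torch",
    },
)
```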

A Framework for Multi-Agent Reinforcement Learning in Power Systems

NREL computational science researchers are developing software to facilitate the application of RL in various settings. One notable example is PowerGridworld, an open-source software package that provides users with a modular, customizable framework for multi-agent RL focused on power systems. Many frameworks exist for multi-agent RL, but none allows users to rapidly prototype and develop the environments themselves, making PowerGridworld a valuable tool for power-systems-focused RL and a promising avenue for future collaborations.
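PowerGridworld's own API is documented in its repository; purely to illustrate the modular, component-based multi-agent pattern it enables, an environment might be composed along these lines. All class and method names below are hypothetical, not PowerGridworld's actual interface.

```python
from typing import Dict, Tuple
import numpy as np

class Component:
    """Hypothetical device model exposing a Gym-style step interface."""
    def step(self, action: np.ndarray) -> Tuple[np.ndarray, float]:
        raise NotImplementedError

class Battery(Component):
    def __init__(self, capacity_kwh: float = 10.0):
        self.capacity = capacity_kwh
        self.soc = 0.5 * capacity_kwh  # state of charge

    def step(self, action):
        # action[0] in [-1, 1]: discharge (-) or charge (+) at up to 1 kWh/step
        self.soc = float(np.clip(self.soc + action[0], 0.0, self.capacity))
        reward = -abs(action[0]) * 0.01  # placeholder degradation cost
        return np.array([self.soc / self.capacity]), reward

class MultiAgentGridEnv:
    """Composes independent components into one multi-agent environment."""
    def __init__(self, agents: Dict[str, Component]):
        self.agents = agents

    def step(self, actions: Dict[str, np.ndarray]):
        obs, rewards = {}, {}
        for name, act in actions.items():
            obs[name], rewards[name] = self.agents[name].step(act)
        return obs, rewards

env = MultiAgentGridEnv({"battery_1": Battery(), "battery_2": Battery(5.0)})
obs, rew = env.step({"battery_1": np.array([0.3]), "battery_2": np.array([-0.2])})
```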

Designing Hybrid Reinforcement Learning Approaches

NREL researchers are exploring ways to solve energy systems control problems by combining RL with traditional approaches. Although RL has several advantages, including bypassing the need for a mathematical model of the underlying system, it also has drawbacks, such as high sample requirements and the difficulty of guaranteeing that operating constraints are satisfied. By combining traditional approaches with novel RL methods, NREL researchers are bringing together the advantages of each and developing new capabilities for solving energy systems control problems.
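One such hybrid pattern, reflected in the Hybrid-rl-mpc4clr software listed below, pairs a learned policy with model predictive control (MPC). The sketch that follows is a generic illustration of that division of labor, not the project's actual interface: the RL agent proposes a high-level setpoint, and an MPC layer enforces feasibility.

```python
import numpy as np

def rl_policy(observation: np.ndarray) -> float:
    """Hypothetical learned policy: maps grid state to a reserve setpoint."""
    return float(np.clip(observation.mean(), 0.0, 1.0))  # placeholder network

def mpc_dispatch(setpoint: float, state: np.ndarray) -> np.ndarray:
    """Hypothetical MPC layer: tracks the RL setpoint subject to limits.

    A real implementation would solve a constrained optimization over a
    prediction horizon; here we simply project onto ramp-rate bounds.
    """
    target = setpoint * np.ones_like(state)
    return np.clip(target, state - 0.1, state + 0.1)

state = np.array([0.4, 0.6, 0.5])
for _ in range(3):
    setpoint = rl_policy(state)            # RL chooses the high-level goal
    state = mpc_dispatch(setpoint, state)  # MPC keeps the trajectory feasible
```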

Featured Projects

Learn about some RL projects NREL researchers are undertaking.

Energy-Efficient Buildings

NREL is using RL to increase energy efficiency in a variety of applications, including buildings. Many recent advances in building operation rely on algorithms that are highly sensitive to inaccuracies in the underlying building models, making them difficult to replicate across building types and hindering scalability. NREL researchers are using RL to tackle these shortcomings: because RL does not require knowledge of the underlying building model, it does not suffer from the same replicability and scalability issues as traditional, model-driven techniques. The anticipated impacts of this work are far-reaching, including improved building efficiency, a better understanding of building occupancy patterns, and increased grid stability and resiliency.
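To make the model-free claim concrete, a tabular Q-learning update for a thermostat-like controller needs only observed transitions; no thermal model of the building appears anywhere. The states, actions, and reward below are a toy illustration, not NREL's formulation.

```python
import numpy as np

n_states, n_actions = 10, 3   # discretized indoor temperature; heat/hold/cool
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95      # learning rate, discount factor

def update(s: int, a: int, reward: float, s_next: int) -> None:
    """Model-free Q-learning: uses only the observed (s, a, r, s') tuple."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# Example transition: comfortable temperature, low energy use -> high reward
update(s=4, a=1, reward=1.0, s_next=5)
```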

Next-Generation Compression-Ignition Engines

Another RL-related research project focuses on increasing the efficiency of compression-ignition engines. Conventional compression-ignition engines have high thermal efficiencies, but the extensive exhaust gas treatments they require have the opposite effect, decreasing efficiency. One promising strategy for decreasing emissions involves splitting the fuel injection into a series of smaller injections. NREL researchers are using RL to optimize this process and maximize the engine's performance while decreasing emissions.
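A natural way to cast the injection-splitting problem for RL (again, an illustrative formulation rather than the project's actual one) treats each engine cycle as an episode: the action specifies the timing and fuel fraction of each sub-injection, and the reward trades efficiency against emissions.

```python
import numpy as np

def reward(efficiency: float, nox: float, soot: float,
           w_nox: float = 0.5, w_soot: float = 0.5) -> float:
    """Hypothetical reward: thermal efficiency minus weighted emission penalties."""
    return efficiency - w_nox * nox - w_soot * soot

# A candidate action: three sub-injections, each (start angle [deg], fuel fraction)
action = np.array([[-20.0, 0.2],
                   [-10.0, 0.3],
                   [ -2.0, 0.5]])
assert np.isclose(action[:, 1].sum(), 1.0)  # fractions must account for all fuel
```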

Battery Materials Discovery

NREL researchers are developing an RL-based approach to designing new battery materials. Currently, candidate material compositions must be identified manually, an expensive and time-consuming process. The RL-based approach automates the identification of potential new material compositions, making it faster and easier to identify new battery materials and facilitating rapid improvements in battery performance.
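Conceptually, and purely as a hypothetical sketch, the composition-design task can be framed as a sequential decision problem: each action adjusts one element's fraction, and the reward comes from a property predictor such as a simulation or surrogate model.

```python
import numpy as np

def predicted_performance(composition: np.ndarray) -> float:
    """Stand-in for a simulation or surrogate model scoring a candidate."""
    return float(-np.sum((composition - 1.0 / len(composition)) ** 2))

def step(composition: np.ndarray, element: int, delta: float):
    """Action: shift `delta` of the mixture toward one element, then renormalize."""
    new = composition.copy()
    new[element] = max(new[element] + delta, 0.0)
    new /= new.sum()
    return new, predicted_performance(new)

comp = np.array([0.5, 0.3, 0.2])        # hypothetical three-element mixture
comp, score = step(comp, element=2, delta=0.1)
```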

Publications

Two-Stage Reinforcement Learning Policy Search for Grid-Interactive Building Control, IEEE Transactions on Smart Grid (2022)

An Edge-Cloud Integrated Solution for Buildings Demand Response Using Reinforcement Learning, IEEE Transactions on Smart Grid (2020)

Transferable Reinforcement Learning for Smart Homes, 1st International Workshop on Reinforcement Learning for Energy Management in Buildings & Cities (2020)

Open-Source Software

Rlc4clr: Reinforcement Learning Controller for Critical Load Restoration Problems

Hybrid-rl-mpc4clr: Hybrid Reinforcement Learning Model Predictive Control for Reserve Policy Assisted Critical Load Restoration in Distribution Grids

Learning Building Control

PowerGridworld

Contact

Matthew Reynolds

Researcher IV, Applied Mathematics

Matthew.Reynolds@nrel.gov
303-275-4486

Xiangyu Zhang

Researcher III, Applied Mathematics

Xiangyu.Zhang@nrel.gov
303-275-4068
