Colloquium talk: June 29, 2021, Johanna Brosig

[Images: meeting room 04.137 and the presentation screen]
Deep reinforcement learning makes it possible to model complex control problems: agents learn how to act in order to maximize their reward. In this work, microgrids are considered and modeled as a multi-agent reinforcement learning problem. The aim is to learn a policy such that the energy consumption within the microgrid is covered by the participating batteries, generators, and photovoltaic systems.
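As a rough illustration of this problem setting, the following sketch shows a shared reward in which each agent contributes a power set-point and all agents are rewarded for covering the current consumption. The names (`microgrid_reward`, `demand`, the agent keys) are hypothetical and not taken from the talk.

```python
# Minimal sketch of a cooperative microgrid reward, assuming each agent
# proposes a power set-point in kW; names and units are illustrative only.

def microgrid_reward(actions: dict[str, float], demand: float) -> float:
    """Negative absolute mismatch between supplied power and consumption."""
    supplied = sum(actions.values())      # power from batteries, generators, PV
    return -abs(supplied - demand)        # 0 is best: consumption fully covered

# Example: three agents try to cover 5.0 kW of consumption.
actions = {"battery": 1.5, "generator": 2.0, "pv": 1.0}
print(microgrid_reward(actions, demand=5.0))  # -0.5, i.e. 0.5 kW uncovered
```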
Learning this policy is challenging. Deep learning models have many parameters, so training becomes computationally expensive, requiring large amounts of training data and many iterations. Moreover, training success crucially depends on choosing suitable hyperparameters.
In this work, ways to speed up the training procedure are evaluated. Performance guidelines and concurrency are implemented and assessed.
Experience replay has been found to stabilize training, yet it faces difficulties in a multi-agent setting because the environment becomes non-stationary from each agent's perspective. It is assessed whether experience replay can improve the training procedure in this case.
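For context, a minimal replay buffer of the kind typically used to stabilize deep RL training might look as follows; this is a generic sketch, not the implementation evaluated in the thesis.

```python
import random
from collections import deque

# Generic experience replay buffer. In the multi-agent setting described
# above, stored transitions can become stale because the other agents'
# policies keep changing, which makes the environment non-stationary.

class ReplayBuffer:
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are dropped first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # Uniform sampling breaks the temporal correlation of consecutive steps.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```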
Furthermore, hyperparameter optimization is implemented to simplify the identification of suitable hyperparameters.
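The announcement does not state which search strategy or library was used; as one possible illustration, a plain random search over common RL hyperparameters could look like this, where `train_and_eval` is a hypothetical function returning, e.g., the achieved episode return.

```python
import random

# Illustrative random search for hyperparameter optimization; the ranges
# and the `train_and_eval` callback are assumptions, not the thesis setup.

def random_search(train_and_eval, n_trials: int = 20):
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** random.uniform(-5, -2),   # log-uniform
            "batch_size": random.choice([32, 64, 128, 256]),
            "discount_factor": random.uniform(0.9, 0.999),
        }
        score = train_and_eval(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```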
The experiments show that by improving the implementation, especially by introducing concurrency, significant speedups can be achieved: the runtime could almost be halved. Experience replay, however, did not improve the training.
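One common way concurrency shortens such training, shown below as a sketch, is to collect independent environment rollouts in parallel worker processes while the learner stays in the main process; `collect_episode` is a hypothetical placeholder, and the exact parallelization used in the thesis is not described in this announcement.

```python
from concurrent.futures import ProcessPoolExecutor

def collect_episode(seed: int) -> list:
    """Run one microgrid episode and return its transitions (placeholder)."""
    return []  # would return (state, action, reward, next_state, done) tuples

def collect_parallel(num_episodes: int, num_workers: int = 4) -> list:
    # Rollouts run in separate processes; results are flattened for the learner.
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        batches = pool.map(collect_episode, range(num_episodes))
    return [t for batch in batches for t in batch]

if __name__ == "__main__":
    transitions = collect_parallel(num_episodes=8)
```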
Time: 10:15 a.m.

Join Zoom meeting
https://fau.zoom.us/j/66368769991?pwd=UkhHcUdPTnpTLzVhUEdXa09CWjZMQT09

Meeting ID: 663 6876 9991
Passcode: 405300
One-tap mobile dial-in