Publication: Reinforcement Learning Algorithms Applied to Reactive and Resistive Control of a Wave Energy Converter

Date
2022
Journal title
Journal ISSN
Volume title
Publisher
2021 IEEE Chilean Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON)
Abstract
Reinforcement learning (RL) techniques are applied in different areas to optimize parameters; one application is the use of RL to maximize the energy obtained from wave energy converters (WEC). The main advantage of RL is that it can optimize the generation even when the wave and WEC characteristics change. Q-learning and SARSA RL-based approaches are presented in this work to optimize a reactive and a resistive control applied to a laboratory-scale point-absorber WEC. The proposed approaches are evaluated on three regular wave conditions using a model based on a one-degree-of-freedom system, where the power take-off forces include the variable damping and stiffness that are regulated by the control and optimized by the RL. Results show a correct behavior of the RL algorithms in optimizing both control techniques. Nevertheless, reactive control achieves up to 239% higher energy than resistive control under the same conditions. Regarding the comparison between the two RL algorithms, Q-learning presents faster convergence than SARSA, but the results from both algorithms are practically the same.
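The abstract's idea of using RL to tune a power take-off (PTO) parameter can be illustrated with a minimal sketch. This is not the paper's model: the `absorbed_power` surrogate, the candidate damping values, and all constants (`b_opt`, learning rate, exploration rate) are hypothetical stand-ins for the one-degree-of-freedom hydrodynamic simulation. For a single regular wave the problem reduces to a stateless (bandit-like) Q-learning update over discrete damping actions:

```python
import random

# Hypothetical surrogate: for a regular wave, time-averaged absorbed power
# of a one-DoF point absorber peaks at some PTO damping b_opt (values are
# illustrative, not from the paper).
def absorbed_power(b_pto, b_opt=120.0):
    return max(0.0, 100.0 - 0.01 * (b_pto - b_opt) ** 2)

ACTIONS = [60.0, 90.0, 120.0, 150.0, 180.0]  # candidate PTO damping values
q = {a: 0.0 for a in ACTIONS}                # single-state Q-table

alpha, epsilon = 0.1, 0.2                    # learning and exploration rates
rng = random.Random(0)

for episode in range(2000):
    # epsilon-greedy action selection over damping candidates
    if rng.random() < epsilon:
        a = rng.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = absorbed_power(a)                    # reward = mean absorbed power
    # stateless Q-learning update (gamma = 0: each wave run is one step)
    q[a] += alpha * (r - q[a])

best = max(q, key=q.get)                     # learned damping setting
```

In the paper's setting the same loop would run against the simulated WEC, with the sea state as the RL state and (damping, stiffness) pairs as actions for reactive control; SARSA would differ only in using the actually selected next action in the update.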
Description
Keywords
Wave energy, Resistive control, Reinforcement learning, Reactive control