IEEE Transactions on Emerging Topics in Computational Intelligence / 2 April 2025
Learn to Adapt: A Policy for History-Based Online Adaptation
Although today's reinforcement learning-based control agents operate effectively in ideal situations, their performance can degrade significantly under changing conditions. This paper addresses this limitation by proposing an experience-based online adaptation framework, designed to enable agents to adjust their policies to differing domains and scenarios by leveraging the information embedded in past state-action transitions. The method extracts these additional features via a history-based adaptor module and introduces them as supplementary input to the agents. It also includes a state encoder network, which improves representation capability and supports the adaptation process. Furthermore, the introduced modules can substitute for information that is hard to obtain during deployment (e.g., sensor or domain-specific data), which facilitates real-world applications. To emphasize the contribution of our work, we evaluate the solution in highly dynamic environments where changing physical parameters necessitate diverse control strategies. Our approach significantly improves upon conventional algorithms and achieves performance comparable to agents receiving privileged information.
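The abstract describes a policy that combines a state encoder with a history-based adaptor whose latent features stand in for privileged information. The following is a minimal NumPy sketch of that data flow under assumed dimensions; all names (`adaptor`, `encoder`, `policy`) and the single-layer networks are illustrative placeholders, not the paper's actual architecture.

```python
import numpy as np

# Assumed, illustrative dimensions (not from the paper).
STATE_DIM, ACTION_DIM, HIST_LEN = 8, 2, 10
LATENT_DIM, ENC_DIM = 16, 32

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Randomly initialised weights and bias for one dense layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def forward(x, params):
    """Single dense layer with tanh activation."""
    W, b = params
    return np.tanh(x @ W + b)

# History-based adaptor: compresses the last HIST_LEN (state, action)
# transitions into a latent feature vector that substitutes for
# hard-to-obtain privileged information at deployment time.
adaptor = linear(HIST_LEN * (STATE_DIM + ACTION_DIM), LATENT_DIM)
# State encoder: a richer representation of the current observation.
encoder = linear(STATE_DIM, ENC_DIM)
# Policy head: consumes the encoded state plus the adaptation features.
policy = linear(ENC_DIM + LATENT_DIM, ACTION_DIM)

history = rng.standard_normal((HIST_LEN, STATE_DIM + ACTION_DIM))
state = rng.standard_normal(STATE_DIM)

z = forward(history.reshape(-1), adaptor)      # adaptation features
e = forward(state, encoder)                    # encoded current state
action = forward(np.concatenate([e, z]), policy)
print(action.shape)  # (2,)
```

The key point this sketch captures is that the policy never sees domain parameters directly: it conditions only on the current state and on features distilled from the recent transition history.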