The ability to learn is an essential aspect of future intelligent systems operating in uncertain environments. However, learning a new model or behavior rarely comes for free; it involves a cost. For example, gathering informative data can be challenging due to physical limitations, and updating models can require substantial computation. Moreover, learning in autonomous agents often requires exploring new behavior, which typically means deviating from nominal or desired behavior. Hence, the question of when to learn is essential for the efficient and intelligent operation of autonomous systems.
Event-triggered learning (ETL) was first proposed in [ ] to make principled decisions on when to learn new dynamics models and was applied to efficient communication in distributed systems. Information exchange in distributed systems is a key aspect of solving collaborative tasks. Communication often takes place over wireless networks and therefore needs to be used sparingly to avoid overloading the network. Dynamical models are deployed to predict other agents' behavior; accurate models are thus essential for effectively reducing communication. We developed a stochastic trigger that decides when to learn a new model and derived statistical guarantees that the triggering happens at the right time.
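To make the trigger idea concrete, the following is a minimal sketch of such a stochastic test, not the exact trigger from the cited work: it compares the empirical mean of a bounded performance statistic (e.g., prediction errors or inter-communication times) against its expectation under the current model, with a Hoeffding-style confidence bound controlling the false-trigger probability. The function name `etl_trigger`, its parameters, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def etl_trigger(samples, model_expectation, sample_bound, delta=0.05):
    """Hypothetical stochastic learning trigger (illustrative sketch).

    Fires when the empirical mean of a performance statistic deviates
    significantly from the value predicted by the current model.

    samples           -- i.i.d. observations of the statistic, each in [0, sample_bound]
    model_expectation -- expected value of the statistic under the current model
    sample_bound      -- known upper bound on each sample (for Hoeffding's inequality)
    delta             -- admissible false-trigger probability
    """
    n = len(samples)
    empirical_mean = np.mean(samples)
    # Hoeffding's inequality: P(|mean - E[X]| >= kappa) <= 2*exp(-2*n*kappa^2 / b^2).
    # Choosing kappa as below bounds the false-trigger probability by delta,
    # i.e., the trigger fires at the right time with high probability.
    kappa = sample_bound * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return abs(empirical_mean - model_expectation) > kappa


# Usage example with synthetic data: the observed errors have mean ~0.5,
# while the current model expects 0.3, so the trigger should fire.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    errors = rng.uniform(0.0, 1.0, size=200)  # observed statistic, bounded in [0, 1]
    if etl_trigger(errors, model_expectation=0.3, sample_bound=1.0):
        print("Trigger fired: learn a new model")
    else:
        print("Model still adequate")
```

The concentration-inequality design is what enables statistical guarantees: learning is triggered only when the observed data are provably (up to probability delta) inconsistent with the current model, rather than on every transient deviation.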
While effectively reducing communication in distributed systems, the developed ideas are more general and address, at their core, the question of when to learn, with possible extensions in several directions. Among others, we investigate generalizations to partially observable and nonlinear systems [ ]. Using different performance signals to make structured learning decisions is another important aspect of ongoing research; it also connects to reinforcement learning, where the exploration-exploitation trade-off is a closely related manifestation of the same problem.