Abstract
We formulate a unifying framework for unsupervised continual learning (UCL) that disentangles learning objectives specific to the present and the past data, encompassing stability, plasticity, and cross-task consolidation. The framework reveals that many existing UCL approaches overlook cross-task consolidation and attempt to balance plasticity and stability within a shared embedding space, which hurts performance due to a lack of within-task data diversity and reduced effectiveness in learning the current task. Our method, Osiris, explicitly optimizes all three objectives in separate embedding spaces and achieves state-of-the-art performance on all benchmarks, including two novel benchmarks proposed in this paper that feature semantically structured task sequences. Finally, we show preliminary evidence that continual models can benefit from these more realistic learning scenarios.
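The abstract only names the three objectives and the use of separate embedding spaces. The sketch below is one hedged illustration of how such a decomposition could look in code: it assumes an encoder `f`, two hypothetical projection heads `g_cur` and `g_past` (standing in for the separate embedding spaces), a small replay memory of past-task examples, and InfoNCE-style contrastive losses for each term. It is an assumption-laden illustration of the general idea, not Osiris's actual objective or the authors' implementation.

```python
# Hypothetical sketch of a three-objective UCL loss. The encoder f, the
# projection heads g_cur / g_past, the replay memory, and the InfoNCE
# losses are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.5):
    """Symmetric InfoNCE loss between two batches of paired embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def cross_task_nce(z1, z2, z_mem, temperature=0.5):
    """Contrast two views of current-task data, treating replayed
    past-task embeddings as extra negatives (cross-task consolidation)."""
    z1, z2, z_mem = (F.normalize(z, dim=1) for z in (z1, z2, z_mem))
    pos = (z1 * z2).sum(dim=1, keepdim=True) / temperature
    neg = z1 @ z_mem.t() / temperature
    logits = torch.cat([pos, neg], dim=1)
    targets = torch.zeros(z1.size(0), dtype=torch.long, device=z1.device)
    return F.cross_entropy(logits, targets)


def three_objective_loss(f, g_cur, g_past, x1, x2, m1, m2):
    """x1, x2: two augmented views of the current-task batch;
    m1, m2: two augmented views of a batch from the replay memory."""
    h1, h2 = f(x1), f(x2)
    hm1, hm2 = f(m1), f(m2)

    # Plasticity: learn the current task in its own embedding space.
    plasticity = info_nce(g_cur(h1), g_cur(h2))
    # Stability: keep representing replayed past data consistently,
    # optimized in a separate embedding space.
    stability = info_nce(g_past(hm1), g_past(hm2))
    # Cross-task consolidation: relate current data to past data.
    consolidation = cross_task_nce(g_past(h1), g_past(h2), g_past(hm1))

    return plasticity + stability + consolidation
```

In this reading, the key design choice is that the current-task term and the past-related terms never compete inside the same projection space, which is the failure mode the abstract attributes to shared-embedding approaches.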
| Original language | English (US) |
|---|---|
| Pages (from-to) | 388-409 |
| Number of pages | 22 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 274 |
| State | Published - 2024 |
| Event | 3rd Conference on Lifelong Learning Agents, CoLLAs 2024, Pisa, Italy; Jul 29, 2024 – Aug 1, 2024 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability