Abstract
Deep learning models require abundant labeled data for training, which is expensive and time-consuming to obtain, particularly in medical imaging. Active learning (AL) aims to maximize model performance with few labeled samples by incrementally expanding the training set with newly labeled examples. In this work, we aim to learn a "good" feature representation that is both sufficient and minimal, facilitating effective AL for medical image classification. We propose an efficient AL framework based on off-the-shelf self-supervised learning models, complemented by a label-irrelevant patch augmentation scheme designed to reduce redundancy in the learned features and mitigate overfitting during AL. Our framework makes AL efficient in terms of parameters, samples, and computational cost. The benefits of this approach are extensively validated across various medical image classification tasks employing different AL strategies.
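A minimal sketch of the general idea of active learning on frozen self-supervised features is given below, assuming features have already been extracted by an off-the-shelf SSL encoder; the encoder, the entropy-based acquisition rule, the query budget, and all names are illustrative assumptions, not the paper's exact method (in particular, the label-irrelevant patch augmentation scheme is not shown).

```python
# Hypothetical sketch: parameter-efficient AL where only a small linear head
# is trained on frozen SSL features, and the unlabeled pool is queried by an
# uncertainty (entropy) score. All quantities below are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for features produced by an off-the-shelf SSL encoder (e.g. 512-d).
features = rng.normal(size=(1000, 512)).astype(np.float32)
labels = rng.integers(0, 4, size=1000)          # 4 hypothetical classes

labeled = list(rng.choice(1000, size=20, replace=False))   # small seed set
unlabeled = [i for i in range(1000) if i not in labeled]

for round_idx in range(5):                      # 5 AL rounds, 20 queries each
    # Train only a lightweight classifier head; the SSL encoder stays frozen.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features[labeled], labels[labeled])

    # Entropy-based acquisition over the unlabeled pool.
    probs = clf.predict_proba(features[unlabeled])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    query = [unlabeled[i] for i in np.argsort(-entropy)[:20]]

    # An oracle would provide labels here; in this toy setup they are known.
    labeled += query
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_idx}: {len(labeled)} labeled samples")
```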
| Original language | English (US) |
|---|---|
| Pages (from-to) | 419-439 |
| Number of pages | 21 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 234 |
| State | Published - 2024 |
| Event | 1st Conference on Parsimony and Learning, CPAL 2024, Hong Kong, China |
| Duration | Jan 3, 2024 → Jan 6, 2024 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability