Precautionary Unfairness in Self-Supervised Contrastive Pre-training

Abstract

Recently, self-supervised contrastive pre-training has become the de facto regime that enables efficient downstream fine-tuning. Meanwhile, its fairness issues are barely studied, even though fairness has drawn great attention from the machine learning community, where structured biases in data can lead to biased predictions against under-represented groups. Most existing fairness metrics and algorithms focus on supervised settings, e.g., those based on disparities in prediction performance, and become inapplicable in the absence of supervision. We are thus interested in the challenging question: how does representation (un)fairness in pre-training transfer to (un)fairness in downstream tasks, and can we define and pursue fairness in unsupervised pre-training? First, we empirically show that imbalanced groups in the pre-training data indeed lead to unfairness in the pre-trained representations, and that this cannot easily be fixed by fairness-aware fine-tuning without sacrificing efficiency. Second, motivated by the observation that the majority group of the pre-training data dominates the learned representations, we design the first unfairness metric applicable to self-supervised learning and leverage it to guide contrastive pre-training toward fairness-aware representations. Our experiments demonstrate that the underestimated representation disparities produce surges of over 10% on the proposed metric, and that our algorithm improves 10 out of 13 tasks on the 1%-labeled CelebA dataset. Code will be released upon acceptance.
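
To make the idea of representation-level unfairness concrete, here is a minimal sketch of one way such a metric could be computed from group-labeled embeddings. The abstract does not spell out the paper's actual metric, so the quantity below (a gap in per-group contrastive alignment) and the function names `groupwise_alignment` and `representation_disparity` are hypothetical illustrations, not the authors' formulation.

```python
# Illustrative sketch only: a hypothetical representation-disparity score,
# not the metric proposed in the paper.
import torch
import torch.nn.functional as F

def groupwise_alignment(z1: torch.Tensor, z2: torch.Tensor, groups: torch.Tensor):
    """Average cosine similarity between two augmented views, computed per group.

    z1, z2: (N, D) representations of two views of the same N samples.
    groups: (N,) integer labels for a sensitive attribute (e.g., majority vs. minority).
    """
    sims = F.cosine_similarity(z1, z2, dim=-1)  # (N,) view alignment per sample
    return {int(g): sims[groups == g].mean().item() for g in groups.unique()}

def representation_disparity(z1, z2, groups):
    """Hypothetical unfairness score: best-served minus worst-served group alignment."""
    per_group = groupwise_alignment(z1, z2, groups)
    return max(per_group.values()) - min(per_group.values())

# Usage (assuming an encoder and augmentation pipeline):
#   z1, z2 = encoder(augment(x)), encoder(augment(x))
#   score = representation_disparity(z1, z2, group_labels)
# A larger gap suggests the encoder aligns augmented views far better for some
# groups (typically the majority) than for others.
```

Such a score could, in principle, also be added as a regularizer to the contrastive loss during pre-training; again, this is an assumption about the general approach rather than the paper's algorithm.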

Publication
Preprint
Junyuan Hong
Postdoctoral Fellow

My research interest lies at the intersection of human-centered AI and healthcare.
