Efficient priors for self-supervised learning

With Yu Wang, JD AI Research

Efficient priors for self-supervised learning: application and theories

Remarkable progress in self-supervised learning (SSL) has taken place over the past two years across various domains. The goal of SSL methods is to learn useful semantic features without human annotations. In the absence of human-defined labels, we expect the deep network to learn a richer feature structure explained by the data itself rather than one constrained by human knowledge. Nevertheless, self-supervised learning still hinges on strong prior knowledge or human-defined pretext tasks to pretrain the network effectively. These priors can impose some form of consistency between different views of an image, or be based on a pre-defined pretext task such as rotation prediction. This talk will cover our recent progress and new findings on constructing useful priors for self-supervised learning (published in T-PAMI and NeurIPS 2021, respectively), from both theoretical and practical perspectives. We will also introduce the mainstream state-of-the-art self-supervised learning frameworks and the pretext tasks widely used in this field.
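
For readers unfamiliar with pretext tasks, below is a minimal illustrative sketch (not the speakers' method) of the rotation-prediction prior mentioned above: each unlabeled image is rotated by a multiple of 90 degrees and the network is trained to predict which rotation was applied. The encoder, head, and sizes are placeholder assumptions chosen only to make the example self-contained.

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """images: (B, C, H, W). Returns images rotated by 0/90/180/270 degrees
    together with the rotation index (0..3) as a self-supervised label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# Tiny placeholder encoder and 4-way rotation classifier, for illustration only.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)      # stand-in for unlabeled images
x, y = make_rotation_batch(images)
loss = criterion(head(encoder(x)), y)   # pretext loss trains the encoder without human labels
loss.backward()
```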

Join Zoom Link:

https://maths-cam-ac-uk.zoom.us/j/93331132587?pwd=MlpReFY3MVpyVThlSi85TmUzdTJxdz09
Meeting ID: 933 3113 2587
Passcode: 144696
