NERDS LAB


Mine your own view: Self-supervised learning through across-sample prediction

October 22, 2021 by Eva Dyer

Check out our new preprint, where we introduce a method for self-supervised learning and show its promise for building representations of multi-unit neural activity!

Link to preprint: https://arxiv.org/abs/2102.10106

Link to code: https://github.com/nerdslab/myow

Abstract: State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed “views” of a sample. Without sufficient diversity in the transformations used to create views, however, it can be difficult to overcome nuisance variables in the data and build rich representations. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for self-supervised learning that looks within the dataset to define diverse targets for prediction. The idea behind our approach is to actively mine views, finding samples that are neighbors in the representation space of the network, and then predict, from one sample’s latent representation, the representation of a nearby sample. After showing the promise of MYOW on benchmarks used in computer vision, we highlight the power of this idea in a novel application in neuroscience where SSL has yet to be applied. When tested on multi-unit neural recordings, we find that MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and often surpasses the supervised baseline. With MYOW, we show that it is possible to harness the diversity of the data to build rich views and leverage self-supervision in new domains where augmentations are limited or unknown.
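
To make the across-sample prediction idea concrete, here is a minimal PyTorch sketch of how one might mine views as nearest neighbors in representation space and then use them as prediction targets. This is an illustration under assumed names, shapes, and a simple cosine loss, not the implementation in the repository above; the target network, predictor head, and the augmented-view branch of MYOW are omitted.

```python
# A minimal sketch of the view-mining idea described in the abstract, not the
# authors' implementation (see the linked repository for the real MYOW code).
# Tensor names, shapes, and the choice of k are illustrative assumptions.
import torch
import torch.nn.functional as F


def mine_views(online_embed: torch.Tensor, candidate_embed: torch.Tensor, k: int = 1) -> torch.Tensor:
    """For each sample, return the indices of its k nearest neighbors
    (by cosine similarity) in a pool of candidate representations."""
    a = F.normalize(online_embed, dim=-1)     # (B, D) embeddings of the batch
    b = F.normalize(candidate_embed, dim=-1)  # (N, D) candidate pool to mine from
    sim = a @ b.t()                           # (B, N) cosine similarities
    return sim.topk(k, dim=-1).indices        # (B, k) indices of mined neighbors


def across_sample_loss(prediction: torch.Tensor,
                       target_embed: torch.Tensor,
                       mined_idx: torch.Tensor) -> torch.Tensor:
    """Predict, from one sample's latent, the (stop-gradient) representation of
    its mined neighbor; the loss is the negative cosine similarity."""
    target = target_embed[mined_idx[:, 0]].detach()  # targets carry no gradient
    return -F.cosine_similarity(prediction, target, dim=-1).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    online = torch.randn(8, 16)   # online-network projections of a batch
    pool = torch.randn(32, 16)    # target-network projections of candidate samples
    idx = mine_views(online, pool, k=1)
    pred = torch.randn(8, 16)     # output of a predictor head (placeholder)
    print(across_sample_loss(pred, pool, idx).item())
```

In words: instead of relying only on hand-crafted augmentations to create a second view, the mined neighbor itself serves as the target, which is what lets the approach work in domains (like neural recordings) where good augmentations are limited or unknown.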
