NERDS LAB


NeurIPS 2024: Revealing connections between contrastive learning and optimal transport

January 1, 2025 by Eva Dyer

Zihao and Eva traveled to Vancouver, BC to present new work entitled “Your contrastive learning problem is secretly a distribution alignment problem”!

Abstract: Despite the success of contrastive learning (CL) in vision and language, its theoretical foundations and mechanisms for building representations remain poorly understood. In this work, we build connections between noise contrastive estimation losses widely used in CL and distribution alignment with entropic optimal transport (OT). This connection allows us to develop a family of different losses and multistep iterative variants for existing CL methods. Intuitively, by using more information from the distribution of latents, our approach allows a more distribution-aware manipulation of the relationships within augmented sample sets. We provide theoretical insights and experimental evidence demonstrating the benefits of our approach for generalized contrastive alignment. Through this framework, it is possible to leverage tools in OT to build unbalanced losses to handle noisy views and customize the representation space by changing the constraints on alignment. By reframing contrastive learning as an alignment problem and leveraging existing optimization tools for OT, our work provides new insights and connections between different self-supervised learning models in addition to new tools that can be more easily adapted to incorporate domain knowledge into learning.
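To make the connection concrete, here is a minimal sketch (our illustration, not the paper's implementation) contrasting the two normalizations: InfoNCE applies an independent row-wise softmax to a similarity matrix, while the entropic-OT view runs Sinkhorn iterations on the same similarities to produce a transport plan whose rows and columns both satisfy marginal constraints. Function names and hyperparameters below are hypothetical.

```python
import torch

def infonce_coupling(z1, z2, tau=0.1):
    """InfoNCE-style matching: an independent softmax over each row of
    the similarity matrix (assumes z1, z2 are L2-normalized embeddings
    of the two augmented views)."""
    sim = z1 @ z2.T / tau             # (n, n) similarity logits
    return torch.softmax(sim, dim=1)  # rows sum to 1; columns are unconstrained

def sinkhorn_coupling(z1, z2, tau=0.1, n_iters=20):
    """Entropic-OT view: Sinkhorn iterations rescale the same Gibbs
    kernel until rows AND columns match uniform marginals, so the
    matching depends on the whole distribution of latents rather than
    on each anchor in isolation."""
    n, m = z1.size(0), z2.size(0)
    K = torch.exp(z1 @ z2.T / tau)      # Gibbs kernel exp(sim / tau)
    a = torch.full((n,), 1.0 / n)       # uniform source marginal
    b = torch.full((m,), 1.0 / m)       # uniform target marginal
    v = torch.ones(m)
    for _ in range(n_iters):            # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # (near) doubly stochastic plan
```

Under this reading, encouraging the transport plan to concentrate on the true positive pairs gives a contrastive-style alignment objective, and because the plan is built from global marginal constraints, relaxing those constraints (unbalanced OT) is one route to the noisy-view robustness the abstract mentions.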


ICML 2024: Unveiling class disparities with spectral imbalance

July 9, 2024 by Eva Dyer

Chiraag will travel to Vienna this month to present two papers!

Check out the work here:

  • C. Kaushik+, R. Liu+, C-H Lin, A. Khera, M.Y. Jin, W. Ma, V. Muthukumar, E.L. Dyer: Balanced data, imbalanced spectra: Unveiling class disparities with spectral imbalance, to appear at ICML 2024 (+co-first authors)
  • C-H Lin, C. Kaushik, E.L. Dyer+ & V. Muthukumar+: The good, the bad and the ugly sides of data augmentation: An implicit spectral regularization perspective, Journal of Machine Learning Research (JMLR) (Code, Poster) (+co-last authors)
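The spectral-imbalance idea lends itself to a quick diagnostic: even when every class has the same number of samples, the eigenvalue spectra of the per-class feature covariances can differ sharply, and the first paper ties such gaps to disparities in per-class performance. Below is a small illustrative sketch of that diagnostic (our code, not the authors'):

```python
import numpy as np

def per_class_spectra(features, labels):
    """Eigenvalue spectrum of each class's feature covariance.
    Class-balanced counts do not imply matched spectra: a class whose
    top eigenvalues dominate occupies a very different region of
    feature space than one with a flat spectrum."""
    spectra = {}
    for c in np.unique(labels):
        X = features[labels == c]                   # (n_c, d) features for class c
        cov = np.cov(X, rowvar=False)               # (d, d) class covariance
        spectra[c] = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending
    return spectra
```

Comparing these spectra across classes, for example by their top eigenvalue or decay rate, gives a balance check that head counts alone miss.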


ICLR 2024: New work on data-adaptive position embeddings for timeseries transformers

June 3, 2024 by Eva Dyer

Eva traveled to Vienna for ICLR 2024 to present our work on new adaptive position embeddings for timeseries transformers.

Check out the paper!


Check out this new visualization tool for behavior modeling!

May 9, 2024 by Eva Dyer

As part of an ongoing collaboration with the Chris Rodgers lab, we created an interactive visualization tool for exploring behavioral keypoint datasets and the embeddings discovered by our new method for behavior modeling, BAMS!

New paper on the theory of data augmentation in JMLR!

April 8, 2024 by Eva Dyer

Henry and Chiraag’s paper on data augmentation is out in JMLR! Check out the paper!


New paper on data-adaptive latent augmentation to appear at WACV!

January 6, 2024 by Eva Dyer

Ran and Jingyun will travel to Hawaii to present LatentDR at WACV!

Abstract: Despite significant advances in deep learning, models often struggle to generalize well to new, unseen domains, especially when training data is limited. To address this challenge, we propose a novel approach for distribution-aware latent augmentation that leverages the relationships across samples to guide the augmentation procedure. Our approach first degrades the samples stochastically in the latent space, mapping them to augmented labels, and then restores the samples from their corrupted versions during training. This process confuses the classifier in the degradation step and restores the overall class distribution of the original samples, promoting diverse intra-class/cross-domain variability. We extensively evaluate our approach on a diverse set of datasets and tasks, including domain generalization benchmarks and medical imaging datasets with strong domain shift, where we show our approach achieves significant improvements over existing methods for latent space augmentation. We further show that our method can be flexibly adapted to long-tail recognition tasks, demonstrating its versatility in building more generalizable models. Code is at https://github.com/nerdslab/LatentDR.
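As a rough sketch of the degrade-then-restore loop the abstract describes (a conceptual paraphrase under our own assumptions; see the linked repository for the actual implementation), degradation below stochastically mixes each latent toward a shuffled batch-mate, and a restoration module learns to undo that corruption before classification:

```python
import torch

def degrade(z, strength=0.5):
    """Stochastic latent-space corruption: mix each latent toward a
    randomly chosen batch-mate (one plausible degradation; the actual
    LatentDR scheme may differ)."""
    perm = torch.randperm(z.size(0))
    lam = strength * torch.rand(z.size(0), 1)  # per-sample mixing weight
    return (1 - lam) * z + lam * z[perm]

def degrade_restore_step(encoder, restorer, classifier, criterion, x, y):
    """One training step: encode, corrupt the latents, restore them,
    and classify the restored latents, encouraging representations
    that are stable under intra-class / cross-domain variation."""
    z = encoder(x)
    z_restored = restorer(degrade(z))  # restoration undoes the corruption
    return criterion(classifier(z_restored), y)
```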


