NERDS LAB


NeurIPS 2021 – Swap-VAE selected for an Oral!

November 22, 2021 by Eva Dyer

Our new paper on building unsupervised representations of neural activity was accepted for an oral presentation at NeurIPS!

Check out the paper here: https://www.biorxiv.org/content/10.1101/2021.07.21.453285v1.full

Check out the code here: https://github.com/nerdslab/SwapVAE


Abstract: Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
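To make the recipe concrete, here is a minimal PyTorch sketch of the two ingredients the abstract describes: views built by dropping out neurons and jittering in time, and an alignment term that pulls the "content" halves of the two views' latents together. This is an illustrative toy, not the released SwapVAE code: it uses a deterministic autoencoder, omits the variational (KL) and latent-swapping pieces of the full model, and all names and sizes are made up.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def augment(x, drop_p=0.2, max_jitter=2):
        """One view of x (batch, time, neurons): randomly silence a fraction
        of neurons and shift the whole window slightly in time."""
        mask = (torch.rand(x.size(0), 1, x.size(2)) > drop_p).float()
        shift = int(torch.randint(-max_jitter, max_jitter + 1, (1,)))
        return torch.roll(x * mask, shifts=shift, dims=1)

    class TinyModel(nn.Module):
        """Toy encoder/decoder whose latent splits into a 'content' block
        (aligned across views) and a 'style' block (free to differ)."""
        def __init__(self, n_in, d_content=8, d_style=8):
            super().__init__()
            self.d_content = d_content
            self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                     nn.Linear(64, d_content + d_style))
            self.dec = nn.Sequential(nn.Linear(d_content + d_style, 64),
                                     nn.ReLU(), nn.Linear(64, n_in))

    def swap_vae_style_loss(model, x, alpha=1.0):
        v1, v2 = augment(x), augment(x)                  # two transformed views
        z1, z2 = model.enc(v1.flatten(1)), model.enc(v2.flatten(1))
        recon = (F.mse_loss(model.dec(z1), v1.flatten(1)) +
                 F.mse_loss(model.dec(z2), v2.flatten(1)))
        # Instance-specific alignment: the content halves should agree.
        align = F.mse_loss(z1[:, :model.d_content], z2[:, :model.d_content])
        return recon + alpha * align

    # Example: 32 trials, 10 time bins, 50 neurons -> n_in = 10 * 50.
    loss = swap_vae_style_loss(TinyModel(n_in=500), torch.randn(32, 10, 50))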


Michael is awarded a PURA from Georgia Tech!

November 20, 2021 by Eva Dyer

Michael was recently selected to receive a Presidential Undergraduate Research Award (PURA) to support his work in building models of human decision making. Way to go, Michael!


Mine your own view: Self-supervised learning through across-sample prediction

October 22, 2021 by Eva Dyer

Check out our new preprint, where we introduce a method for self-supervised learning and show its promise in building representations of multi-unit neural activity!

Link to preprint: https://arxiv.org/abs/2102.10106

Link to code: https://github.com/nerdslab/myow

Abstract: State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed “views” of a sample. Without sufficient diversity in the transformations used to create views, however, it can be difficult to overcome nuisance variables in the data and build rich representations. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for self-supervised learning that looks within the dataset to define diverse targets for prediction. The idea behind our approach is to actively mine views, finding samples that are neighbors in the representation space of the network, and then predict, from one sample’s latent representation, the representation of a nearby sample. After showing the promise of MYOW on benchmarks used in computer vision, we highlight the power of this idea in a novel application in neuroscience where SSL has yet to be applied. When tested on multi-unit neural recordings, we find that MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and often surpasses the supervised baseline. With MYOW, we show that it is possible to harness the diversity of the data to build rich views and leverage self-supervision in new domains where augmentations are limited or unknown.
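As a sketch of the core loop (mine neighbors in representation space, then predict one sample's representation from another's), the snippet below uses a single encoder and a BYOL-style cosine loss. The released code linked above additionally uses an online/target (momentum) pair of networks and combines mined views with ordinary augmented views; treat everything here, names included, as a simplified assumption.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def mine_views(reps, k=5):
        """For each sample, pick one of its k nearest neighbors (never itself)
        in representation space to serve as its mined view."""
        z = F.normalize(reps, dim=1)
        sim = z @ z.t()                            # cosine similarity, (N, N)
        sim.fill_diagonal_(-float('inf'))          # exclude self-matches
        knn = sim.topk(k, dim=1).indices           # (N, k) neighbor indices
        pick = torch.randint(0, k, (reps.size(0), 1))
        return knn.gather(1, pick).squeeze(1)      # one mined index per sample

    def myow_style_loss(encoder, predictor, x):
        """Predict, from one sample's latent, the representation of a mined neighbor."""
        with torch.no_grad():
            targets = encoder(x)                   # stop-gradient targets
        idx = mine_views(targets)
        pred = F.normalize(predictor(encoder(x)), dim=1)
        targ = F.normalize(targets[idx], dim=1)
        return (2 - 2 * (pred * targ).sum(dim=1)).mean()   # BYOL-style loss

    # Example with toy networks: 128 samples of 96-dimensional activity.
    enc, prd = nn.Linear(96, 32), nn.Linear(32, 32)
    loss = myow_style_loss(enc, prd, torch.randn(128, 96))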


ICIP 2021 – Multi-scale modeling of neural structure in X-ray imagery

September 22, 2021 by Eva Dyer

Our new paper at ICIP presents multi-scale segmentation of brain structure from X-ray microCT image volumes! [Check out our paper here!]

Abstract: Methods for resolving the brain’s microstructure are rapidly improving, allowing us to image large brain volumes at high resolutions. As a result, the interrogation of samples spanning multiple diversified brain regions is becoming increasingly common. Understanding these samples often requires multiscale processing: segmentation of the detailed microstructure and large-scale modeling of the macrostructure. Current brain mapping algorithms often analyze data only at a single scale, and optimization for each scale occurs independently, potentially limiting consistency, performance, and interpretability. In this work we introduce a deep learning framework for segmentation of brain structure at multiple scales. We leverage a modified U-Net architecture with a multi-task learning objective and unsupervised pre-training to simultaneously model both the micro and macro architecture of the brain. We successfully apply our methods to a heterogeneous, three-dimensional, X-ray micro-CT dataset spanning multiple regions in the mouse brain, and show that our approach consistently outperforms another multi-task architecture, and is competitive with strong single-task baselines at both scales.
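For intuition about the multi-task wiring (one shared backbone, one head per scale, a single summed objective), here is a toy 2D sketch. The actual model in the paper is a modified U-Net with unsupervised pre-training applied to 3D volumes; all layer sizes and class counts below are invented for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleSeg(nn.Module):
        """Shared features feeding two heads: fine microstructure labels
        and coarse macro-region labels, both predicted per pixel."""
        def __init__(self, n_micro=3, n_macro=4):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.micro_head = nn.Conv2d(32, n_micro, 1)
            self.macro_head = nn.Conv2d(32, n_macro, 1)

        def forward(self, x):
            h = self.backbone(x)
            return self.micro_head(h), self.macro_head(h)

    def multitask_loss(model, x, y_micro, y_macro, w=0.5):
        """Optimize both scales jointly rather than training separate models."""
        micro, macro = model(x)
        return F.cross_entropy(micro, y_micro) + w * F.cross_entropy(macro, y_macro)

    # Example: one 64x64 tile with per-pixel labels at both scales.
    model = MultiScaleSeg()
    loss = multitask_loss(model, torch.randn(1, 1, 64, 64),
                          torch.randint(0, 3, (1, 64, 64)),
                          torch.randint(0, 4, (1, 64, 64)))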


UAI 2021 – Bayesian optimization for modular black-box systems with switching costs

September 22, 2021 by Eva Dyer

Henry presented his paper on Bayesian optimization at the Conference on Uncertainty in Artificial Intelligence (UAI)! [Check out the paper here!]

Abstract: Most existing black-box optimization methods assume that all variables in the system being optimized have equal cost and can change freely at each iteration. However, in many real-world systems, inputs are passed through a sequence of different operations or modules, making variables in earlier stages of processing more costly to update. Such structure induces a dynamic cost from switching variables in the early parts of a data processing pipeline. In this work, we propose a new algorithm for switch-cost-aware optimization called Lazy Modular Bayesian Optimization (LaMBO). This method efficiently identifies the global optimum while minimizing cost through a passive change of variables in early modules. The method is theoretically grounded, achieving vanishing regret regularized by the switching cost. We apply LaMBO to multiple synthetic functions and a three-stage image segmentation pipeline used in a neuroimaging task, where we obtain promising improvements over existing cost-aware Bayesian optimization algorithms. Our results demonstrate that LaMBO is an effective strategy for black-box optimization capable of minimizing switching costs.
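The cost structure is easy to state in code: because inputs flow through the pipeline in order, changing a variable in module i forces modules i through the end to be re-run, so early variables are the expensive ones to move. Below is a tiny illustration of that switching cost (hypothetical costs and configurations; this is the cost model LaMBO reasons about, not the algorithm itself).

    def switching_cost(prev, new, module_costs):
        """Cost of moving from configuration `prev` to `new` when each
        module's output feeds the next: the first changed module and
        everything downstream of it must be re-run."""
        first_changed = next(
            (i for i, (p, n) in enumerate(zip(prev, new)) if p != n),
            len(module_costs))                 # nothing changed -> free
        return sum(module_costs[first_changed:])

    # Three-stage pipeline; re-running stage i costs module_costs[i].
    module_costs = [10.0, 3.0, 1.0]
    print(switching_cost((0.2, 0.5, 0.9), (0.2, 0.7, 0.9), module_costs))  # 4.0
    print(switching_cost((0.2, 0.5, 0.9), (0.4, 0.5, 0.9), module_costs))  # 14.0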



ICML 2021 – New paper on structured optimal transport!

May 16, 2021 by Eva Dyer

We are excited to present our new approach for structured optimal transport at ICML this year! For more details, check out the preprint (https://arxiv.org/abs/2012.11589) and our GitHub page for code (https://nerdslab.github.io/latentOT/).
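If you want a quick starting point for experiments, here is a plain entropic-OT baseline using the POT library (pip install pot). Note that this is ordinary Sinkhorn between two point clouds, not the structured latent-variable transport introduced in the paper; see the code link above for the real method. All data here is synthetic.

    import numpy as np
    import ot  # POT: Python Optimal Transport

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))              # source samples
    Y = rng.normal(loc=1.0, size=(60, 2))     # shifted target samples
    a = np.full(50, 1 / 50)                   # uniform source weights
    b = np.full(60, 1 / 60)                   # uniform target weights

    M = ot.dist(X, Y)                         # squared-Euclidean cost matrix
    P = ot.sinkhorn(a, b, M, reg=0.1)         # entropic transport plan
    print(P.shape, P.sum())                   # (50, 60), total mass ~1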


