Swap-VAE accepted for an Oral at NeurIPS 2021!

Our new paper on building unsupervised representations of neural activity was accepted for an oral presentation at NeurIPS!

Check out the paper here: https://www.biorxiv.org/content/10.1101/2021.07.21.453285v1.full

Check out the code here:
https://github.com/nerdslab/SwapVAE


Abstract:  Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
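The two augmented views described in the abstract (dropping out neurons and jittering samples in time) could be sketched in NumPy roughly as follows. This is an illustrative sketch only; the function names, dropout rate, and jitter window are assumptions, not taken from the released SwapVAE code.

```python
import numpy as np

def drop_neurons(x, p=0.2, rng=None):
    """Randomly zero out a fraction p of neurons (dropout augmentation)."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape[-1]) >= p
    return x * mask

def temporal_jitter(data, t, max_shift=2, rng=None):
    """Return the sample at time index t jittered by up to max_shift bins."""
    rng = rng or np.random.default_rng()
    shift = rng.integers(-max_shift, max_shift + 1)
    t_new = int(np.clip(t + shift, 0, len(data) - 1))
    return data[t_new]

# Two augmented views of the same underlying brain state:
data = np.random.default_rng(0).poisson(2.0, size=(100, 50))  # (time, neurons)
view1 = drop_neurons(temporal_jitter(data, 10), p=0.2)
view2 = drop_neurons(temporal_jitter(data, 10), p=0.2)
```

Each pair of views then feeds the alignment loss, which encourages the two representations to be similar despite the differing neurons and time bins.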

Mine your own view: Self-supervised learning through across-sample prediction

Check out our new preprint where we introduce a new method for self-supervised learning and show its promise in building representations of multi-unit neural activity!

Link to preprint: https://arxiv.org/abs/2102.10106

Link to code: https://github.com/nerdslab/myow

Abstract:   State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed “views” of a sample. Without sufficient diversity in the transformations used to create views, however, it can be difficult to overcome nuisance variables in the data and build rich representations. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for self-supervised learning that looks within the dataset to define diverse targets for prediction. The idea behind our approach is to actively mine views, finding samples that are neighbors in the representation space of the network, and then predict, from one sample’s latent representation, the representation of a nearby sample. After showing the promise of MYOW on benchmarks used in computer vision, we highlight the power of this idea in a novel application in neuroscience where SSL has yet to be applied. When tested on multi-unit neural recordings, we find that MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and often surpasses the supervised baseline. With MYOW, we show that it is possible to harness the diversity of the data to build rich views and leverage self-supervision in new domains where augmentations are limited or unknown.
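The mining step described above, finding samples that are neighbors in the network's representation space to serve as prediction targets, might look like this minimal NumPy sketch. The function name, the cosine-similarity metric, and the batch sizes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mine_views(anchors, candidates, k=1):
    """For each anchor embedding, return the indices of its k nearest
    candidate embeddings (by cosine similarity) to serve as mined views."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sim = a @ c.T                          # (n_anchors, n_candidates)
    return np.argsort(-sim, axis=1)[:, :k]

rng = np.random.default_rng(0)
z_anchors = rng.normal(size=(8, 16))       # representations of one batch
z_pool = rng.normal(size=(32, 16))         # candidate pool of representations
neighbors = mine_views(z_anchors, z_pool)  # indices of mined views
```

A predictor head is then trained to map each anchor's latent representation to the representation of its mined neighbor.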

ICIP 2021 – Multi-scale modeling of neural structure in X-ray imagery

Check out our new paper at ICIP on multi-scale segmentation of brain structure from X-ray microCT image volumes!  [Check out our paper here!]

Abstract: Methods for resolving the brain’s microstructure are rapidly improving, allowing us to image large brain volumes at high resolutions. As a result, the interrogation of samples spanning multiple diversified brain regions is becoming increasingly common. Understanding these samples often requires multiscale processing: segmentation of the detailed microstructure and large-scale modeling of the macrostructure. Current brain mapping algorithms often analyze data only at a single scale, and optimization for each scale occurs independently, potentially limiting consistency, performance, and interpretability. In this work, we introduce a deep learning framework for segmentation of brain structure at multiple scales. We leverage a modified U-Net architecture with a multi-task learning objective and unsupervised pre-training to simultaneously model both the micro and macro architecture of the brain. We successfully apply our methods to a heterogeneous, three-dimensional, X-ray micro-CT dataset spanning multiple regions in the mouse brain, and show that our approach consistently outperforms another multi-task architecture and is competitive with strong single-task baselines at both scales.
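The multi-task idea in the abstract, a shared encoder feeding one head per scale, can be sketched as a toy forward pass. This is schematic only: the actual model is a modified U-Net, and all weights, dimensions, and class counts below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    """Shared feature extractor (stand-in for the U-Net trunk)."""
    return np.maximum(0, x @ W)            # ReLU features

def micro_head(h, W):
    """Fine-scale head: per-voxel microstructure class logits."""
    return h @ W

def macro_head(h, W):
    """Coarse-scale head: brain-region class logits."""
    return h @ W

x = rng.normal(size=(4, 32))               # 4 patches, 32 input features each
W_enc = rng.normal(size=(32, 16))
W_micro = rng.normal(size=(16, 5))         # 5 hypothetical microstructure classes
W_macro = rng.normal(size=(16, 3))         # 3 hypothetical macro regions

h = shared_encoder(x, W_enc)
micro_logits = micro_head(h, W_micro)
macro_logits = macro_head(h, W_macro)
# The multi-task objective is then a weighted sum of the two per-scale losses.
```

Sharing the encoder is what couples the two scales, so improvements in macro-level context can inform the micro-level segmentation and vice versa.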

UAI 2021 – Bayesian optimization for modular black-box systems with switching costs

Henry presented his paper on Bayesian Optimization at the Conference on Uncertainty in AI (UAI)! [Check out the paper here!]

Abstract: Most existing black-box optimization methods assume that all variables in the system being optimized have equal cost and can change freely at each iteration. However, in many real-world systems, inputs are passed through a sequence of different operations or modules, making variables in earlier stages of processing more costly to update. Such structure induces a dynamic cost from switching variables in the early parts of a data processing pipeline. In this work, we propose a new algorithm for switch-cost-aware optimization called Lazy Modular Bayesian Optimization (LaMBO). This method efficiently identifies the global optimum while minimizing cost through a passive change of variables in early modules. The method is theoretically grounded, achieving vanishing regret under a switching-cost regularization. We apply LaMBO to multiple synthetic functions and a three-stage image segmentation pipeline used in a neuroimaging task, where we obtain promising improvements over existing cost-aware Bayesian optimization algorithms. Our results demonstrate that LaMBO is an effective strategy for black-box optimization capable of minimizing switching costs.
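The dynamic switching cost the abstract describes, where changing a variable in an early module forces everything downstream to be recomputed, can be illustrated with a small sketch. The cost model below is a simplified assumption for illustration, not LaMBO itself.

```python
def switching_cost(prev, new, module_costs):
    """Cost of moving from configuration `prev` to `new` in a modular pipeline.
    Changing a variable in module i forces modules i, i+1, ... to be rerun."""
    for i, (p, n) in enumerate(zip(prev, new)):
        if p != n:
            return sum(module_costs[i:])   # rerun module i and all downstream
    return 0.0

# Three-stage pipeline; stage 0 is the most expensive to rerun.
costs = [10.0, 3.0, 1.0]
cheap = switching_cost((0, 1, 2), (0, 1, 3), costs)   # only last stage changes
expensive = switching_cost((5, 1, 2), (0, 1, 2), costs)  # first stage changes
```

Here `cheap` is 1.0 while `expensive` is 14.0, which is why a switch-cost-aware optimizer prefers to hold early-module variables fixed ("lazily") while exploring later stages.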


New paper on structured optimal transport to appear at ICML!

We are excited to present our new approach for structured optimal transport at ICML this year! For more details, check out the preprint (https://arxiv.org/abs/2012.11589) and our GitHub page for code (https://nerdslab.github.io/latentOT/).

The lab wins its first R01!!

The lab won its first R01 from the NIH! This project is sponsored by the NIH BRAIN Initiative’s Theory, Models, and Methods (TMM) Program. We look forward to doing rockin’ science with our collaborators in the Hengen lab under this award!

The lab wins a McKnight Tech Award!

The lab was selected to receive a McKnight Foundation Technological Innovations in Neuroscience Award to fund our work in neural distribution alignment!! (Article)

A Deep Feature Learning Approach for Mapping the Brain’s Microarchitecture and Organization

Aish’s paper on deep representation learning for neuroanatomy has been submitted!

Check out our preprint on bioRxiv (Link) and a short version of the paper that appeared in a recent ICML Workshop on Scientific Discovery (Link)!

Max is awarded an NSF Graduate Research Fellowship!

Congratulations to Max Dabagia for being awarded an NSF Graduate Research Fellowship! Max will be starting his PhD in the ML-CS program in the fall. Way to go Max!!

NerDS Lab @ NeurIPS

At the main meeting, John presented new results on using optimal transport for distribution alignment. Check out the paper and a website where we discuss applications of the method to neural recordings.

Following the main meeting, Max presented his work on using Wasserstein barycenter regression for connectomics at the Optimal Transport for Machine Learning (OTML) Workshop. The workshop was great; we learned a lot!

