Michael was recently selected to receive a Presidential Undergraduate Research Award (PURA) to support his work in building models of human decision making. Way to go, Michael!
Mine your own view: Self-supervised learning through across-sample prediction
Check out our new preprint where we introduce a new method for self-supervised learning and show its promise in building representations of multi-unit neural activity!
Link to preprint: https://arxiv.org/abs/2102.10106
Link to code: https://github.com/nerdslab/myow
Abstract: State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed “views” of a sample. Without sufficient diversity in the transformations used to create views, however, it can be difficult to overcome nuisance variables in the data and build rich representations. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for self-supervised learning that looks within the dataset to define diverse targets for prediction. The idea behind our approach is to actively mine views, finding samples that are neighbors in the representation space of the network, and then predict, from one sample’s latent representation, the representation of a nearby sample. After showing the promise of MYOW on benchmarks used in computer vision, we highlight the power of this idea in a novel application in neuroscience where SSL has yet to be applied. When tested on multi-unit neural recordings, we find that MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and often surpasses the supervised baseline. With MYOW, we show that it is possible to harness the diversity of the data to build rich views and leverage self-supervision in new domains where augmentations are limited or unknown.
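For readers curious what the across-sample prediction step looks like in practice, here is a minimal PyTorch sketch of the core idea: mine each sample's nearest neighbors in the network's representation space, then predict a mined neighbor's projection from the sample's own latent. The encoder, projector, predictor, dimensions, and number of neighbors are toy placeholders, and the full method in the paper also uses augmented views and a separate target network, so treat this as an illustration rather than the reference implementation in the linked repo.

```python
# Illustrative sketch of MYOW-style across-sample prediction (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(96, 128), nn.ReLU(), nn.Linear(128, 64))  # toy encoder
projector = nn.Linear(64, 32)   # projects representations before prediction
predictor = nn.Linear(32, 32)   # predicts the mined view's projection

def mined_view_loss(x, k=3):
    """Mine neighbors in representation space and predict their projections."""
    z = encoder(x)                                       # (N, 64) latent representations
    with torch.no_grad():
        dist = torch.cdist(z, z)                         # pairwise distances in latent space
        dist.fill_diagonal_(float("inf"))                # a sample cannot be its own view
        nbr_idx = dist.topk(k, largest=False).indices    # k nearest neighbors per sample
        pick = torch.randint(k, (x.size(0),))            # choose one mined view per sample
        j = nbr_idx[torch.arange(x.size(0)), pick]
    online = predictor(projector(z))                     # predict from the sample's own latent
    target = projector(z[j]).detach()                    # the mined neighbor provides the target
    return 2 - 2 * F.cosine_similarity(online, target, dim=-1).mean()

loss = mined_view_loss(torch.randn(16, 96))
```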
ICIP 2021 – Multi-scale modeling of neural structure in X-ray imagery
Our new ICIP paper on multi-scale segmentation of brain structure from X-ray micro-CT image volumes is out! [Check out our paper here!]
Abstract: Methods for resolving the brain’s microstructure are rapidly improving, allowing us to image large brain volumes at high resolutions. As a result, the interrogation of samples spanning multiple diversified brain regions is becoming increasingly common. Understanding these samples often requires multi-scale processing: segmentation of the detailed microstructure and large-scale modeling of the macrostructure. Current brain mapping algorithms often analyze data only at a single scale, and optimization for each scale occurs independently, potentially limiting consistency, performance, and interpretability. In this work, we introduce a deep learning framework for segmentation of brain structure at multiple scales. We leverage a modified U-Net architecture with a multi-task learning objective and unsupervised pre-training to simultaneously model both the micro- and macro-architecture of the brain. We successfully apply our methods to a heterogeneous, three-dimensional X-ray micro-CT dataset spanning multiple regions in the mouse brain, and show that our approach consistently outperforms another multi-task architecture and is competitive with strong single-task baselines at both scales.
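To make the multi-task objective concrete, below is a minimal PyTorch sketch of the general idea: a shared backbone feeding two heads, one for fine-grained microstructure labels and one for coarse macro-scale region labels, trained with a weighted sum of cross-entropy losses. The layer sizes, class counts, and weighting `alpha` are illustrative placeholders, not the modified U-Net architecture or settings used in the paper.

```python
# Minimal sketch of a shared backbone with micro- and macro-scale heads (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSegNet(nn.Module):
    def __init__(self, in_ch=1, micro_classes=4, macro_classes=6):
        super().__init__()
        self.backbone = nn.Sequential(                  # stand-in for a U-Net-style feature extractor
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.micro_head = nn.Conv2d(32, micro_classes, 1)   # per-pixel microstructure labels
        self.macro_head = nn.Conv2d(32, macro_classes, 1)   # per-pixel macro-scale region labels

    def forward(self, x):
        feats = self.backbone(x)
        return self.micro_head(feats), self.macro_head(feats)

def multitask_loss(model, image, micro_target, macro_target, alpha=0.5):
    """Weighted sum of the two segmentation losses; alpha is a placeholder weighting."""
    micro_logits, macro_logits = model(image)
    return alpha * F.cross_entropy(micro_logits, micro_target) + \
           (1 - alpha) * F.cross_entropy(macro_logits, macro_target)

model = MultiScaleSegNet()
loss = multitask_loss(model, torch.randn(2, 1, 64, 64),
                      torch.randint(4, (2, 64, 64)), torch.randint(6, (2, 64, 64)))
```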
UAI 2021 – Bayesian optimization for modular black-box systems with switching costs
Henry presented his paper on Bayesian optimization at the Conference on Uncertainty in Artificial Intelligence (UAI)! [Check out the paper here!]
Abstract: Most existing black-box optimization methods assume that all variables in the system being optimized have equal cost and can change freely at each iteration. However, in many real-world systems, inputs are passed through a sequence of different operations or modules, making variables in earlier stages of processing more costly to update. Such structure induces a dynamic cost from switching variables in the early parts of a data processing pipeline. In this work, we propose a new algorithm for switch-cost-aware optimization called Lazy Modular Bayesian Optimization (LaMBO). This method efficiently identifies the global optimum while minimizing cost through a passive change of variables in early modules. The method is theoretically grounded and achieves vanishing regret when regularized with the switching cost. We apply LaMBO to multiple synthetic functions and a three-stage image segmentation pipeline used in a neuroimaging task, where we obtain promising improvements over existing cost-aware Bayesian optimization algorithms. Our results demonstrate that LaMBO is an effective strategy for black-box optimization that is capable of minimizing switching costs.
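To give a flavor of the problem setting, the toy Python sketch below tracks the cumulative cost of evaluating a two-stage pipeline when changing the early-stage variable is expensive; keeping that variable fixed across consecutive proposals ("lazy" updates) is exactly the kind of saving LaMBO exploits. This is not the LaMBO algorithm itself (no Gaussian process or acquisition function is shown), and the objective, stage names, and costs are all hypothetical.

```python
# Toy illustration of switching costs in a modular pipeline (not the LaMBO algorithm).
import random

STAGE_COST = {"early": 10.0, "late": 1.0}   # hypothetical cost of re-running each stage

def pipeline_objective(early_var, late_var):
    """Stand-in black-box objective spanning two modules."""
    return (early_var - 0.3) ** 2 + 0.5 * (late_var - 0.7) ** 2

def evaluate(proposals):
    """Evaluate proposed (early, late) settings, charging the early-stage cost
    only when the early-module variable actually changes."""
    total_cost, best, prev_early = 0.0, float("inf"), None
    for early_var, late_var in proposals:
        total_cost += STAGE_COST["late"]
        if early_var != prev_early:              # switching the early module is costly
            total_cost += STAGE_COST["early"]
            prev_early = early_var
        best = min(best, pipeline_objective(early_var, late_var))
    return best, total_cost

# Holding the early variable fixed over consecutive proposals keeps the total cost low.
proposals = [(0.2, random.random()) for _ in range(5)] + [(0.4, random.random()) for _ in range(5)]
print(evaluate(proposals))
```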
ICML 2021 – New paper on structured optimal transport to appear at ICML!
We are excited to present our new approach for structured optimal transport at ICML this year! For more details, check out the preprint (https://arxiv.org/abs/2012.11589) and our GitHub page for the code (https://nerdslab.github.io/latentOT/).
The lab wins its first R01 from the NIH!!
The lab won its first R01 from the NIH! This project is sponsored by the NIH BRAIN Initiative’s Theory, Models, and Methods (TMM) Program. We look forward to doing rockin’ science with our collaborators in the Hengen lab under this award!