Author: Eva Dyer

ICIP 2021 – Multi-scale modeling of neural structure in X-ray imagery

Our new paper at ICIP presents multi-scale segmentation of brain structure from X-ray micro-CT image volumes! [Check out the paper here!]

Abstract: Methods for resolving the brain’s microstructure are rapidly improving, allowing us to image large brain volumes at high resolution. As a result, interrogating samples that span multiple diverse brain regions is becoming increasingly common. Understanding these samples often requires multi-scale processing: segmentation of the detailed microstructure and large-scale modeling of the macrostructure. Current brain mapping algorithms often analyze data only at a single scale, and optimization for each scale occurs independently, potentially limiting consistency, performance, and interpretability. In this work, we introduce a deep learning framework for segmentation of brain structure at multiple scales. We leverage a modified U-Net architecture with a multi-task learning objective and unsupervised pre-training to simultaneously model both the micro- and macro-architecture of the brain. We successfully apply our methods to a heterogeneous, three-dimensional X-ray micro-CT dataset spanning multiple regions in the mouse brain, and show that our approach consistently outperforms another multi-task architecture and is competitive with strong single-task baselines at both scales.
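For readers who want a concrete picture of the multi-task setup, here is a minimal sketch in PyTorch: a small U-Net-style encoder/decoder whose shared features feed two output heads, one for fine-scale (micro) segmentation and one for coarse (macro) region labels, trained with a weighted sum of the two losses. The layer sizes, head names, loss weights, and 2D setting are illustrative assumptions on our part, not the architecture from the paper.

```python
# Minimal multi-task U-Net sketch (illustrative; not the paper's exact model).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultiTaskUNet(nn.Module):
    def __init__(self, in_ch=1, n_micro=3, n_macro=4):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        # Two task-specific heads share the same decoder features.
        self.micro_head = nn.Conv2d(32, n_micro, 1)  # fine-scale segmentation
        self.macro_head = nn.Conv2d(32, n_macro, 1)  # coarse region labels

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.micro_head(d1), self.macro_head(d1)

# Multi-task objective: weighted sum of per-scale losses (weight is a guess).
model = MultiTaskUNet()
x = torch.randn(2, 1, 64, 64)
micro_logits, macro_logits = model(x)
loss = nn.functional.cross_entropy(micro_logits, torch.randint(3, (2, 64, 64))) \
     + 0.5 * nn.functional.cross_entropy(macro_logits, torch.randint(4, (2, 64, 64)))
loss.backward()
```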

UAI 2021 – Bayesian optimization for modular black-box systems with switching costs

Henry presented his paper on Bayesian optimization at the Conference on Uncertainty in Artificial Intelligence (UAI)! [Check out the paper here!]

Abstract: Most existing black-box optimization methods assume that all variables in the system being optimized have equal cost and can change freely at each iteration. However, in many real-world systems, inputs are passed through a sequence of different operations or modules, making variables in earlier stages of processing more costly to update. Such structure induces a dynamic cost for switching variables in the early parts of a data processing pipeline. In this work, we propose a new algorithm for switch-cost-aware optimization called Lazy Modular Bayesian Optimization (LaMBO). This method efficiently identifies the global optimum while minimizing cost by changing variables in early modules only sparingly. The method is theoretically grounded, achieving vanishing regret regularized with switching cost. We apply LaMBO to multiple synthetic functions and to a three-stage image segmentation pipeline used in a neuroimaging task, where we obtain promising improvements over existing cost-aware Bayesian optimization algorithms. Our results demonstrate that LaMBO is an effective strategy for black-box optimization that minimizes switching costs.
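The structural idea here is that changing a variable in an early module forces everything downstream to be re-run. The toy sketch below illustrates that switching-cost structure only; it is not the LaMBO algorithm, and the stage costs and variable-to-stage assignment are made-up assumptions.

```python
# Toy illustration of switching costs in a modular pipeline (not LaMBO itself).
# stage_cost[i] is the (assumed) cost of re-running stage i.
stage_cost = [10.0, 3.0, 1.0]

def switch_cost(prev_x, new_x, stage_of):
    """Cost of moving from prev_x to new_x: the earliest changed stage
    and every stage after it must be re-run."""
    changed = [stage_of[i] for i in range(len(new_x)) if prev_x[i] != new_x[i]]
    if not changed:
        return 0.0
    first = min(changed)              # earliest stage that changed
    return sum(stage_cost[first:])    # everything downstream re-runs

# Three variables, one per stage (a simplifying assumption).
stage_of = [0, 1, 2]
x0 = (0.2, 0.5, 0.9)
x1 = (0.2, 0.5, 0.4)   # only the last stage changes -> cheap
x2 = (0.7, 0.5, 0.4)   # the first stage changes -> whole pipeline re-runs
print(switch_cost(x0, x1, stage_of))  # 1.0
print(switch_cost(x1, x2, stage_of))  # 14.0
```

A switch-cost-aware optimizer like LaMBO exploits exactly this asymmetry: it updates cheap late-stage variables freely while being "lazy" about expensive early-stage ones.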


New paper on structured optimal transport to appear at ICML!

We are excited to present our new approach for structured optimal transport at ICML this year! For more details, check out the preprint (https://arxiv.org/abs/2012.11589) and our GitHub page for the code (https://nerdslab.github.io/latentOT/).

The lab wins its first R01!!

The lab won its first R01 from the NIH! This project is sponsored by the NIH BRAIN Initiative’s Theory, Models, and Methods (TMM) Program. We look forward to doing rockin’ science with our collaborators in the Hengen lab under this award!

The lab wins a McKnight Tech Award!

The lab was selected to receive a McKnight Foundation Technological Innovations in Neuroscience Award to fund our work in neural distribution alignment!! (Article)

A Deep Feature Learning Approach for Mapping the Brain’s Microarchitecture and Organization

Aish’s paper on deep representation learning for neuroanatomy has been submitted!

Check out our preprint on bioRxiv (Link) and a short version of the paper that appeared in a recent ICML Workshop on Scientific Discovery (Link)!

Max is awarded an NSF Graduate Research Fellowship!

Congratulations to Max Dabagia for being awarded an NSF Graduate Research Fellowship! Max will start his PhD in the ML-CS program in the fall. Way to go, Max!!

NerDS Lab @ NeurIPS

At the main meeting, John presented new results on using optimal transport for distribution alignment. Check out the paper and a website where we discuss applications of the method to neural recordings.

Following the main meeting, Max presented his work on Wasserstein barycenter regression for connectomics at the Optimal Transport for Machine Learning (OTML) Workshop. The workshop was great; we learned a lot!

Hands on Tech Summer Camp for High School Students

NerDS lab members developed an introductory module on image analysis and deep learning for high school students, and taught it to two groups participating in the HOT Days program organized by the ECE department at Georgia Tech.

Our Python notebook and instructions for using Colaboratory are located on the lab’s GitHub page. (Hands on Tech GitHub Repo)

Deep Learning for Microscopy Course @ MBL

Eva, Aish, and Joe had the opportunity to serve as instructors at the Deep Learning for Microscopy Workshop at the Marine Biological Laboratory (MBL). The course was organized by Jan Funke and Patrick LaRiviere and was funded by the National Center for Brain Mapping at the University of Chicago.
