Methods for quantifying neuroanatomy

The lab is actively developing computer vision, machine learning, and deep learning methods for modeling neural architecture from large-scale neuroimaging datasets, including methods for learning cytoarchitecture (layers in cortical and retinal tissue), mapping and discovering brain areas, and modeling high-resolution spatial maps of whole-brain connectivity.
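
As a purely illustrative sketch of the kind of cytoarchitecture analysis described above (not the lab's published pipeline), the Python snippet below estimates layer boundaries from a one-dimensional cell-density profile by fitting a piecewise-constant model with dynamic programming; the function names, bin counts, and synthetic data are assumptions made for the example.

    # Hypothetical sketch: estimate layer boundaries from a 1-D cell-density
    # profile by fitting a piecewise-constant model (a simplified stand-in for
    # the cytoarchitecture/layer-estimation work described above).
    import numpy as np

    def density_profile(cell_depths, n_bins=200):
        """Bin normalized cell depths (0 = surface, 1 = bottom) into a density profile."""
        counts, _ = np.histogram(cell_depths, bins=n_bins, range=(0.0, 1.0))
        return counts.astype(float)

    def layer_boundaries(profile, n_layers=6):
        """Split the profile into n_layers segments that minimize within-segment
        variance (dynamic programming); returns the interior boundary bins."""
        n = len(profile)
        csum = np.concatenate([[0.0], np.cumsum(profile)])
        csum2 = np.concatenate([[0.0], np.cumsum(profile ** 2)])

        def seg_cost(i, j):  # sum of squared deviations of profile[i:j] from its mean
            s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
            return s2 - s * s / m

        cost = np.full((n_layers + 1, n + 1), np.inf)
        back = np.zeros((n_layers + 1, n + 1), dtype=int)
        cost[0, 0] = 0.0
        for k in range(1, n_layers + 1):
            for j in range(k, n + 1):
                for i in range(k - 1, j):
                    c = cost[k - 1, i] + seg_cost(i, j)
                    if c < cost[k, j]:
                        cost[k, j], back[k, j] = c, i

        bounds, j = [], n
        for k in range(n_layers, 0, -1):  # trace back the optimal split points
            bounds.append(int(back[k, j]))
            j = back[k, j]
        return sorted(bounds)[1:]  # drop the leading zero

    # Synthetic example: three "layers" with different cell densities.
    rng = np.random.default_rng(0)
    depths = np.concatenate([rng.uniform(0.0, 0.3, 300),
                             rng.uniform(0.3, 0.7, 1200),
                             rng.uniform(0.7, 1.0, 500)])
    print(layer_boundaries(density_profile(depths), n_layers=3))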

Related publications:

  • Dyer et al., Quantifying mesoscale neuroanatomy using X-ray microtomography, eNeuro, 2017. (Web, Paper)
  • T.J. LaGrow, M. Moore, J.A. Prasad, A. Webber, M.A. Davenport, E.L. Dyer, Cytoarchitecture and Layer Estimation in High-Resolution Neuroanatomical Images, bioRxiv, July 2018. (Preprint)
  • D. Rolnick, E.L. Dyer, Generative models and abstractions for large-scale neuroanatomy datasets, Current Opinion in Neurobiology, February 2018. (Paper, Current Opinion)
  • M. Dabagia, E.L. Dyer, Barycenters in the brain: An optimal transport approach for modeling connectivity, Optimal Transport in Machine Learning Workshop, NeurIPS, 2019.

Low-dimensional signal models

Unions of subspaces (UoS) are a generalization of single-subspace models that approximate data points as living on multiple subspaces, rather than assuming one global low-dimensional model (as in PCA). Modeling data with a mixture of subspaces yields a more compact and simpler representation, which can lead to better partitioning (clustering) of the data and can aid compression and denoising.
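
As a minimal, hypothetical sketch of the self-expressive idea behind union-of-subspaces clustering (not the exact algorithms in the papers below), the snippet represents each point as a sparse combination of the other points via orthogonal matching pursuit and then partitions the resulting affinity graph with spectral clustering; the sparsity level and synthetic data are assumptions.

    # Hypothetical sketch: sparse self-expressive decomposition for clustering
    # data drawn from a union of subspaces.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def self_expressive_omp(X, k_sparse=5):
        """X: (n_features, n_points). Returns an (n_points, n_points) coefficient
        matrix whose columns express each point in terms of the other points."""
        n = X.shape[1]
        C = np.zeros((n, n))
        Xn = X / np.linalg.norm(X, axis=0, keepdims=True)    # unit-norm columns
        for i in range(n):
            residual, support = Xn[:, i].copy(), []
            for _ in range(k_sparse):
                corr = np.abs(Xn.T @ residual)
                corr[i] = -np.inf                            # never select the point itself
                if support:
                    corr[support] = -np.inf                  # or an already-selected point
                support.append(int(np.argmax(corr)))
                coef, *_ = np.linalg.lstsq(Xn[:, support], Xn[:, i], rcond=None)
                residual = Xn[:, i] - Xn[:, support] @ coef
            C[support, i] = coef
        return C

    # Synthetic union of two 2-D subspaces embedded in 20 dimensions.
    rng = np.random.default_rng(0)
    bases = [np.linalg.qr(rng.standard_normal((20, 2)))[0] for _ in range(2)]
    X = np.hstack([B @ rng.standard_normal((2, 50)) for B in bases])

    C = self_expressive_omp(X)
    W = np.abs(C) + np.abs(C).T                              # symmetric affinity graph
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(W)
    print(labels)                                            # points grouped by subspace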

Related publications:

  • E.L. Dyer, A.C. Sankaranarayanan, and R.G. Baraniuk, Greedy feature selection for subspace clustering, The Journal of Machine Learning Research 14 (1), 2487-2517, September 2013. (Paper)
  • E.L. Dyer, T.A. Goldstein, R. Patel, K.P. Körding, and R.G. Baraniuk, Sparse self-expressive decompositions for dimensionality reduction and clustering (Paper)
  • R.J. Patel, T.A. Goldstein, E.L. Dyer, A. Mirhoseini, and R.G. Baraniuk, Deterministic column sampling for low rank approximation: Nystrom vs. Incomplete Cholesky Decomposition, SIAM Data Mining (SDM) Conference, May 2016. (Paper, Code)

Distribution alignment and optimal transport for neural decoding

Advances in monitoring the activity of large populations of neurons have provided new insights into the collective dynamics of neurons. The lab is working on methods that learn and exploit low-dimensional structure in neural activity for decoding, classification, denoising, and deconvolution.
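
As a hedged illustration of distribution alignment with optimal transport (a simplified stand-in, not the hierarchical method in the NeurIPS paper below), the sketch computes an entropically regularized transport plan between two sets of low-dimensional neural states using Sinkhorn iterations and maps one set onto the other via the barycentric projection of the plan; the toy data and regularization value are assumptions.

    # Hypothetical sketch: align two point clouds with entropic optimal transport.
    import numpy as np

    def sinkhorn_plan(X, Y, reg=0.05, n_iter=500):
        """X: (n, d), Y: (m, d). Returns the (n, m) entropic OT plan for uniform weights."""
        a = np.full(X.shape[0], 1.0 / X.shape[0])            # uniform source weights
        b = np.full(Y.shape[0], 1.0 / Y.shape[0])            # uniform target weights
        M = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
        M = M / M.max()                                      # normalize to avoid underflow
        K = np.exp(-M / reg)
        u = np.ones_like(a)
        for _ in range(n_iter):                              # Sinkhorn scaling updates
            v = b / (K.T @ u)
            u = a / (K @ v)
        return u[:, None] * K * v[None, :]

    # Toy example: the "second session" is a rotated, shifted copy of the first.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2))
    theta = np.pi / 6
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Y = X @ R.T + np.array([2.0, -1.0])

    P = sinkhorn_plan(X, Y)
    X_aligned = (P @ Y) / P.sum(axis=1, keepdims=True)       # barycentric projection
    nn_dist = np.linalg.norm(X_aligned[:, None, :] - Y[None, :, :], axis=-1).min(axis=1)
    print(nn_dist.mean())                                    # aligned points lie near the target cloud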

Related publications:

  • J. Lee, M. Dabagia, E.L. Dyer*, C. Rozell*, Hierarchical Optimal Transport for Multimodal Distribution Alignment, Neural Information Processing Systems (NeurIPS), Dec 2019. (Preprint, Python Code)
  • E.L. Dyer, M. Azar, H.L. Fernandes, M. Perich, L.E. Miller, and K.P. Körding, A cryptography-based approach to brain decoding, Nature Biomedical Engineering, 2017. (Web, Paper)

Large-scale optimization

Optimization problems are ubiquitous in machine learning and neuroscience. The lab works on several problems in non-convex optimization and distributed machine learning.
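
As a hedged, simplified illustration of the data-parallel flavor of distributed learning mentioned above (not the RankMap framework or CoRR algorithm cited below), the sketch simulates several workers that each hold a shard of the data and compute a local gradient, with the shard gradients averaged before every update; the problem, shard count, and step size are assumptions.

    # Hypothetical sketch: data-parallel gradient averaging for least squares,
    # simulated in a single process.
    import numpy as np

    def local_gradient(w, X_shard, y_shard):
        """Gradient of the mean squared error on one worker's shard."""
        return 2.0 * X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4000, 20))
    w_true = rng.standard_normal(20)
    y = X @ w_true + 0.01 * rng.standard_normal(4000)

    n_workers, lr = 8, 0.1
    X_shards = np.array_split(X, n_workers)                  # each worker's data shard
    y_shards = np.array_split(y, n_workers)

    w = np.zeros(20)
    for step in range(200):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
        w -= lr * np.mean(grads, axis=0)                     # average per-worker gradients

    print(np.linalg.norm(w - w_true))                        # should be close to zero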

Related publications:

  • A. Mirhoseini, E.L. Dyer, E. Songhori, R.G. Baraniuk, and F. Koushanfar, RankMap: A platform-aware framework for distributed learning from dense datasets, IEEE Trans. on Neural Networks and Learning Systems, 2017. (Paper, Code)
  • M. Gheshlaghi Azar, E.L. Dyer, and K.P. Körding, Convex Relaxation Regression (CoRR): Black-box optimization of a smooth function by learning its convex envelope, Proc. of the Conference on Uncertainty in Artificial Intelligence, 2016. (Paper)

A visualization of recent publications from the NerDS lab