Author: Eva Dyer

The lab presents two papers at NeurIPS 2022!

The lab had two papers accepted at NeurIPS this year! We are excited to attend the meeting in New Orleans!

EIT – In this work, we introduce the Embedded Interaction Transformer (EIT), a new space-time separable transformer architecture for building representations of neural dynamics. By applying EIT to activity from populations of neurons whose size and ordering may differ across datasets, we show that it can unlock across-animal transfer for neural decoders!
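For readers curious what a space-time separable transformer block might look like, here is a minimal, illustrative PyTorch sketch: attention is factored into a step across neurons within each time bin and a step across time bins within each neuron. This is our own simplified rendering, not the EIT implementation from the paper; the class name, tensor layout, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SpaceTimeSeparableBlock(nn.Module):
    """Illustrative sketch (not the paper's EIT block): attention is factored
    into a spatial step (across neurons, within each time bin) and a
    temporal step (across time bins, within each neuron)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, time, neurons, dim) -- one token per neuron per time bin
        B, T, N, D = x.shape
        # Spatial step: attend across neurons within each time bin; treating
        # neurons as a set means their number and ordering need not match
        # across datasets.
        s = x.reshape(B * T, N, D)
        h = self.norm1(s)
        s = s + self.spatial_attn(h, h, h, need_weights=False)[0]
        x = s.reshape(B, T, N, D)
        # Temporal step: attend across time bins within each neuron.
        t = x.permute(0, 2, 1, 3).reshape(B * N, T, D)
        h = self.norm2(t)
        t = t + self.temporal_attn(h, h, h, need_weights=False)[0]
        return t.reshape(B, N, T, D).permute(0, 2, 1, 3)
```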

MTNeuro – In this work, we introduce a new multi-task benchmark for evaluating models of brain structure across multiple spatial scales and at different levels of abstraction. We provide new baseline models and ways to extract representations from high-resolution (~1 µm), 3D neuroimaging data spanning many regions of interest with diverse anatomy in the mouse brain.

The lab is awarded an NSF CAREER!

The NerDS lab was awarded an NSF CAREER award to further accelerate its study of neural circuits and representation learning from large-scale neural recordings!

Check out more info here:

https://www.bme.gatech.edu/bme/news/eva-dyer-using-nsf-career-award-make-neuron-behavior-connection

Eva is selected as a CIFAR Global Scholar!

Eva Dyer’s work at the intersection of machine learning and neuroscience is getting a boost from the Canadian research organization CIFAR. She is one of 18 early career researchers selected for the new group of CIFAR Azrieli Global Scholars, a two-year program designed to accelerate the participants’ careers with funding, mentoring, and skills development.

More info here:
https://bme.gatech.edu/bme/news/cifar-selects-eva-dyer-global-scholars-research-and-leadership-development-program

Now hiring postdocs!

The NerDS lab is looking for postdocs to work at the intersection of ML and neuroscience! Please reach out to Dr. Dyer with your CV and research interests.

CVPR 2022 – Multi-agent behavior workshop and challenge (Oral!)

Mehdi will present his work on building embeddings of animal behavior at the CVPR Workshop on Multi-agent Behavior in New Orleans!

[*] Link to Paper on arXiv

MIDL 2022 – Building representations of different brain areas through hierarchical point cloud networks

Joy and Ran will present a short paper at Medical Imaging with Deep Learning (MIDL) this year! They develop a hierarchical point cloud-based learning framework that builds representations of brain microstructure captured in volumetric, micron-scale X-ray imaging datasets.

Check out the paper here [*]

NeurIPS 2021 – MYOW selected for an Oral presentation at the SSL Workshop!

Mehdi presented his work on using self-supervised learning for neural activity at the NeurIPS Workshop on Self-Supervised Learning!

More information!

[*] Link to the workshop paper

[*] Link to our full-length arXiv version, which describes the application of our approach in both computer vision and neuroscience.

[*] Link to code for neural datasets!
[*] Link to code for using MYOW for images

NeurIPS 2021 – Swap-VAE selected for an Oral!

Our new paper on building unsupervised representations of neural activity was accepted for an oral presentation at NeurIPS!

Check out the paper here: https://www.biorxiv.org/content/10.1101/2021.07.21.453285v1.full

Check out the code here:
https://github.com/nerdslab/SwapVAE


Abstract:  Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
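As an illustration of the two augmentations mentioned in the abstract (dropping out neurons and jittering samples in time), here is a minimal PyTorch sketch. It is not the released SwapVAE code (see the GitHub link above for that); the function name, default values, and exact jitter scheme are assumptions made for illustration.

```python
import torch

def augment_views(x, dropout_p=0.2, max_jitter=2):
    """Illustrative only: the two augmentations described in the abstract.
    x is a (batch, time, neurons) tensor of firing rates; dropout_p and
    max_jitter are made-up parameter names/values, not the paper's settings."""
    # Neuron dropout: zero out a random subset of neurons for each sample.
    keep = (torch.rand(x.size(0), 1, x.size(2)) > dropout_p).float()
    view = x * keep
    # Temporal jitter: shift each sample by a random number of time bins.
    shifts = torch.randint(-max_jitter, max_jitter + 1, (x.size(0),))
    view = torch.stack([torch.roll(v, int(s), dims=0) for v, s in zip(view, shifts)])
    return view

# Two independently augmented views of the same brain states; the alignment
# loss then pulls their latent representations together.
# view1, view2 = augment_views(x), augment_views(x)
```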

Michael is awarded a PURA from Georgia Tech!

Michael was recently selected to receive a Presidential Undergraduate Research Award (PURA) to support his work in building models of human decision making. Way to go, Michael!

Mine Your Own vieW: Self-supervised learning through across-sample prediction

Check out our new preprint where we introduce a new method for self-supervised learning and show its promise in building representations of multi-unit neural activity!

Link to preprint: https://arxiv.org/abs/2102.10106

Link to code: https://github.com/nerdslab/myow

Abstract:   State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed “views” of a sample. Without sufficient diversity in the transformations used to create views, however, it can be difficult to overcome nuisance variables in the data and build rich representations. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for self-supervised learning that looks within the dataset to define diverse targets for prediction. The idea behind our approach is to actively mine views, finding samples that are neighbors in the representation space of the network, and then predict, from one sample’s latent representation, the representation of a nearby sample. After showing the promise of MYOW on benchmarks used in computer vision, we highlight the power of this idea in a novel application in neuroscience where SSL has yet to be applied. When tested on multi-unit neural recordings, we find that MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and often surpasses the supervised baseline. With MYOW, we show that it is possible to harness the diversity of the data to build rich views and leverage self-supervision in new domains where augmentations are limited or unknown.
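To make the mining step concrete, here is a minimal PyTorch sketch of finding across-sample views as nearest neighbors in representation space. It is not the released myow code (see the GitHub link above); the function name, similarity measure, and the commented-out loss are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def mine_nearest_neighbor(anchors, pool):
    """Illustrative only: for each anchor embedding, return the most similar
    embedding from a candidate pool (assumed not to contain the anchors
    themselves), measured by cosine similarity."""
    a = F.normalize(anchors, dim=-1)   # (B, D) anchor embeddings
    p = F.normalize(pool, dim=-1)      # (M, D) candidate embeddings
    sim = a @ p.t()                    # (B, M) cosine similarities
    idx = sim.argmax(dim=-1)           # nearest pool sample per anchor
    return pool[idx]                   # mined "views" for each anchor

# A predictor head would then map each anchor's representation to the mined
# neighbor's representation, with the mismatch as the training signal, e.g.:
# loss = F.mse_loss(predictor(anchors), mine_nearest_neighbor(anchors, pool).detach())
```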
