Eva Dyer’s work at the intersection of machine learning and neuroscience is getting a boost from the Canadian research organization CIFAR. She is one of 18 early-career researchers selected for the new cohort of CIFAR Azrieli Global Scholars, a two-year program designed to accelerate the participants’ careers with funding, mentoring, and skills development.
The NerDS lab is looking for postdocs to work at the intersection of ML and neuroscience! Please reach out to Dr. Dyer with your CV and research interests.
Joy and Ran will present a short paper at Medical Imaging with Deep Learning (MIDL) this year! Their paper develops a hierarchical point cloud-based learning framework that builds representations of brain microstructure captured in volumetric, micron-scale X-ray imaging datasets.
Check out the paper here [*]
Mehdi presented his work on using self-supervised learning for neural activity at the NeurIPS Workshop on Self-Supervised Learning!
[*] Link to the workshop paper
[*] Link to our full-length arXiv version, which describes the application of our approach in both computer vision and neuroscience.
Our new paper on building unsupervised representations of neural activity was accepted for an oral presentation at NeurIPS!
Check out the paper here: https://www.biorxiv.org/content/10.1101/2021.07.21.453285v1.full
Check out the code here:
Abstract: Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
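The augmentation-and-alignment idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the released Swap-VAE code: the function names, parameters (e.g. `drop_prob`, `max_jitter`), and the cosine-similarity alignment term are illustrative assumptions, shown only to make the "dropped neurons + temporal jitter" views and the representational-similarity objective concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_view(x, drop_prob=0.2, max_jitter=2, rng=rng):
    """Create one augmented view of a neural state sequence.

    x: array of shape (T, N) -- T time bins, N neurons.
    Hypothetical sketch of the two augmentations named in the abstract:
    dropping out neurons and jittering samples in time.
    """
    T, N = x.shape
    # Neuron dropout: zero out a random subset of neuron columns,
    # encouraging invariance to which neurons represent the state.
    keep_mask = rng.random(N) >= drop_prob
    view = x * keep_mask
    # Temporal jitter: shift the sequence by a small random offset,
    # encouraging temporal consistency of the representation.
    shift = int(rng.integers(-max_jitter, max_jitter + 1))
    view = np.roll(view, shift, axis=0)
    return view

def alignment_loss(z1, z2):
    """Instance-wise alignment: negative mean cosine similarity
    between representations of two views of the same input."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1 * z2, axis=1))
```

In the full model these views would pass through an encoder inside a generative (VAE-style) framework, and the alignment loss above would pull the two views' latent codes together; here the loss is applied to raw arrays purely for illustration.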