Bahareh Tolooshams

Postdoctoral Researcher,
CMS Department,
Caltech


I am a postdoctoral researcher in the AI for Science Lab at the California Institute of Technology (Caltech). I received my PhD in May 2023 from the School of Engineering and Applied Sciences at Harvard University, where I was advised by Demba Ba and was an affiliate of the Center for Brain Science. My doctoral dissertation is on Deep Learning for Inverse Problems in Engineering and Science. During my PhD, I also worked as a research intern at Amazon AI and Microsoft. I obtained my BASc with distinction in 2017 from the Department of Electrical and Computer Engineering at the University of Waterloo.

Mentorship and community building: During my time at Harvard University, I actively mentored Harvard College students through the Women in STEM Mentorship program. I was also a mentor at InTouch, a peer-to-peer network that builds community and provides support for graduate students.

Journal club: Geeling Chau and I are reviving the Caltech Neuro+ML journal club in 2024. Come and read papers with us.

Latest news:

  • 10/2023: I won a Rising Stars Award at the Conference on Parsimony and Learning.
  • 10/2023: I was named a Rising Star in Data Science by UChicago.
  • 06/2023: I received the Swartz Foundation Fellowship for Postdoctoral Research in Theoretical Neuroscience.

Research

My research advances artificial intelligence for science and engineering, with a focus on computational neuroscience and the biomedical sciences. I combine the statistical strengths of generative models with the computational efficiency of discriminative models to learn generalizable, interpretable, and robust representations. I am currently working on two main projects.

  • Spatiotemporal generative models of the brain: I develop variants of diffusion generative models to study how the brain solves inverse problems, e.g., removing degradations from images.
  • Functional ultrasound imaging for real-time brain-computer interfaces (BCIs): I develop deep neural operators to enable ultrasound brain imaging from minimal frames and real-time behavioural decoding. This project has the potential to transform behavioural studies in neuroscience and medical applications.

My PhD research used statistical optimization models to design efficient and interpretable deep learning architectures, focusing on the sparse coding and dictionary learning generative model. This work relates to a class of machine learning algorithms referred to in the literature as unrolled learning; a minimal sketch appears after the list below.

  • Deep learning theory and interpretability: I took a model-based optimization approach to improve the theoretical rigor of deep learning. This approach enables the design of provable deep-learning-based algorithms. It also offers interpretability: a mathematical account of the representation of a new test example and a way to identify similar and dissimilar data in the training set. In addition, I have shown that backpropagation, compared to analytic gradients, accelerates learning and enhances model recovery.
  • Deep learning for engineering: Inverse problems are conventionally solved by slow, unscalable optimization techniques. While deep learning can be applied at scale, its generalization is questionable because inverse problems often suffer from data scarcity. I addressed this challenge by designing model-based deep networks that achieve superior performance on inverse problems with accelerated inference. Applications include computational neuroscience, radar sensing, and image denoising.
  • Representations for computational neuroscience: Deep learning can capture neural population dynamics in computational neuroscience, but its black-box nature limits the unsupervised identification of the factors driving neural activity. I addressed this drawback using interpretable sparse representation learning. The approach is versatile enough to deconvolve neural signals across brain areas and data modalities, enabling deep learning applications to scientific questions in computational neuroscience.
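
As a concrete illustration of unrolled learning, below is a minimal sketch of sparse coding obtained by unrolling ISTA iterations into a network, with the dictionary learned end-to-end by backpropagation. The dimensions, step size, and sparsity level are illustrative assumptions, not values from my papers.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Sparse coding network built by unrolling T iterations of ISTA.

    Approximately solves min_z 0.5*||x - D z||^2 + lam*||z||_1 with a
    learned dictionary D; each unrolled iteration acts as one "layer".
    """

    def __init__(self, data_dim, code_dim, num_layers=10, lam=0.1, step=0.1):
        super().__init__()
        self.D = nn.Parameter(0.1 * torch.randn(data_dim, code_dim))  # dictionary (learned)
        self.num_layers = num_layers
        self.lam = lam    # sparsity penalty (illustrative value)
        self.step = step  # gradient step size (illustrative value)

    def forward(self, x):
        z = torch.zeros(x.shape[0], self.D.shape[1], device=x.device)
        for _ in range(self.num_layers):
            residual = x - z @ self.D.T            # data-fidelity residual
            z = z + self.step * residual @ self.D  # gradient step on the quadratic term
            z = torch.sign(z) * torch.relu(z.abs() - self.step * self.lam)  # soft threshold
        return z, z @ self.D.T                     # sparse code and reconstruction

# Toy usage: gradients flow through all unrolled iterations into the dictionary.
model = UnrolledISTA(data_dim=64, code_dim=256)
x = torch.randn(32, 64)
z, x_hat = model(x)
loss = ((x - x_hat) ** 2).mean()  # reconstruction loss
loss.backward()
```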

During my two research internships, I worked on speech enhancement. In one project, I proposed channel-attention to improve multichannel speech enhancement. In another, joint work with Kazuhito Koishida at Microsoft, I proposed a training framework for the perceptual enhancement of stereo speech.