Bahareh Tolooshams

Postdoctoral Researcher,
CMS Department,
Caltech

Our knowledge can only be finite, while our ignorance must necessarily be infinite.
- Karl Popper


I am currently a postdoc in the AI for Science Lab at the California Institute of Technology. I received my PhD in May 2023 from the School of Engineering and Applied Sciences at Harvard University, where I was also an affiliate of the Center for Brain Science. I was advised by Demba Ba during my PhD studies at Harvard. My doctoral dissertation was on Deep Learning for Inverse Problems in Engineering and Science. During my PhD, I also worked as a research intern at Amazon AI and Microsoft. I obtained my BASc with distinction in 2017 from the Department of Electrical and Computer Engineering at the University of Waterloo.

Mentorship and community building: During my time at Harvard University, I actively mentored Harvard College students through the Women in STEM Mentorship program. I was also a mentor at InTouch, a peer-to-peer network that builds community and provides support for graduate students.

Journal club: Geeling Chau and I co-lead the Caltech Neuro+ML journal club. Come and read papers with us.

Latest news:

  • 01/2025: Paper on solving inverse problems with generative diffusion models accepted to ICLR.
  • 09/2024: Paper on interpretable representations for the analysis of neural data accepted to Neuron.
  • 09/2024: I will present a poster at COSYNE.
  • 07/2024: I co-initiated and co-lead the NeurReps Global Speaker Series.
  • 04/2024: I am part of the 2024 NeurReps workshop organizing team.
  • 10/2023: I won a Rising Stars Award at the Conference on Parsimony and Learning.
  • 10/2023: I was named a Rising Star in Data Science by UChicago.
  • 06/2023: I received the Swartz Foundation Fellowship for Postdoctoral Research in Theoretical Neuroscience.

Research

My research leverages inverse problems as a framework for devising efficient, interpretable, and generalizable deep learning methods across science and engineering. The vision is inspired by probabilistic modelling in signal processing and by the hypothesis that the brain, as an efficient and robust intelligence, is an inference machine solving inverse problems to perceive the world. Specifically, my research bridges inverse problems and representation learning, and aims to address three fundamental questions: "what to learn" as representations from data, "how to learn" meaningful representations, and "how to use" representations to solve inverse problems.

Interested in knowing what inverse problems are? They refer to the process of estimating a latent representation (cause) that explains the observed data (effect) in a physical system via a likelihood model. Inverse problems are ill-posed, meaning that the observations alone are inadequate and additional priors are required for successful recovery. Understanding how biological networks leverage and combine the prior and the likelihood plays a crucial role in advancing artificial intelligent systems that solve inverse problems.
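
As a rough sketch in generic notation (the symbols below are illustrative and not tied to any particular paper): given observations $y$ generated from a latent representation $x$ through a forward model $A$ with noise $\varepsilon$, recovery can be posed as maximum a posteriori estimation,

$$y = A(x) + \varepsilon, \qquad \hat{x} = \arg\max_{x} \; \log p(y \mid x) + \log p(x),$$

where the likelihood $p(y \mid x)$ encodes the forward physics and the prior $p(x)$ supplies the additional structure that makes the ill-posed problem solvable.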