Bahareh Tolooshams

Postdoctoral Researcher,
CMS Department,
Caltech

Our knowledge can only be finite, while our ignorance must necessarily be infinite.
- Karl Popper

I am joining the University of Alberta as an Assistant Professor, starting June 2025.


I received my PhD in May 2023 from the School of Engineering and Applied Sciences at Harvard University, where I was also an affiliate of the Center for Brain Science. Before joining the University of Alberta, I was a postdoctoral researcher for two years in the AI for Science Lab at the California Institute of Technology (Caltech), where I held the Swartz Foundation Fellowship in Theoretical Neuroscience. During my PhD, I also worked at Amazon AI and Microsoft as a Research Intern. I hold a BASc degree with distinction from the Department of Electrical and Computer Engineering at the University of Waterloo.

My doctoral dissertation: Deep Learning for Inverse Problems in Engineering and Science.

Mentorship and community building: During my time at Harvard University, I actively mentored Harvard College students through the Women in STEM Mentorship program. I was also a mentor at InTouch, a peer-to-peer network that builds community and provides support for graduate students.

Research

My research broadly covers machine learning, representation learning, generative models, inverse problems, interpretability, computational neuroscience, and optimization.

My research leverages inverse problems as a framework for devising efficient, interpretable, and generalizable deep learning methods across science and engineering. This vision is inspired by probabilistic modelling in signal processing and by the hypothesis that the brain, as an efficient and robust intelligence, is an inference machine solving inverse problems to perceive the world. Specifically, my research bridges inverse problems and representation learning, and addresses three fundamental questions: "what to learn" as representations from data, "how to learn" meaningful representations, and "how to use" representations to solve inverse problems. Sparsity is one useful and interpretable form of structure for such representations.
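To make this concrete, here is a minimal sketch of sparse coding with the iterative soft-thresholding algorithm (ISTA): a sparse representation x is recovered from observations y = Ax + noise. This is a toy illustration only; the random dictionary, noise level, and step size are assumptions chosen for exposition, not settings from my research.

```python
# Toy sketch only: the dictionary A, noise level, and step size are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_latent, n_nonzero = 50, 100, 5

A = rng.standard_normal((n_obs, n_latent)) / np.sqrt(n_obs)   # forward / likelihood model
x_true = np.zeros(n_latent)                                    # sparse latent representation
x_true[rng.choice(n_latent, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)
y = A @ x_true + 0.01 * rng.standard_normal(n_obs)             # noisy observations

lam = 0.05                                   # strength of the sparsity prior
step = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the Lipschitz constant

x = np.zeros(n_latent)
for _ in range(500):
    grad = A.T @ (A @ x - y)                                   # gradient of the data-fit term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding: the prior at work

print("nonzeros in estimate:", int(np.count_nonzero(np.abs(x) > 1e-3)))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The soft-thresholding step is where the prior enters: it drives small coefficients to exactly zero, which is what makes the recovered representation compact and easy to interpret.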

Interested in knowing what inverse problems are? An inverse problem is the task of estimating a latent representation (cause) that explains the observed data (effect) of a physical system through a likelihood model. Inverse problems are ill-posed, meaning that the observations alone are inadequate and additional priors are required for successful recovery. Understanding how biological networks leverage and combine the prior and the likelihood plays a crucial role in advancing artificial intelligence systems that solve inverse problems.
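As a toy illustration of this ill-posedness (the dimensions and the random linear forward model below are assumptions for exposition): when there are more unknowns than observations, many different latent causes explain the data equally well.

```python
# Toy sketch only: dimensions and the random linear forward model are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_latent = 20, 60                       # more unknowns than observations

A = rng.standard_normal((n_obs, n_latent))     # forward (likelihood) model
x_true = rng.standard_normal(n_latent)         # latent cause
y = A @ x_true                                 # observed effect (noise-free for clarity)

# Any null-space component of A can be added to x_true without changing the fit.
_, _, Vt = np.linalg.svd(A)
N = Vt[n_obs:].T                               # orthonormal basis for the null space of A
x_alt = x_true + N @ rng.standard_normal(N.shape[1])

print("residual of x_true:", np.linalg.norm(A @ x_true - y))   # 0
print("residual of x_alt :", np.linalg.norm(A @ x_alt - y))    # also ~0
print("gap between the two explanations:", np.linalg.norm(x_alt - x_true))  # large
```

A prior, such as the sparsity used in the sketch above, is what breaks this tie and singles out a meaningful explanation.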