Bahareh Tolooshams

Assistant Professor,
ECE, University of Alberta
Fellow, Alberta Machine Intelligence Institute (Amii)

[email] [scholar] [github]

Our knowledge can only be finite, while our ignorance must necessarily be infinite.
- Karl Popper


I am an Assistant Professor at the University of Alberta and an Amii Fellow. I received my PhD in May 2023 from the School of Engineering and Applied Sciences at Harvard University, where I was also an affiliate of the Center for Brain Science. Before joining the University of Alberta, I was a postdoctoral researcher in the AI for Science Lab at the California Institute of Technology (Caltech), where I held the Swartz Foundation Fellowship in Theoretical Neuroscience for two years. During my PhD, I also worked at Amazon AI and Microsoft as a Research Intern. I hold a BASc degree with distinction from the Department of Electrical and Computer Engineering at the University of Waterloo.

Mentorship and community building: During my time at Harvard University, I actively mentored Harvard College students through the Women in STEM Mentorship program. I was also a mentor at InTouch, a peer-to-peer network that builds community and provides support for graduate students.

Research

Our research broadly covers machine learning, representation learning, generative models, inverse problems, interpretability, computational neuroscience, and optimization. We leverage inverse problems as a framework for devising efficient, interpretable, and generalizable deep learning methods across science and engineering. This vision is inspired by probabilistic modelling in signal processing and by the hypothesis that the brain, as an efficient and robust intelligence, is an inference machine solving inverse problems to perceive the world.

What are inverse problems? They refer to the process of estimating a latent representation (cause) that explains the data observations (effect) in a physical system via a likelihood model. Inverse problems are ill-posed, meaning that the observations alone are inadequate and additional priors are required for successful recovery. Understanding how biological networks leverage and combine the prior and the likelihood plays a crucial role in advancing artificial intelligence systems that solve inverse problems.
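As a minimal illustration of how a prior and a likelihood combine (a generic textbook sketch, not a method from our papers): for a linear observation model y = A x + noise with fewer observations than unknowns, least squares alone is ill-posed, but adding a Gaussian prior on x yields a well-posed maximum a posteriori (MAP) estimate.

```python
import numpy as np

# Illustrative sketch: recover a latent x from observations y = A x + noise.
rng = np.random.default_rng(0)
n_obs, n_latent = 20, 50              # fewer observations than unknowns: ill-posed
A = rng.standard_normal((n_obs, n_latent))
x_true = rng.standard_normal(n_latent)
y = A @ x_true + 0.01 * rng.standard_normal(n_obs)

# MAP estimate under a Gaussian prior: argmin_x ||y - A x||^2 + lam ||x||^2.
# The prior term lam * I makes the normal equations invertible.
lam = 0.1
x_map = np.linalg.solve(A.T @ A + lam * np.eye(n_latent), A.T @ y)
```

Without the prior term, A.T @ A is rank-deficient (rank at most 20 here) and the system has infinitely many solutions; the prior selects one.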

Representation learning and interpretability: Our research bridges inverse problems and representation learning, addressing three fundamental questions: "what to learn" as representations from data, "how to learn" meaningful representations, and "how to use" representations to solve inverse problems. One useful interpretable form of structure for a representation is sparsity. Example publication:

  • From flat to hierarchical: Extracting sparse representations with matching pursuit
    V. Costa*, T. Fel*, E. S. Lubana*, B. Tolooshams, and D. Ba
    Submitted to NeurIPS 2025
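As a concrete example of sparsity as a representational prior (a classic baseline, not the matching-pursuit method of the paper above): ISTA, the iterative soft-thresholding algorithm, extracts a sparse code z of a signal y under a dictionary D by minimizing ||y - D z||^2 / 2 + lam * ||z||_1.

```python
import numpy as np

def ista(y, D, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for the lasso: sparse code z of signal y."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - y) / L          # gradient step on the data fit
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(1)
D = rng.standard_normal((30, 100))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
z_true = np.zeros(100)
z_true[[3, 40, 77]] = [1.5, -2.0, 1.0]         # a 3-sparse ground-truth code
y = D @ z_true
z_hat = ista(y, D, lam=0.05)
```

The soft-threshold zeroes out small coefficients, so z_hat has only a few active atoms even though the dictionary is overcomplete (100 atoms for a 30-dimensional signal).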
Generative models for inverse problems: Diffusion models represent the state of the art for solving inverse problems, achieved by guiding the sampling process with measurement-likelihood information. Our lab addresses challenges related to the intractability of the likelihood, improving both performance and robustness. Example publications:

  • EquiReg: Equivariance regularized diffusion for inverse problems
    B. Tolooshams*, A. Chandrashekar*, R. Zirvi*, A. Mammadov, J. Yao, C. Wang, and A. Anandkumar
    Submitted to NeurIPS 2025 [paper]

  • Diffusion state-guided projected gradient for inverse problems
    R. Zirvi*, B. Tolooshams*, and A. Anandkumar
    ICLR 2025 [paper]
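The idea of guiding a sampler with measurement-likelihood information can be sketched in a toy setting (a generic Langevin illustration, not the EquiReg or diffusion state-guided methods above): each sampling step combines a prior score with the gradient of the measurement log-likelihood, steering samples toward the posterior p(x | y).

```python
import numpy as np

# Toy likelihood-guided sampler for y = A x + noise with a Gaussian prior.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 10))
x_true = rng.standard_normal(10)
sigma = 0.5
y = A @ x_true + sigma * rng.standard_normal(5)

x = rng.standard_normal(10)                    # initialize from the prior
step = 5e-3
for _ in range(2000):
    prior_score = -x                           # score of a standard Gaussian prior
    lik_score = A.T @ (y - A @ x) / sigma**2   # gradient of the measurement log-likelihood
    x = x + step * (prior_score + lik_score)   # guided drift toward the posterior
    x = x + np.sqrt(2 * step) * rng.standard_normal(10)  # Langevin noise injection
```

Without the lik_score term this is an unconditional sampler from the prior; adding it pulls samples toward measurement-consistent solutions, which is the essence of likelihood guidance in diffusion-based inverse-problem solvers.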
NeuroAI: Our group also develops machine learning methods for neuroscience, with a particular focus on representation and operator learning. Example publications:

  • Interpretable deep learning for deconvolutional analysis of neural signals
    B. Tolooshams*, S. Matias*, H. Wu, S. Temereanca, N. Uchida, V. N. Murthy, P. Masset, and D. Ba
    Neuron 2025 [paper]

  • VARS-fUSI: Variable sampling for fast and efficient functional ultrasound imaging using neural operators
    B. Tolooshams, L. Lin, T. Callier, J. Wang, S. Pal, A. Chandrashekar, C. Rabut, Z. Li, C. Blagden, S. Norman, K. Azizzadenesheli, C. Liu, M. G. Shapiro, R. A. Andersen, and A. Anandkumar
    Submitted to Nature Communications [paper]

  • NOBLE: Neural operator with biologically-informed latent embeddings to capture experimental variability in biological neuron models
    L. Ghafourpour, V. Duruisseaux*, B. Tolooshams*, P. H. Wong, C. A. Anastassiou, and A. Anandkumar
    Submitted to NeurIPS 2025 [coming soon]