Bahareh Tolooshams

Assistant Professor,
ECE, University of Alberta
Fellow, Alberta Machine Intelligence Institute (Amii)
Member, Neuroscience and Mental Health Institute (NMHI)

[email] [scholar] [github]

Our knowledge can only be finite, while our ignorance must necessarily be infinite.
- Karl Popper


I am attending ICML 2025. Reach out if you are interested in joining my research group. We are presenting posters at two workshops: Methods and Opportunities at Small Scale (MOSS) and Building Physically Plausible World Models.

I am an Assistant Professor at the University of Alberta and a Fellow of the Alberta Machine Intelligence Institute (Amii), a world-leading AI institute. I am also a member of the Neuroscience and Mental Health Institute (NMHI). I received my PhD in May 2023 from the School of Engineering and Applied Sciences at Harvard University, where I was also an affiliate of the Center for Brain Science. Before joining the University of Alberta, I was a postdoctoral researcher in the AI for Science Lab at the California Institute of Technology (Caltech), where I held the Swartz Foundation Fellowship in Theoretical Neuroscience for two years. During my PhD, I also worked at Amazon AI and Microsoft as a Research Intern. I hold a BASc degree with distinction from the Department of Electrical and Computer Engineering at the University of Waterloo.

Mentorship and community building: During my time at Harvard University, I actively mentored Harvard College students through the Women in STEM Mentorship program. I was also a mentor at InTouch, a peer-to-peer support network to build community and provide support for graduate students.

Research

Our research broadly covers machine learning, representation learning, generative models, inverse problems, interpretability, computational neuroscience, probabilistic modeling, and optimization. We leverage inverse problems as a framework for devising efficient, interpretable, and generalizable deep learning methods across science and engineering. The vision is inspired by probabilistic modeling in signal processing and by the hypothesis that the brain, as an efficient and robust intelligence, is an inference machine solving inverse problems to perceive the world.

What are inverse problems? They refer to the process of estimating a latent representation (cause) that explains the observed data (effect) in a physical system via a likelihood model. Inverse problems are ill-posed, meaning that the observations alone are inadequate and additional priors are required for successful recovery. Understanding how biological networks leverage and combine the prior and the likelihood plays a crucial role in advancing artificial intelligent systems that solve inverse problems.
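
In probabilistic terms, this combination of likelihood and prior can be written compactly as follows (the notation is ours, for illustration only):

```latex
% Posterior over the latent cause z given the observation y:
% p(y | z) is the likelihood from the physical forward model, p(z) the prior.
p(z \mid y) \;\propto\; p(y \mid z)\, p(z)

% A common point estimate (MAP) balances data fit against the prior:
\hat{z} \;=\; \arg\max_{z} \; \log p(y \mid z) + \log p(z)
```

Ill-posedness means the likelihood term alone has many (near-)maximizers; the prior term is what selects a meaningful solution among them.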

Representation learning and interpretability: Our research bridges inverse problems and representation learning, and aims to address three fundamental questions: "what to learn" as representations from data, "how to learn" meaningful representations, and "how to use" representations to solve inverse problems. One useful, interpretable form of structure for a representation is sparsity. Example publication:

  • From flat to hierarchical: Extracting sparse representations with matching pursuit
    V. Costa*, T. Fel*, E. S. Lubana*, B. Tolooshams, and D. Ba
    Submitted to NeurIPS 2025 [paper]
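
For readers unfamiliar with sparse recovery, here is a minimal matching pursuit sketch: greedily pick the dictionary atom most correlated with the residual and explain it away. The dictionary `D` and signal `y` below are toy examples, not taken from the paper above.

```python
import numpy as np

def matching_pursuit(y, D, n_iters=10):
    """Greedy sparse coding: repeatedly select the (unit-norm) dictionary
    atom most correlated with the residual and add it to the code."""
    residual = y.astype(float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(n_iters):
        correlations = D.T @ residual          # match each atom to the residual
        k = int(np.argmax(np.abs(correlations)))  # best-matching atom
        code[k] += correlations[k]             # update its coefficient
        residual -= correlations[k] * D[:, k]  # explain away that atom
    return code, residual

# Toy example: unit-norm random dictionary, signal built from two atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
y = 3.0 * D[:, 5] - 2.0 * D[:, 40]
code, residual = matching_pursuit(y, D, n_iters=20)
print(np.linalg.norm(residual) < 0.5 * np.linalg.norm(y))  # residual shrinks
```

The resulting `code` is sparse: most of the signal's energy is attributed to a handful of atoms, which is exactly the kind of interpretable structure sparsity provides.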

Generative models for inverse problems: Diffusion models represent the state of the art for solving inverse problems, achieved by guiding the sampling process with measurement-likelihood information. Our lab addresses challenges arising from the intractability of the likelihood during sampling; the outcome is improved performance and robustness. Example publications:

  • EquiReg: Equivariance regularized diffusion for inverse problems
    B. Tolooshams*, A. Chandrashekar*, R. Zirvi*, A. Mammadov, J. Yao, C. Wang, and A. Anandkumar
    Submitted to NeurIPS 2025 [paper]
  • Diffusion state-guided projected gradient for inverse problems
    R. Zirvi*, B. Tolooshams*, and A. Anandkumar
    ICLR 2025 [paper]
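
Schematically, likelihood guidance augments each sampling step with a measurement-gradient term. The toy sketch below illustrates the idea under a linear measurement model `y = A @ x` with a standard-normal prior; the names, the simple score function, and the step sizes are all illustrative, not the lab's method.

```python
import numpy as np

def guided_step(x_t, score, A, y, step=0.02, guidance=10.0):
    """One denoising-style update: follow the prior score, then nudge the
    iterate toward consistency with the measurement y = A @ x."""
    prior_term = score(x_t)                        # score of the prior p(x)
    residual = y - A @ x_t                         # measurement mismatch
    likelihood_term = guidance * (A.T @ residual)  # gradient of log-likelihood
    return x_t + step * (prior_term + likelihood_term)

# Toy setup: random linear measurements of an unknown vector.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4)) / np.sqrt(8)
x_true = rng.standard_normal(4)
y = A @ x_true
score = lambda x: -x  # score function of a standard-normal prior N(0, I)

x = np.zeros(4)
for _ in range(500):
    x = guided_step(x, score, A, y)
print(np.linalg.norm(A @ x - y) < np.linalg.norm(y))  # measurement error shrinks
```

In actual diffusion solvers the prior score comes from a trained network and varies with the noise level; the intractability mentioned above arises because the exact measurement likelihood at intermediate noise levels is not available in closed form.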

NeuroAI: Our group also develops machine learning methods for neuroscience, with a particular focus on representation and operator learning. Example publications:

  • Interpretable deep learning for deconvolutional analysis of neural signals
    B. Tolooshams*, S. Matias*, H. Wu, S. Temereanca, N. Uchida, V. N. Murthy, P. Masset, and D. Ba
    Neuron 2025 [paper]
  • VARS-fUSI: Variable sampling for fast and efficient functional ultrasound imaging using neural operators
    B. Tolooshams, L. Lin, T. Callier, J. Wang, S. Pal, A. Chandrashekar, C. Rabut, Z. Li, C. Blagden, S. Norman, K. Azizzadenesheli, C. Liu, M. G. Shapiro, R. A. Andersen, and A. Anandkumar
    Submitted to Nature Communications [paper]
  • NOBLE: Neural operator with biologically-informed latent embeddings to capture experimental variability in biological neuron models
    L. Ghafourpour, V. Duruisseaux*, B. Tolooshams*, P. H. Wong, C. A. Anastassiou, and A. Anandkumar
    Submitted to NeurIPS 2025 [paper]
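
As a toy illustration of the deconvolutional setting above (not the algorithm of the Neuron paper), sparse deconvolution estimates a sparse event train `z` such that the recorded signal is approximately `z` convolved with a known kernel `h`. The kernel and signal below are invented for illustration; the solver is plain ISTA.

```python
import numpy as np

def ista_deconvolve(y, h, lam=0.05, n_iters=300):
    """Sparse deconvolution by ISTA: find a sparse event train z with
    y ≈ conv(z, h), using an L1 penalty to encourage few events."""
    n = len(y) - len(h) + 1
    H = np.zeros((len(y), n))
    for i in range(n):                    # build the convolution matrix
        H[i:i + len(h), i] = h
    L = np.linalg.norm(H, 2) ** 2         # Lipschitz constant of the gradient
    z = np.zeros(n)
    for _ in range(n_iters):
        grad = H.T @ (H @ z - y)          # gradient of the squared error
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

# Toy signal: two spike events convolved with a decaying kernel.
h = np.exp(-np.arange(5) / 2.0)
z_true = np.zeros(20)
z_true[3], z_true[12] = 1.0, 0.8
y = np.convolve(z_true, h)
z_hat = ista_deconvolve(y, h)
print(int(np.argmax(z_hat)))  # index of the strongest recovered event
```

The recovered `z_hat` concentrates its mass near the true event times, which is what makes such deconvolutional analyses interpretable: each nonzero entry corresponds to a putative event in the neural signal.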

News
  • 06/2025: I joined the University of Alberta as an Assistant Professor.
  • 03/2025: I received the Tianqiao and Chrissy Chen Brain-Machine Interface Grant Award at Caltech.
  • 07/2024: I co-initiated and co-lead the NeurReps Global Speaker Series.
  • 04/2024: I am part of the organizing team for the 2024 NeurReps workshop at NeurIPS.
  • 01/2024: I co-led (with Geeling Chau) the Caltech Neuro+ML journal club during my time at Caltech.
  • 10/2023: I won a Rising Stars Award at the Conference on Parsimony and Learning.
  • 10/2023: I was named a Rising Star by UChicago Data Science.
  • 06/2023: I received the Swartz Foundation Fellowship for Postdoctoral Research in Theoretical Neuroscience.
  • 06/2023: I have joined Anima AI + Science Lab at Caltech for a postdoc.
  • 05/2023: I received my PhD degree from Harvard University.
  • 02/2022: I joined InTouch, a peer-to-peer support network to build community and provide support for graduate students.
  • 05/2021: I joined the Applied Sciences Group at Microsoft as a Research Intern.
  • 07/2019: I received an AWS Machine Learning Research Award (MLRA).
  • 04/2019: I joined Amazon AI as an ML/DSP Research Intern.
  • 04/2019: I received the QBio Student Fellowship.
  • 08/2018: I received the QBio Student Award Competition Fellowship.
  • 09/2017: I started my graduate studies in the CRISP Group at Harvard University.
  • 06/2017: I received my BASc degree with distinction in Electrical Engineering from the University of Waterloo.