NeuBahar Lab is based in the Electrical and Computer Engineering Department at the University of Alberta. NeuBahar is pronounced noʊ-bæˈhɑːr, meaning "new spring", a poetic reference to renewal, growth, and the arrival of spring.
Our research broadly covers machine learning, representation learning, generative models, inverse problems, interpretability, computational neuroscience, probabilistic modeling, and optimization. We leverage inverse problems as a framework for devising efficient, interpretable, and generalizable deep learning methods across science and engineering. This vision is inspired by probabilistic modeling in signal processing and by the hypothesis that the brain, as an efficient and robust intelligence, is an inference machine solving inverse problems to perceive the world.
What are inverse problems? They refer to the process of estimating a latent representation (the cause) that explains data observations (the effect) in a physical system via a likelihood model. Inverse problems are ill-posed, meaning that observations alone are inadequate and additional priors are required for successful recovery. Understanding how biological networks leverage and combine priors and likelihoods plays a crucial role in advancing artificial intelligence systems that solve inverse problems.
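To fix notation, here is a generic textbook formulation (illustrative only, not taken from any particular paper of ours): with observations y, latent cause x, forward operator A, and noise n, maximum-a-posteriori recovery combines the likelihood with a prior R(x).

```latex
% Generic linear inverse problem and its MAP estimate (illustrative notation).
y = A x + n, \qquad n \sim \mathcal{N}(0, \sigma^{2} I)
\qquad
\hat{x}_{\mathrm{MAP}}
  = \arg\min_{x} \;
    \underbrace{\tfrac{1}{2\sigma^{2}} \lVert y - A x \rVert_{2}^{2}}_{\text{likelihood (data fit)}}
  + \underbrace{R(x)}_{\text{prior}}
```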
Representation learning and interpretability: Our research bridges inverse problems and representation learning, addressing three fundamental questions: "what to learn" as representations from data, "how to learn" meaningful representations, and "how to use" representations to solve inverse problems. One useful interpretable form of structure for a representation is sparsity (a minimal sparse-coding sketch follows the publication below). Example publication:
- From flat to hierarchical: Extracting sparse representations with matching pursuit. V. Costa*, T. Fel*, E. S. Lubana*, B. Tolooshams†, and D. Ba†. Submitted to NeurIPS 2025. [paper]
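As a minimal illustration of sparsity as interpretable structure, the sketch below implements the standard iterative soft-thresholding algorithm (ISTA) for sparse coding. The dictionary, signal, and parameters are made up for the example; this is a generic textbook routine, not the method of the publication above.

```python
import numpy as np

def ista(y, D, lam=0.1, step=None, n_iter=100):
    """Sparse coding by ISTA: minimize 0.5*||y - D x||^2 + lam*||x||_1.

    y: (m,) observation, D: (m, k) dictionary, lam: sparsity weight.
    Generic illustrative routine, not the method of any specific paper above.
    """
    if step is None:
        # 1 / Lipschitz constant of the gradient of the data-fidelity term
        step = 1.0 / np.linalg.norm(D, ord=2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)                                   # gradient of the likelihood term
        z = x - step * grad                                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold (sparsity prior)
    return x

# Toy usage: recover a 3-sparse code from noisy measurements.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                                     # unit-norm dictionary atoms
x_true = np.zeros(256)
x_true[[5, 42, 200]] = [1.0, -0.8, 0.5]
y = D @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(y, D, lam=0.05)
print("nonzeros recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))
```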
Generative models for inverse problems: Diffusion models represent the state of the art for solving inverse problems; this is achieved by guiding the sampling process with measurement-likelihood information. Our lab addresses challenges arising from the intractability of the likelihood during sampling, with the outcome of improved performance and robustness (a simplified guidance sketch follows the publications below). Example publications:
- EquiReg: Equivariance regularized diffusion for inverse problems. B. Tolooshams*, A. Chandrashekar*, R. Zirvi*, A. Mammadov, J. Yao, C. Wang, and A. Anandkumar. Submitted to NeurIPS 2025. [paper]
- Diffusion state-guided projected gradient for inverse problems. R. Zirvi*, B. Tolooshams*, and A. Anandkumar. ICLR 2025. [paper]
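To make "guiding the sampling process with measurement-likelihood information" concrete, here is a heavily simplified sketch of likelihood-guided ancestral sampling for a linear inverse problem. The score model, noise schedule, forward operator, and guidance scale are placeholders; this is a schematic of measurement guidance in general, not the implementation of either paper above.

```python
import torch

def guided_diffusion_sample(score_model, A, y, betas, guidance_scale=1.0):
    """Simplified likelihood-guided sampling for y = A x + n.

    score_model(x_t, t) is assumed to predict the noise added at step t (placeholder API).
    A: (m, d) measurement matrix; betas: 1-D tensor of noise-schedule values.
    Schematic sketch only, not the method of a specific paper above.
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(A.shape[1])                          # start from pure noise
    for t in reversed(range(len(betas))):
        x = x.detach().requires_grad_(True)
        eps = score_model(x, t)                          # predicted noise at step t
        # Tweedie-style estimate of the clean signal from the noisy state x_t
        x0_hat = (x - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        # Measurement-likelihood term, differentiated w.r.t. the current state
        residual = torch.sum((y - A @ x0_hat) ** 2)
        grad = torch.autograd.grad(residual, x)[0]
        # Unconditional ancestral update ...
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # ... followed by a gradient step pulling the sample toward the measurements
        x = x - guidance_scale * grad
    return x.detach()
```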
NeuroAI: Our group also develops machine learning methods for neuroscience, with a particular focus on representation and operator learning. Example publications:
- Interpretable deep learning for deconvolutional analysis of neural signals. B. Tolooshams*, S. Matias*, H. Wu, S. Temereanca, N. Uchida, V. N. Murthy, P. Masset†, and D. Ba†. Neuron 2025. [paper]
- VARS-fUSI: Variable sampling for fast and efficient functional ultrasound imaging using neural operators. B. Tolooshams, L. Lin, T. Callier, J. Wang, S. Pal, A. Chandrashekar, C. Rabut, Z. Li, C. Blagden, S. Norman, K. Azizzadenesheli, C. Liu, M. G. Shapiro, R. A. Andersen, and A. Anandkumar. Submitted to Nature Communications. [paper]
- NOBLE - Neural operator with biologically-informed latent embeddings to capture experimental variability in biological neuron models. L. Ghafourpour, V. Duruisseaux*, B. Tolooshams*, P. H. Wong, C. A. Anastassiou, and A. Anandkumar. Submitted to NeurIPS 2025. [paper]
- coming soon!
- coming soon!
- Khoi Xuan Nguyen - July 2025 to present
- Luca Ghafourpour (visiting master's student from ETH) - Jan. 2025 to present
- Ailsa Shen (visiting undergraduate student from Caltech) - Feb. 2025 to present
- Siddhesh Salphale (visiting undergraduate student from IIT Kharagpur) - Dec. 2024 to present
- Valérie Costa (visiting master's student from EPFL) - Oct. 2024 to present
- Aditi Chandrashekar (visiting undergraduate student from Caltech) - Sept. 2023 to May 2025
- Rayhan Zirvi (visiting undergraduate student from Caltech) - Feb. 2024 to May 2025
- Sanvi Pal (visiting undergraduate student from Caltech) - June to Dec. 2024
- Bobby Wang (visiting undergraduate student from Caltech) - Feb. to Dec. 2024
- Freya Shah (visiting undergraduate student from Ahmedabad University) - June to Aug. 2024
- Austin Wang (visiting undergraduate student from Caltech) - July to Dec. 2023
- Max Guo (visiting undergraduate student from Harvard) - Jan. to Dec. 2020
- Hyeon-Jae Seo (visiting undergraduate student from Harvard) - Apr. to Nov. 2019
- Thomas Chang (visiting undergraduate student from Harvard) - Jan. to May 2019