 03/2023: I gave a talk at the COSYNE main meeting.
 02/2023: I defended my PhD.
 01/2023: I submitted a paper to ICML 2023.
 03/2022: I received the GSAS Student Council Spring Conference Grant at Harvard University.
 02/2022: I joined InTouch, a peer-to-peer support network that builds community and provides support for graduate students.
 05/2021: I joined the Applied Sciences Group at Microsoft as a Research Intern.
 07/2019: I received the AWS Machine Learning Research Award (MLRA).
 04/2019: I joined Amazon AI as an ML/DSP Research Intern.
 04/2019: I received the QBio Student Fellowship.
 08/2018: I received the QBio Student Award Competition Fellowship.
 07/2018: I received a travel grant for MLSP 2018.
 09/2017: I started my graduate studies in the CRISP Group at Harvard University.
 06/2017: I received my BASc degree with distinction in Electrical Engineering from the University of Waterloo.
I received my BASc degree with distinction in Electrical Engineering from the University of Waterloo in 2017. I am a PhD candidate at Harvard University, advised by Demba Ba. I spent summer 2019 at Amazon AI as an Applied Scientist Intern and summer 2021 at Microsoft as a Research Intern in the Applied Sciences Group. I am the recipient of several awards, including the Machine Learning Research Award from AWS and the QBio Student Fellowship from Harvard University.
Mentorship and community building: I actively mentor Harvard College students through the Women in STEM Mentorship program. I am also a mentor at InTouch, a peer-to-peer support network that builds community and provides support for graduate students.
My research interests lie at the intersection of machine learning, optimization, statistical learning, and computational neuroscience. I offer an optimization-based signal processing perspective for designing and analyzing deep learning architectures. In particular, I am interested in representation learning and probabilistic generative models for developing deep, interpretable, and efficient neural architectures. My research relates to a class of machine learning algorithms referred to in the literature as unrolled learning.
My PhD research is divided into three tracks:
 Deep learning theory for model recovery and interpretability: I use a model-based optimization approach to improve the theoretical rigor of deep learning. This approach enables the design of provable deep-learning-based algorithms. Moreover, it offers interpretability: it provides mathematical reasoning for the representation of a new test example and identifies similar and dissimilar examples in the training set. For deep sparse-coding-based networks, I have shown that backpropagation not only accelerates learning but also provides better model-recovery guarantees. Check out this work on the interpretability of deep unrolled networks as well as their theoretical properties for model recovery.
 Advancement of inverse problems in engineering: Inverse problems are conventionally solved by slow, unscalable optimization techniques. Deep learning exhibits superior performance in solving inverse problems at scale; however, inverse problems often suffer from data scarcity. Hence, a question arises: how can we impose an inductive bias on deep architectures to enhance their generalization in unsupervised or data-scarce inverse problems? My research addresses this question by designing structured deep networks that optimize a statistical model, demonstrating superior performance in solving inverse problems with accelerated inference. In my recent work, I showed superior generalization in the data-limited regime in radar sensing. In my ICML publication, I demonstrated the efficiency of my approach for Poisson image denoising, with performance competitive with the state of the art.
 Computational neuroscience: Deep learning can capture neural population dynamics in computational neuroscience. The black-box nature of deep learning, however, limits the unsupervised identification of the factors driving neural activity. My research addresses this consequential drawback using interpretable learning: I associate the hidden network representations with a human-understandable model, linking them directly to stimuli and neural activity. My framework has deconvolved, for the first time, the single-trial activity of dopamine neurons into interpretable components in this abstract. Overall, this track aims to enable deep-learning applications that help answer scientific questions in computational neuroscience. I closely collaborate with Paul Masset from the Murthy Lab.
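To give a flavor of what unrolling means in practice, here is a minimal, generic NumPy sketch (not the exact architecture from any of the papers above): the iterations of ISTA for sparse coding become the layers of a feed-forward network, and in a learned, LISTA-style variant the weights W1 and W2 below would become trainable parameters fit by backpropagation.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_ista(y, D, lam=0.1, n_layers=200):
    """Sparse coding of y in dictionary D via unrolled ISTA.

    Each loop iteration is one 'layer'. In a learned (LISTA-style)
    network, W1, W2, and the threshold would be trainable parameters
    initialized as below and tuned by backpropagation.
    """
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    W1 = D.T / L                                # input (encoder) weights
    W2 = np.eye(D.shape[1]) - (D.T @ D) / L     # recurrent weights
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W2 @ x + W1 @ y, lam / L)
    return x

# Tiny demo: recover a sparse code from a noiseless measurement.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = unrolled_ista(y, D, lam=0.05)
print(np.linalg.norm(y - D @ x_hat) / np.linalg.norm(y))  # small relative residual
```

The point of unrolling is that the layer weights no longer need to equal their ISTA initialization: training them on data yields faster, more accurate encoders while keeping the sparse-coding model, and hence the interpretability, intact.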
During my two fantastic research internships, I worked on speech enhancement. At Amazon, I proposed a channel-attention mechanism to improve multichannel speech enhancement. In another publication, joint work with Kazuhito Koishida from Microsoft, I proposed a training framework for the perceptual enhancement of stereo speech.

Bayesian unrolling: scalable, inverse-free maximum likelihood estimation of latent Gaussian models
A. Lin, B. Tolooshams, Y. Atchadé, and D. Ba
Submitted 2023 
Learning filter-based compressed blind-deconvolution
B. Tolooshams*, S. Mulleti*, D. Ba, and Y. C. Eldar
Submitted to IEEE Transactions on Signal Processing 2022 [paper] 
Discriminative reconstruction via simultaneous dense and sparse coding
A. Tasissa, E. Theodosis, B. Tolooshams, and D. Ba
Submitted to Information and Inference: A Journal of the IMA 2022 [paper] [code] 
Stable and interpretable unrolled dictionary learning
B. Tolooshams and D. Ba
TMLR 2022 [paper] [code] 
A training framework for stereo-aware speech enhancement using deep neural networks
B. Tolooshams and K. Koishida
IEEE ICASSP 2022 [paper] [slides] [poster] 
On the convergence of group-sparse autoencoders
E. Theodosis, B. Tolooshams*, P. Tankala*, A. Tasissa, and D. Ba
arXiv 2021 [paper] 
Gaussian process convolutional dictionary learning
A. H. Song, B. Tolooshams, and D. Ba
IEEE Signal Processing Letters 2021 [paper] 
Unfolding neural networks for compressive multichannel blind deconvolution
B. Tolooshams*, S. Mulleti*, D. Ba, and Y. C. Eldar
IEEE ICASSP 2021 [paper] [slides] [poster] 
Deep residual autoencoders for expectation-maximization-inspired dictionary learning
B. Tolooshams, S. Dey, and D. Ba
IEEE Transactions on Neural Networks and Learning Systems 2021 [paper] [code] 
Convolutional dictionary learning based autoencoders for natural exponential-family distributions
B. Tolooshams*, A. H. Song*, S. Temereanca, and D. Ba
ICML 2020 [paper] [code] [slides] 
Channel-attention dense U-Net for multichannel speech enhancement
B. Tolooshams, R. Giri, A. H. Song, U. Isik, and A. Krishnaswamy
IEEE ICASSP 2020 [paper] 
Convolutional dictionary learning in hierarchical networks
J. Zazo, B. Tolooshams, and D. Ba
IEEE CAMSAP 2019 [paper] 
RandNet: deep learning with compressed measurements of images
T. Chang*, B. Tolooshams*, and D. Ba
IEEE MLSP 2019 [paper] [code] [poster] 
Scalable convolutional dictionary learning with constrained recurrent sparse autoencoders
B. Tolooshams, S. Dey, and D. Ba
IEEE MLSP 2018 [paper] [code] 
Robustness of frequency division technique for online myoelectric pattern recognition against contraction-level variation
B. Tolooshams and N. Jiang
Frontiers in Bioengineering and Biotechnology 2017 [paper]

Interpretable deep learning for deconvolution of multiplexed neural signals
B. Tolooshams, S. Matias, H. Wu, N. Uchida, V. N. Murthy, P. Masset, and D. Ba
COSYNE 2023 [paper] 
Interpretable unrolled dictionary learning networks
B. Tolooshams and D. Ba
DeepMath 2022 [paper] [slides] 
Unsupervised sparse deconvolutional learning of features driving neural activity
B. Tolooshams, H. Wu, N. Uchida, V. N. Murthy, P. Masset, and D. Ba
COSYNE 2022 [paper] [poster] 
Unsupervised learning of a dictionary of neural impulse responses from spiking data
B. Tolooshams, H. Wu, P. Masset, V. N. Murthy, and D. Ba
COSYNE 2021 [paper] [poster] 
Convolutional dictionary learning of stimulus from spiking data
A. H. Song*, B. Tolooshams*, S. Temereanca, and D. Ba
COSYNE 2020 [paper] [poster]

Interpretable deep learning for deconvolution of multiplexed neural signals
Talk, Computational and Systems Neuroscience (COSYNE), 2023 
Deep learning for inverse problems in engineering and science
Talk, Healthy ML Group at MIT, 2023 
Deep learning for inverse problems in engineering and science
Talk, My PhD Defense at Harvard University, 2023 
Interpretable multimodal deconvolutional representation learning
Talk, MURI annual meeting at ARL, 2023
Deep representation learning for computational neuroscience
Talk, DiCarlo Lab at MIT, 2022 
Interpretable unrolled dictionary learning networks
Talk, Conference on the Mathematical Theory of Deep Neural Networks (DeepMath), 2022 
Design interpretable and efficient neural architectures for science and engineering
Talk, Anima AI + Science Lab at Caltech, 2022 
Deconvolution of multiplexed neural signals using interpretable deep learning
Talk, Uchida Lab at Harvard University, 2022 
Deep unrolled learning using bilevel optimizations
Talk, Vector Institute, 2022 
Deep unrolling for inverse problems
Talk, Poggio Lab at MIT, 2021 
Unfolded neural networks for implicit acceleration of dictionary learning
Talk, Amirkabir Artificial Intelligence Student Summit (AAISS) at Amirkabir University of Technology, 2021 
Perceptual stereo speech enhancement
Presentation, Applied Sciences Group at Microsoft, 2021 
Modelbased deep learning
Tutorial, IEEE ICASSP Conference, 2021 
Introduction to deep learning for computational neuroscience
Workshop, Neurosur, 2021 [github] 
Dictionary learning based autoencoders for inverse problems
Talk, Decision Theory (APMTH 231) at Harvard University, 2021
On the relationship between dictionary learning and sparse autoencoders
Talk, Computational and Applied Math Seminar at Tufts University, 2020 
On the relationship between dictionary learning and sparse autoencoders
Talk, Pierre E. Jacob's Group at Harvard University, 2020 
Multichannel endtoend neural architectures for speech enhancement
Presentation, Amazon AI (AWS), 2019
Autoencoders for unsupervised source separation
Talk, Decision Theory (APMTH 231) at Harvard University, 2019
Statespace models and deep deconvolutional networks
Talk, CRISP Group at Harvard University, 2018