 11/2022: I submitted an abstract to COSYNE.
 03/2022: I have received the GSAS Student Council Spring Conference Grant at Harvard University.
 02/2022: I joined InTouch, a peer-to-peer support network that builds community and provides support for graduate students.
 05/2021: I joined the Applied Sciences Group at Microsoft as a Research Intern.
 07/2019: I received an AWS Machine Learning Research Award (MLRA).
 04/2019: I joined Amazon AI as an ML/DSP Research Intern.
 04/2019: I received the QBio Student Fellowship.
 08/2018: I received the QBio Student Award Competition Fellowship.
 07/2018: I received a travel grant for MLSP 2018.
 09/2017: I started my graduate studies in the CRISP Group at Harvard University.
 06/2017: I received my B.A.Sc. degree with distinction in Electrical Engineering from the University of Waterloo.
I am a PhD candidate at Harvard University, advised by Demba Ba. I received my B.A.Sc. degree with distinction in Electrical Engineering from the University of Waterloo in 2017. I spent summer 2019 at Amazon AI as an Applied Scientist Intern and summer 2021 in the Applied Sciences Group at Microsoft as a Research Intern. I am the recipient of awards including the Machine Learning Research Award from AWS and the QBio Student Fellowship from Harvard University.
Mentorship and community building: I actively mentor Harvard College students through the Women in STEM Mentorship program. I am also a mentor at InTouch, a peer-to-peer support network that builds community and provides support for graduate students.
My research interests are at the intersection of machine learning, optimization, statistical learning, and computational neuroscience.
My PhD research is divided into three tracks:
 Interpretable deep learning: I am interested in a class of machine learning algorithms referred to as unrolled learning, which builds deep, interpretable, and efficient inference models by unrolling iterative optimization algorithms or generative models into neural architectures. For deep sparse coding unrolled networks, we can derive a mathematical relation between the network weights, its predictions, and the training data. This relation helps us reason about the representation of a new test example and extract similar and dissimilar examples from the training set. Check out this work on the interpretability of deep unrolled networks as well as their theoretical properties for model recovery. Moreover, I have demonstrated the efficiency and state-of-the-art competitive performance of such architectures on the supervised task of Poisson image denoising in my recent ICML publication.
 Computational neuroscience: I develop interpretable networks to learn the features driving neural activity. The networks are based on unrolled learning, which makes them applicable to data-limited regimes such as single neurons. We studied piriform cortex neurons in this abstract, and showcased a sparse deconvolutional method for the analysis of dopaminergic neurons in this abstract. For this track, I closely collaborate with Paul Masset from the Murthy Lab.
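Both tracks build on the same primitive: unrolling the iterations of a sparse-coding solver such as ISTA into network layers whose weights are tied to a dictionary. The NumPy sketch below is a minimal illustration of that idea, not the architecture from any specific paper; the dictionary size, step size, regularization weight, and number of unrolled layers are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (soft shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(y, W, num_layers, lam=0.01):
    """Each 'layer' applies one ISTA step
    x <- S_{step*lam}(x - step * W^T (W x - y)),
    so the network weights are tied to the dictionary W."""
    step = 1.0 / np.linalg.norm(W, 2) ** 2  # 1/L with L = ||W||_2^2
    x = np.zeros(W.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x - step * W.T @ (W @ x - y), step * lam)
    return x

# Toy usage: recover a 2-sparse code from a random dictionary.
rng = np.random.default_rng(0)
W = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]
y = W @ x_true
x_hat = unrolled_ista(y, W, num_layers=1000)
```

In the unrolled-learning setting, the dictionary (and possibly the step size and threshold) would be treated as trainable parameters and fit by backpropagation rather than held fixed as here.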
During my two fantastic research internships, I worked on speech enhancement. At Amazon, I proposed channel-attention to improve multichannel speech enhancement. In another publication, a joint work with Kazuhito Koishida from Microsoft, I proposed a training framework for the perceptual enhancement of stereo speech.
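The channel-attention idea can be sketched, squeeze-and-excitation style, as pooling each channel, passing the summary through a small gating network, and rescaling the channels by the resulting weights. The sketch below is an illustrative toy with random, untrained gate weights, not the channel-attention dense U-Net from the paper.

```python
import numpy as np

def channel_attention(feats, seed=0):
    """Toy squeeze-and-excitation-style channel attention.
    feats: array of shape (channels, time)."""
    rng = np.random.default_rng(seed)
    C = feats.shape[0]
    # Squeeze: summarize each channel by its average over time.
    z = feats.mean(axis=1)                           # shape (C,)
    # Excitation: a tiny two-layer gate (random, untrained weights here).
    W1 = rng.standard_normal((C, C)) / np.sqrt(C)
    W2 = rng.standard_normal((C, C)) / np.sqrt(C)
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # sigmoid in (0, 1)
    # Rescale: weight each channel by its attention score.
    return feats * s[:, None]
```

In a trained network the gate weights would be learned jointly with the rest of the enhancement model, so informative channels receive scores near one and noisy ones are suppressed.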

Learning filter-based compressed blind deconvolution
B. Tolooshams*, S. Mulleti*, D. Ba, and Y. C. Eldar
Submitted to IEEE Transactions on Signal Processing 2022 [paper] 
Discriminative reconstruction via simultaneous dense and sparse coding
A. Tasissa, E. Theodosis, B. Tolooshams, and D. Ba
Submitted to Information and Inference: A Journal of the IMA 2022 [paper] [code] 
Stable and interpretable unrolled dictionary learning
B. Tolooshams and D. Ba
TMLR 2022 [paper] [code] 
A training framework for stereo-aware speech enhancement using deep neural networks
B. Tolooshams and K. Koishida
IEEE ICASSP 2022 [paper] [slides] [poster] 
On the convergence of group-sparse autoencoders
E. Theodosis, B. Tolooshams*, P. Tankala*, A. Tasissa, and D. Ba
arXiv 2021 [paper] 
Gaussian process convolutional dictionary learning
A. H. Song, B. Tolooshams, and D. Ba
IEEE Signal Processing Letters 2021 [paper] 
Unfolding neural networks for compressive multichannel blind deconvolution
B. Tolooshams*, S. Mulleti*, D. Ba, and Y. C. Eldar
IEEE ICASSP 2021 [paper] [slides] [poster] 
Deep residual autoencoders for expectation maximization-inspired dictionary learning
B. Tolooshams, S. Dey, and D. Ba
IEEE Transactions on Neural Networks and Learning Systems 2021 [paper] [code] 
Convolutional dictionary learning based autoencoders for natural exponential-family distributions
B. Tolooshams*, A. H. Song*, S. Temereanca, and D. Ba
ICML 2020 [paper] [code] [slides] 
Channel-attention dense U-Net for multichannel speech enhancement
B. Tolooshams, R. Giri, A. H. Song, U. Isik, and A. Krishnaswamy
IEEE ICASSP 2020 [paper] 
Convolutional dictionary learning in hierarchical networks
J. Zazo, B. Tolooshams, and D. Ba
IEEE CAMSAP 2019 [paper] 
RandNet: deep learning with compressed measurements of images
T. Chang*, B. Tolooshams*, and D. Ba
IEEE MLSP 2019 [paper] [code] [poster] 
Scalable convolutional dictionary learning with constrained recurrent sparse autoencoders
B. Tolooshams, S. Dey, and D. Ba
IEEE MLSP 2018 [paper] [code] 
Robustness of frequency division technique for online myoelectric pattern recognition against contraction-level variation
B. Tolooshams and N. Jiang
Frontiers in Bioengineering and Biotechnology 2017 [paper]

Interpretable unrolled dictionary learning networks
B. Tolooshams and D. Ba
DeepMath 2022 [paper] [slides] 
Unsupervised sparse deconvolutional learning of features driving neural activity
B. Tolooshams, H. Wu, N. Uchida, V. N. Murthy, P. Masset, and D. Ba
COSYNE 2022 [paper] [poster] 
Unsupervised learning of a dictionary of neural impulse responses from spiking data
B. Tolooshams, H. Wu, P. Masset, V. N. Murthy, and D. Ba
COSYNE 2021 [paper] [poster] 
Convolutional dictionary learning of stimulus from spiking data
A. H. Song*, B. Tolooshams*, S. Temereanca, and D. Ba
COSYNE 2020 [paper] [poster]

Interpretable unrolled dictionary learning networks
Talk, Conference on the Mathematical Theory of Deep Neural Networks (DeepMath), 2022 
Design interpretable and efficient neural architectures for science and engineering
Talk, Anima AI + Science Lab at Caltech, 2022 
Deconvolution of multiplexed neural signals using interpretable deep learning
Talk, Uchida Lab at Harvard University, 2022 
Deep unrolled learning using bilevel optimization
Talk, Vector Institute, 2022 
Deep unrolling for inverse problems
Talk, Poggio Lab at MIT, 2021 
Unfolded neural networks for implicit acceleration of dictionary learning
Talk, Amirkabir Artificial Intelligence Student Summit (AAISS) at Amirkabir University of Technology, 2021 
Perceptual stereo speech enhancement
Presentation, Applied Sciences Group at Microsoft, 2021 
Model-based deep learning
Tutorial, IEEE ICASSP Conference, 2021 
Introduction to deep learning for computational neuroscience
Workshop, Neurosur, 2021 [github] 
Dictionary learning based autoencoders for inverse problems
Talk, Decision Theory (APMTH 231) at Harvard University, 2021 
On the relationship between dictionary learning and sparse autoencoders
Talk, Computational and Applied Math Seminar at Tufts University, 2020 
On the relationship between dictionary learning and sparse autoencoders
Talk, Pierre E. Jacob's Group at Harvard University, 2020 
Multichannel endtoend neural architectures for speech enhancement
Presentation, Amazon AI (AWS), 2019 
Autoencoders for unsupervised source separation
Talk, Decision Theory (APMTH 231) at Harvard University, 2019 
Statespace models and deep deconvolutional networks
Talk, CRISP Group at Harvard University, 2018