Bahareh Tolooshams

PhD Candidate, Electrical Engineering
Harvard University

News
  • 11/2022: I submitted an abstract to COSYNE.
  • 03/2022: I received the GSAS Student Council Spring Conference Grant at Harvard University.
  • 02/2022: I joined InTouch, a peer-to-peer support network that builds community and provides support for graduate students.
  • 05/2021: I joined the Applied Sciences Group at Microsoft as a Research Intern.
  • 07/2019: I received an AWS Machine Learning Research Award (MLRA).
  • 04/2019: I joined Amazon AI as an ML/DSP Research Intern.
  • 04/2019: I received the QBio Student Fellowship.
  • 08/2018: I received the QBio Student Award Competition Fellowship.
  • 07/2018: I received a travel grant for MLSP 2018.
  • 09/2017: I started my graduate studies in the CRISP Group at Harvard University.
  • 06/2017: I received my B.ASc. degree with distinction in Electrical Engineering from the University of Waterloo.

I am a PhD candidate at Harvard University, advised by Demba Ba. I received my B.ASc. degree with distinction in Electrical Engineering from the University of Waterloo in 2017. I spent the summer of 2019 at Amazon AI as an Applied Scientist Intern and the summer of 2021 at the Applied Sciences Group at Microsoft as a Research Intern. I am the recipient of awards including the Machine Learning Research Award from AWS and the QBio Student Fellowship from Harvard University.

Mentorship and community building: I actively mentor Harvard College students through the Women in STEM Mentorship program. I am also a mentor at InTouch, a peer-to-peer support network that builds community and provides support for graduate students.

Research

My research interests are at the intersection of machine learning, optimization, statistical learning, and computational neuroscience.

My PhD research is divided into two tracks:

  • Interpretable deep learning: I am interested in a class of machine learning algorithms referred to as unrolled learning, which constructs deep, interpretable, and efficient inference models by unrolling iterative optimization algorithms or generative models into neural networks (see the sketch after this list). For unrolled sparse-coding networks, we can derive a mathematical relation between the network's weights, its predictions, and the training data; this relation helps to reason about the representation of a new test example and to extract similar or dissimilar data from the training set. Check out this work for the interpretability of deep unrolled networks as well as their theoretical properties for model recovery. Moreover, I have demonstrated the efficiency of such architectures, and performance competitive with the state of the art, on the supervised task of Poisson image denoising in my recent ICML publication.
  • Computational neuroscience: I develop interpretable networks to learn the features driving neural activity. The networks are based on unrolled learning, which allows their application to data-limited regimes such as single neurons. We studied piriform cortex neurons in this abstract and showcased a sparse deconvolutional method for the analysis of dopaminergic neurons in this abstract. For this track, I closely collaborate with Paul Masset from the Murthy Lab.
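
As a concrete illustration of the first track, below is a minimal sketch of an unrolled sparse-coding autoencoder in PyTorch: the encoder unrolls a fixed number of ISTA iterations for a LASSO objective, the decoder reconstructs with the same dictionary, and the dictionary is learned by backpropagating through the unrolled iterations. The layer count, dimensions, step size, and plain training loop are illustrative assumptions, not the exact architecture of any paper listed below.

import torch
import torch.nn.functional as F

class UnrolledSparseCoder(torch.nn.Module):
    """Sparse-coding autoencoder that unrolls ISTA iterations (LISTA-style sketch)."""

    def __init__(self, data_dim=64, code_dim=128, num_layers=10, step=0.1, lam=0.1):
        super().__init__()
        # One dictionary shared by the encoder iterations and the decoder.
        self.W = torch.nn.Parameter(torch.randn(data_dim, code_dim) / data_dim ** 0.5)
        self.num_layers, self.step, self.lam = num_layers, step, lam

    def forward(self, y):
        # Encoder: num_layers unrolled ISTA steps for
        #   min_x 0.5 * ||y - W x||^2 + lam * ||x||_1
        x = torch.zeros(y.shape[0], self.W.shape[1], device=y.device)
        for _ in range(self.num_layers):
            grad = (x @ self.W.T - y) @ self.W  # gradient of the quadratic term
            x = F.softshrink(x - self.step * grad, self.step * self.lam)  # soft-thresholding (proximal) step
        # Decoder: linear reconstruction with the same dictionary.
        return x @ self.W.T, x

model = UnrolledSparseCoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
y = torch.randn(32, 64)  # a toy batch of observations
for _ in range(100):  # dictionary learning by backprop through the unrolled encoder
    y_hat, _ = model(y)
    loss = F.mse_loss(y_hat, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because the encoder and decoder share the dictionary W, the trained weights keep the meaning of a generative model, which is the sense in which such networks remain interpretable.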

During my two fantastic research internships, I worked on speech enhancement. Here, I proposed channel attention to improve multichannel speech enhancement. In another publication, joint work with Kazuhito Koishida from Microsoft, I proposed a training framework for the perceptual enhancement of stereo speech.
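
For intuition, here is a minimal, generic channel-attention block in PyTorch in the squeeze-and-excitation style: it summarizes each microphone channel of a multichannel spectrogram with a global statistic and reweights the channels accordingly. This is a sketch of the general idea only; the shapes, pooling choice, and module layout are my assumptions, not the exact mechanism of the ICASSP 2020 paper.

import torch

class ChannelAttention(torch.nn.Module):
    def __init__(self, num_channels, reduction=2):
        super().__init__()
        hidden = max(num_channels // reduction, 1)
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(num_channels, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, num_channels),
            torch.nn.Sigmoid(),
        )

    def forward(self, spec):
        # spec: (batch, channels, freq, time) magnitude spectrograms.
        weights = self.fc(spec.mean(dim=(2, 3)))  # squeeze: one statistic per channel
        return spec * weights[:, :, None, None]   # excite: reweight each microphone channel

attention = ChannelAttention(num_channels=4)
x = torch.randn(8, 4, 257, 100).abs()  # toy 4-microphone STFT magnitudes
print(attention(x).shape)              # torch.Size([8, 4, 257, 100])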

Publications
Journal and Conference Proceedings

  1. Learning filter-based compressed blind-deconvolution
    B. Tolooshams*, S. Mulleti*, D. Ba, and Y. C. Eldar
    Submitted to IEEE Transactions on Signal Processing 2022 [paper]
  2. Discriminative reconstruction via simultaneous dense and sparse coding
    A. Tasissa, E. Theodosis, B. Tolooshams, and D. Ba
    Submitted to Information and Inference: A Journal of the IMA 2022 [paper] [code]
  3. Stable and interpretable unrolled dictionary learning
    B. Tolooshams and D. Ba
    TMLR 2022 [paper] [code]
  4. A training framework for stereo-aware speech enhancement using deep neural networks
    B. Tolooshams and K. Koishida
    IEEE ICASSP 2022 [paper] [slides] [poster]
  5. On the convergence of group-sparse autoencoders
    E. Theodosis, B. Tolooshams*, P. Tankala*, A. Tasissa, and D. Ba
    arXiv 2021 [paper]
  6. Gaussian process convolutional dictionary learning
    A. H. Song, B. Tolooshams, and D. Ba
    IEEE Signal Processing Letters 2021 [paper]
  7. Unfolding neural networks for compressive multichannel blind deconvolution
    B. Tolooshams*, S. Mulleti*, D. Ba, and Y. C. Eldar
    IEEE ICASSP 2021 [paper] [slides] [poster]
  8. Deep residual autoencoders for expectation maximization-inspired dictionary learning
    B. Tolooshams, S. Dey, and D. Ba
    IEEE Transactions on Neural Networks and Learning Systems 2021 [paper] [code]
  9. Convolutional dictionary learning based auto-encoders for natural exponential-family distributions
    B. Tolooshams*, A. H. Song*, S. Temereanca, and D. Ba
    ICML 2020 [paper] [code] [slides]
  10. Channel-attention dense U-Net for multichannel speech enhancement
    B. Tolooshams, R. Giri, A. H. Song, U. Isik, and A. Krishnaswamy
    IEEE ICASSP 2020 [paper]
  11. Convolutional dictionary learning in hierarchical networks
    J. Zazo, B. Tolooshams, and D. Ba
    IEEE CAMSAP 2019 [paper]
  12. RandNet: deep learning with compressed measurements of images
    T. Chang*, B. Tolooshams*, and D. Ba
    IEEE MLSP 2019 [paper] [code] [poster]
  13. Scalable convolutional dictionary learning with constrained recurrent sparse auto-encoders
    B. Tolooshams, S. Dey, and D. Ba
    IEEE MLSP 2018 [paper] [code]
  14. Robustness of frequency division technique for online myoelectric pattern recognition against contraction-level variation
    B. Tolooshams and N. Jiang
    Frontiers in Bioengineering and Biotechnology 2017 [paper]
Abstracts

  1. Interpretable unrolled dictionary learning networks
    B. Tolooshams and D. Ba
    DeepMath 2022 [paper] [slides]
  2. Unsupervised sparse deconvolutional learning of features driving neural activity
    B. Tolooshams, H. Wu, N. Uchida, V. N. Murthy, P. Masset, and D. Ba
    COSYNE 2022 [paper] [poster]
  3. Unsupervised learning of a dictionary of neural impulse responses from spiking data
    B. Tolooshams, H. Wu, P. Masset, V. N. Murthy, and D. Ba
    COSYNE 2021 [paper] [poster]
  4. Convolutional dictionary learning of stimulus from spiking data
    A. H. Song*, B. Tolooshams*, S. Temereanca, and D. Ba
    COSYNE 2020 [paper] [poster]
Talks & Workshops
  1. Interpretable unrolled dictionary learning networks
    Talk, Conference on the Mathematical Theory of Deep Neural Networks (DeepMath), 2022
  2. Design interpretable and efficient neural architectures for science and engineering
    Talk, Anima AI + Science Lab at Caltech, 2022
  3. Deconvolution of multiplexed neural signals using interpretable deep learning
    Talk, Uchida Lab at Harvard University, 2022
  4. Deep unrolled learning using bilevel optimizations
    Talk, Vector Institute, 2022
  5. Deep unrolling for inverse problems
    Talk, Poggio Lab at MIT, 2021
  6. Unfolded neural networks for implicit acceleration of dictionary learning
    Talk, Amirkabir Artificial Intelligence Student Summit (AAISS) at Amirkabir University of Technology, 2021
  7. Perceptual stereo speech enhancement
    Presentation, Applied Sciences Group at Microsoft, 2021
  8. Model-based deep learning
    Tutorial, IEEE ICASSP Conference, 2021
  9. Introduction to deep learning for computational neuroscience
    Workshop, Neurosur, 2021 [github]
  10. Dictionary learning based autoencoders for inverse problems
    Talk, Decision Theory - APMTH 231 at Harvard University, 2021
  11. On the relationship between dictionary learning and sparse autoencoders
    Talk, Computational and Applied Math Seminar at Tufts University, 2020
  12. On the relationship between dictionary learning and sparse autoencoders
    Talk, Pierre E. Jacob's Group at Harvard University, 2020
  13. Multichannel end-to-end neural architectures for speech enhancement
    Presentation, Amazon AI - AWS, 2019
  14. Autoencoders for unsupervised source separation
    Talk, Decision Theory - APMTH 231 at Harvard University, 2019
  15. State-space models and deep deconvolutional networks
    Talk, CRISP Group at Harvard University, 2018