Hanna Tseran | Machine Learning Researcher
Natural Variational Continual Learning.

November 1, 2018
Hanna Tseran, Mohammad Emtiyaz Khan, Tatsuya Harada, Thang D. Bui
Type: Conference paper
Last updated on November 1, 2018
Author: Hanna Tseran, Postdoctoral Researcher

