Hanna Tseran | Machine Learning Researcher

Memory augmented neural network with Gaussian embeddings for one-shot learning.

November 1, 2017 · Hanna Tseran, Tatsuya Harada

Type: Conference paper
Last updated on November 1, 2017


© 2025 Hanna Tseran. This work is licensed under CC BY NC ND 4.0
