Hanna Tseran | Machine Learning Researcher

Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape.

June 1, 2024 · Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar
arXiv
Type: Paper-Journal
Last updated on June 1, 2024
Authors
Hanna Tseran, Postdoctoral Researcher


© 2025 Hanna Tseran. This work is licensed under CC BY-NC-ND 4.0.

Published with Hugo Blox Builder — the free, open source website builder that empowers creators.