Regularizing Neural Networks by Penalizing Confident Output Distributions

arXiv e-Print archive - 2017 via Local arXiv

Keywords: cs.NE, cs.LG

How Can We Be So Dense? The Benefits of Using Highly Sparse Representations

arXiv e-Print archive - 2019 via Local Bibsonomy

Keywords: dblp

Improved Techniques for Training GANs

arXiv e-Print archive - 2016 via Local Bibsonomy

Keywords: dblp

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

arXiv e-Print archive - 2019 via Local arXiv

Keywords: cs.LG, stat.ML

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

arXiv e-Print archive - 2019 via Local Bibsonomy

Keywords: dblp

Adding Gradient Noise Improves Learning for Very Deep Networks

arXiv e-Print archive - 2015 via Local arXiv

Keywords: stat.ML, cs.LG

On orthogonality and learning recurrent networks with long term dependencies

arXiv e-Print archive - 2017 via Local arXiv

Keywords: cs.LG, cs.NE
