Lee et al. propose a generative model for obtaining confidence-calibrated classifiers. Neural networks are known to be overconfident in their predictions – not only on examples from the task's data distribution, but also on examples taken from different distributions. The authors propose a GAN-based approach that forces the classifier to produce uniform predictions on examples not drawn from the data distribution. In particular, in addition to the target classifier, a generator and a discriminator are introduced. The generator produces "hard" out-of-distribution examples; ideally, these examples lie close to the in-distribution, i.e., the data distribution of the actual task. The discriminator is intended to distinguish between out- and in-distribution examples. The overall algorithm, including the necessary losses, is given in Algorithm 1. In experiments, the approach is shown to detect out-of-distribution examples nearly perfectly. Examples of the generated "hard" out-of-distribution samples are given in Figure 1.

https://i.imgur.com/NmF0fpN.png

Algorithm 1: The proposed joint training scheme of the out-of-distribution generator $G$, the in-/out-of-distribution discriminator $D$, and the original classifier providing $P_\theta(y|x)$ with parameters $\theta$.

https://i.imgur.com/kAclSQz.png

Figure 1: A comparison of a regular GAN (a and b) to the proposed framework (c and d). Clearly, the proposed approach generates out-of-distribution samples (i.e., no meaningful digits) close to the original data distribution.
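The confidence term that drives the classifier toward uniform predictions on generated samples can be written as a KL divergence between the uniform label distribution and the classifier's predictive distribution. A minimal numpy sketch of this loss structure follows; the function names and the weight `beta` are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits (numerically stabilized)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(probs, labels):
    """Standard cross-entropy on in-distribution examples."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def kl_uniform(probs):
    """KL(U(y) || P_theta(y|x)): zero when predictions are uniform,
    large when the classifier is confident on a sample."""
    k = probs.shape[1]
    u = 1.0 / k
    return (u * (np.log(u) - np.log(probs + 1e-12))).sum(axis=1).mean()

def classifier_loss(logits_in, labels_in, logits_gen, beta=1.0):
    """Illustrative classifier objective in the spirit of Algorithm 1:
    cross-entropy on real data plus a beta-weighted confidence penalty
    on generator samples (beta is a hypothetical hyperparameter name)."""
    return nll(softmax(logits_in), labels_in) + beta * kl_uniform(softmax(logits_gen))
```

The KL term vanishes exactly when the classifier outputs the uniform distribution on the generated out-of-distribution samples, which is the calibration behavior the training scheme targets.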