First published: 2018/09/05 (1 year ago) Abstract: Deep learning models show remarkable results in automated skin lesion
analysis. However, these models demand considerable amounts of data, while the
availability of annotated skin lesion images is often limited. Data
augmentation can expand the training dataset by transforming input images. In
this work, we investigate the impact of 13 data augmentation scenarios for
melanoma classification trained on three CNNs (Inception-v4, ResNet, and
DenseNet). Scenarios include traditional color and geometric transforms, and
more unusual augmentations such as elastic transforms, random erasing and a
novel augmentation that mixes different lesions. We also explore the use of
data augmentation at test-time and the impact of data augmentation on various
dataset sizes. Our results confirm the importance of data augmentation in both
training and testing and show that it can lead to more performance gains than
obtaining new images. The best scenario results in an AUC of 0.882 for melanoma
classification without using external data, outperforming the top-ranked
submission (0.874) for the ISIC Challenge 2017, which was trained with external data.
_Disclaimer: I'm the first author of this paper._
The code for this paper can be found at https://github.com/fabioperez/skin-data-augmentation.
In this work, we wanted to compare different data augmentation scenarios for skin lesion analysis. We tried 13 scenarios, including commonly used augmentation techniques (color and geometry transformations), unusual ones (random erasing, elastic transformation, and a novel lesion mix to simulate collision lesions), and a combination of those.
Examples of the augmentation scenarios:
a) no augmentation
b) color (saturation, contrast, and brightness)
c) color (saturation, contrast, brightness, and hue)
d) affine (rotation, shear, scaling)
e) random flips
f) random crops
g) random erasing
h) elastic transformation
i) lesion mix
j) basic set (f, d, e, c)
k) basic set + erasing (f, g, d, e, c)
l) basic set + elastic (f, d, h, e, c)
m) basic set + mix (i, f, d, e, c)
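As a minimal illustration of what a "basic set" pipeline looks like (the function names and parameter ranges below are my own simplifications, not the paper's exact hyperparameters, and the affine step is omitted for brevity), scenario j) can be sketched with plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    """e) random flips: horizontal and/or vertical, each with probability 0.5."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]          # vertical flip
    return img

def random_crop(img, size):
    """f) random crops: take a size x size window at a random position."""
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return img[y:y + size, x:x + size]

def color_jitter(img, strength=0.3):
    """c) color: scale brightness by a random factor (simplified; the paper
    also jitters saturation, contrast, and hue)."""
    factor = 1.0 + rng.uniform(-strength, strength)
    return np.clip(img * factor, 0.0, 1.0)

def basic_set(img, crop_size=224):
    # Order follows scenario j): crops, (affine omitted), flips, color.
    img = random_crop(img, crop_size)
    img = random_flip(img)
    return color_jitter(img)

img = rng.random((256, 256, 3))     # dummy RGB lesion image in [0, 1]
aug = basic_set(img)
print(aug.shape)                    # (224, 224, 3)
```

In practice one would chain ready-made transforms from a library such as torchvision rather than hand-rolling them; the point is only that each scenario is a composition of independent random transforms applied per training image.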
We used the ISIC 2017 Challenge dataset (2000 training images, 150 validation images, and 600 test images).
We tried three network architectures: Inception-v4, ResNet-152, and DenseNet-161.
We also compared different test-time data augmentation methods: a) no augmentation; b) 144-crops; c) same data augmentation as training (64 augmented copies of the original image). Final prediction was the average of all augmented predictions.
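Test-time scheme (c) can be sketched as follows; `predict` and `augment` are hypothetical stand-ins for the trained CNN and the training-time transforms:

```python
import numpy as np

rng = np.random.default_rng(42)

def predict(img):
    """Stand-in for a trained CNN's melanoma probability (placeholder only)."""
    return float(img.mean())

def augment(img):
    """Stand-in for the training-time augmentation: here just a random flip."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def tta_predict(img, n_copies=64):
    """Average predictions over n_copies augmented versions of the image."""
    preds = [predict(augment(img)) for _ in range(n_copies)]
    return float(np.mean(preds))

img = rng.random((224, 224, 3))     # dummy input image in [0, 1]
p = tta_predict(img)
```

Averaging over augmented copies smooths out prediction variance from any single view of the lesion, which is why it competes with 144-crop while using fewer forward passes.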
* Basic set (combination of commonly used augmentations) is the best scenario.
* Data augmentation at test-time is very beneficial.
* Elastic is better than no augmentation, but when incorporated into the basic set, it decreases performance.
* The best result was better than the winner of the challenge in 2017, without using ensembling.
* Test-time data augmentation performs very similarly to 144-crop, but uses fewer images per prediction (64 vs 144), so it's faster.
# Impact of data augmentation on dataset sizes
We also applied the basic set scenario to different dataset sizes by sampling random subsets of the original dataset, with sizes 1500, 1000, 500, 250, and 125.
* Using data augmentation can be better than using more data (but you should always use more data since the model can benefit from both). For instance, using 500 images with data augmentation on training and test for Inception is better than training with no data augmentation with 2000 images.
* ResNet and DenseNet work better than Inception with less data.
* Test-time data augmentation is always better than not augmenting on test-time.
* Using data augmentation on train only was worse than not augmenting at all in some cases.
First published: 2018/02/28 (2 years ago) Abstract: What makes humans so good at solving seemingly complex video games? Unlike
computers, humans bring in a great deal of prior knowledge about the world,
enabling efficient decision making. This paper investigates the role of human
priors for solving video games. Given a sample game, we conduct a series of
ablation studies to quantify the importance of various priors on human
performance. We do this by modifying the video game environment to
systematically mask different types of visual information that could be used by
humans as priors. We find that removal of some prior knowledge causes a drastic
degradation in the speed with which human players solve the game, e.g. from 2
minutes to over 20 minutes. Furthermore, our results indicate that general
priors, such as the importance of objects and visual consistency, are critical
for efficient game-play. Videos and the game manipulations are available at
The authors investigated why humans play some video games better than machines. That is the case for games that lack continuous rewards (e.g., scores). They experimented with a game -- inspired by _Montezuma's Revenge_ -- in which the player has to climb stairs, collect keys, and jump over enemies. Since there are no rewards during gameplay, RL algorithms only receive feedback upon finishing the game, so they tend to do much worse than humans in these games.
To compare humans and machines, they set up RL algorithms and recruited players from Amazon Mechanical Turk. Humans did much better than machines for the original game setup. However, the authors wanted to check the impact of semantics and prior knowledge on human performance, so they set up scenarios with different levels of reduced semantic information, as shown in Figure 2.
This is what the game originally looked like:
And this is the version with fewer semantic cues:
You can try yourself in the [paper's website](https://rach0012.github.io/humanRL_website/).
Not surprisingly, humans took much more time to complete the game in scenarios with less semantic information, indicating that humans strongly rely on prior knowledge to play video games.
The authors argue that this prior knowledge should also be somehow included into RL algorithms in order to move their efficiency towards the human level.
## Additional reading
[Why humans learn faster than AI—for now](https://www.technologyreview.com/s/610434/why-humans-learn-faster-than-ai-for-now/).